Can education, prevention, and moderation go hand in hand?

Netino by Webhelp is recognized for its expertise in content moderation, but we want to go further! Moderation has become necessary to protect every internet user. But why does other users' content put us at risk in the first place? And can we prevent rather than cure?

This week we asked Cédric, our Head of Content Management & Moderation, a few questions about this.

| Today, cyberbullying and online hate are unfortunately recurring topics in the news. But do you think this phenomenon has genuinely increased over the past year?

In 2022, the 53 million French social media users spent an average of 1 hour and 46 minutes per day on social media and used 5 different platforms per month. From a probabilistic angle, we might be tempted to say that the more time an individual spends on social media, the more they interact with other users and the more personal content they share, the more they inevitably increase their risk of becoming a target of cyberbullying.

A very interesting 2021 study by the e-Enfance association reported that 20% of young people have already experienced cyberbullying. If a high school class averages 30 students, that means 6 of them have already been harassed, to varying degrees, on the internet. Compared with previous years, this percentage has risen, particularly among 15-17 year olds.

| Moderation is now a common practice on social media, but do we know when it actually comes into effect?

From the perspective of many users, the moderation rules of major social media platforms remain unclear, and the procedures for reporting abuse can leave the impression of being ineffective. The severity of moderation can also vary from one platform to another, which is an additional source of confusion about "what is allowed and what is not" on the internet.

Aside from platform-specific rules (such as those on nudity in works of art), platforms essentially intervene in cases of serious violations of the law of the country concerned, or when community reports are numerous and concentrated within a relatively short period of time.

Platforms are far more helpless when dealing with issues such as harassment spread insidiously over time, or denigration couched in a sarcastic, non-obscene register.

| Do you think it would be possible to educate internet users on these issues, for example through school courses, much like civic education classes?

Given the importance of social networks and messaging platforms in young people's daily lives, it is becoming difficult not to address the issue in the classroom. 89% of 15-21 year olds use Instagram, and 63% of 16-24 year olds check TikTok daily. This is no marginal habit; in concrete terms, virtually every student is affected.

Initiatives in schools are multiplying, although they often take the form of one-off events run by outside speakers, or brief asides made by teachers during class.

Of course, middle and high schools are not meant to replace their core curriculum with tutorials on the proper use of social media. However, regular workshops, quarterly for example, can be valuable for teaching students responsible usage, informing them of the legal risks they run if they cross the line, and helping them build critical thinking skills about the reliability of the content they consult online.

On this last point, I would cite the very telling 2021 Kantar barometer, in which 46% of young people said they consider TikTok a full-fledged source of news…

| Generally, we have the impression that it is the younger generation that suffers most from this online aggression. So how can we better prepare them for using social networks?

In my opinion, the answer to this question has two parts.

First, prevention, particularly around behaviors that put teenagers at risk: sharing personal information, posting numerous photos and videos of themselves, and leaving privacy settings poorly configured.

Before posting any content, users (whatever their age) should run through a mental checklist. Do I want my photos and videos to be potentially viewed by hundreds of thousands of strangers? Could they be saved and kept for years by people with questionable intentions, even if I delete them in the meantime? Will what I show or say today still be something I want to show or say in 10 years?

Second, sanctions and their escalation. One of the principles of education is that learning occurs through repetition. Deleting a user's social media account is often a belated response to serious behavior that is usually the end point of a longer drift. Moreover, permanently banning an account usually just encourages the user to create a new alias immediately afterward. By contrast, warnings and micro-sanctions (such as mutes or bans for short periods, from a few hours to a few days, accompanied by a non-generic explanation) can have a virtuous effect, but only if they are swift and systematic from the first signs of misconduct. Applied only sporadically, they lose any positive effect. They are, in essence, extensions of practices that are quite common in well-moderated chat-style spaces.
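To make the escalation idea concrete, here is a minimal sketch of how such a graduated ladder of measures might be encoded. Everything here is illustrative: the step names, durations, and data structures are assumptions for the example, not the rules of any real platform.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical escalation ladder: each repeated offense triggers the next,
# stronger measure. Step names and durations are illustrative only.
ESCALATION_LADDER = [
    ("warning", timedelta(0)),         # personalised, non-generic explanation
    ("mute", timedelta(hours=6)),      # short cooling-off period
    ("mute", timedelta(days=1)),
    ("temp_ban", timedelta(days=3)),
    ("permanent_ban", None),           # last resort only
]

@dataclass
class UserRecord:
    user_id: str
    offense_count: int = 0

def apply_sanction(user: UserRecord, reason: str) -> str:
    """Return the next measure on the ladder for this user.

    The properties the interview insists on: the response is immediate
    (triggered at the first signs of misconduct), systematic (every
    offense advances the ladder), and explained (a non-generic reason).
    """
    step = min(user.offense_count, len(ESCALATION_LADDER) - 1)
    measure, duration = ESCALATION_LADDER[step]
    user.offense_count += 1
    until = ""
    if duration:  # timedelta(0) and None both mean "nothing to expire"
        until = f" until {datetime.utcnow() + duration:%Y-%m-%d %H:%M}"
    return f"{measure}{until} -- reason: {reason}"

alice = UserRecord("alice")
print(apply_sanction(alice, "insulting reply in a thread"))   # -> warning
print(apply_sanction(alice, "repeat insult after warning"))   # -> 6h mute
```

The point of the structure is that repetition, not severity alone, drives the response: the same user meets a slightly firmer measure each time, always with an explanation attached.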

The challenge with social media lies in the sheer volume of contributions to verify. Only AI-assisted moderation can scan all content and react within seconds to decide on and apply a pedagogical measure proportional to the offense. Of course, humans still have a place in such a system, particularly for after-the-fact review: checking the machine's verdicts, refining them, and thereby enriching the AI models. A mixed human/machine approach, in which humans review content the AI has flagged as "at risk", also makes sense where moderation needs to be more qualitative and individualized.
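As an illustration only, the routing logic of such a hybrid pipeline might look like the sketch below. The scoring function and both thresholds are hypothetical stand-ins for a real classifier and real platform policy.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    text: str

def triage(post: Post, score_fn: Callable[[str], float],
           auto_threshold: float = 0.95, review_threshold: float = 0.6) -> str:
    """Route a post using a model's risk score (0 = harmless, 1 = abusive).

    - Very high scores are actioned automatically, within seconds.
    - Mid-range "at risk" scores go to a human review queue; the human
      verdict can later be fed back to refine the model.
    - Low scores are published untouched.
    Both thresholds are illustrative assumptions, not real platform values.
    """
    score = score_fn(post.text)
    if score >= auto_threshold:
        return "auto_sanction"        # the machine decides and applies a measure
    if score >= review_threshold:
        return "human_review_queue"   # qualitative, individualized handling
    return "publish"

# Toy scorer standing in for a real classifier.
def toy_score(text: str) -> float:
    return 0.99 if "idiot" in text.lower() else 0.1

print(triage(Post("1", "What a lovely photo!"), toy_score))  # -> publish
print(triage(Post("2", "You absolute idiot."), toy_score))   # -> auto_sanction
```

The design choice here mirrors the interview: automation handles scale and speed, while the middle band of ambiguous content is reserved for human judgment, whose decisions then feed model improvement.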

| Public authorities generally try to legislate on these issues, but is that really the only way, and does it resonate with the general public?

The upcoming European DSA (Digital Services Act), expected to come into effect in 2023, is a step in the right direction and should reinforce national initiatives such as France's Avia law. More broadly, given the importance of social media as the new agora, moderation can no longer be a secondary brick in the construction of these shared spaces; it must sit at the heart of their design.

And, though we tend to forget it, the family unit also plays a crucial role in preventing risky behavior. The same e-Enfance study mentioned above reports that 83% of parents surveyed admit they do not know exactly what their child does on social media. It is reasonable to expect some oversight from the state and the platforms, but it is also up to parents not to abdicate their responsibilities of education and supervision. Wanting to know where a 14-year-old goes when they leave the house is not incongruous; wanting to know how they use tools that connect them to millions of other people should be no different.


If your company wants to protect its communities and communication spaces, do not hesitate to contact us to work out together how moderation could become your faithful ally.
