AI at the Service of Online Security: Going Beyond Trends

With the recent explosion of AI tools such as ChatGPT, DALL-E, or Midjourney, it was high time we talked about them! Let’s start by quickly reviewing the clichés that have been invading our LinkedIn feeds for several months now. Yes, text prediction models, image generation, and even voice generation are impressive, sometimes even frightening. Certainly, their use raises numerous ethical issues, and they will no doubt contribute to profoundly transforming our relationship with work. However, these tools are not new, and they can be particularly welcome in certain fields, especially one we know well: content moderation!

👉 Between May 2022 and May 2023 alone, out of 188,566,955 pieces of content posted on social networks and analyzed by Netino by Webhelp, over 112 million were processed without human intervention. Across everything analyzed, the algorithm flagged 13% as calls for violence, insults, scams, or discrimination: a total of 25 million highly or very highly toxic pieces of content that internet users and moderators were never exposed to.

The help of artificial intelligence is undoubtedly welcome when handling such volumes of posts, and it is particularly effective during sudden spikes in the number of comments or around targeted events. With an algorithm capable of analyzing thousands of pieces of content in fractions of a second, scalability is not an issue. The Christchurch terrorist attack, live-streamed on Facebook in 2019, provides a good example. It took 17 minutes for the video of a white supremacist cold-bloodedly murdering 51 people in two mosques to be removed from the platform. The livestream itself was viewed “just” 4,000 times, but the next day, 1.5 million videos related to the attack were published. It must be acknowledged that algorithms proved quite useful in the titanic task of detecting reposted clips and screenshots. However, this does not mean that non-human moderation alone is sufficient, or even desirable.
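Facebook’s own matching systems are proprietary, but a common technique behind this kind of repost detection is perceptual hashing: a compact fingerprint of an image that stays stable under re-encoding, resizing, or screenshotting. Here is a minimal sketch in Python, assuming the Pillow library; the function names, the 8×8 dHash variant, and the distance threshold of 10 are illustrative choices, not a description of any platform’s actual pipeline.

```python
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: compares neighbouring pixels of a tiny grayscale copy."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            i = row * (hash_size + 1) + col
            # One bit per pixel pair: is the left pixel brighter than the right?
            bits = (bits << 1) | (px[i] > px[i + 1])
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_probable_repost(path: str, known_hashes: set[int], max_distance: int = 10) -> bool:
    """Flag an upload whose fingerprint is close to that of already-removed media."""
    h = dhash(path)
    return any(hamming(h, known) <= max_distance for known in known_hashes)
```

Because the comparison is a cheap bitwise operation, millions of new uploads can be screened against a blocklist of removed media without a human ever viewing them.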

| Moderation is not just automatic

Algorithms can handle content that blatantly violates platform rules, but in more complex cases, they are far less effective.

More nuanced, ironic, or sarcastic posts, including those that denounce shocking content, require human analysis, as Arun Chandra, then vice president of scaled operations at Facebook, emphasized in 2019: “Some remarks, for example, can be used to target other users, but they can also be made in jest, to highlight the sectarianism a user may face, or to quote elements from popular culture.” Combined with human judgment, artificial intelligence becomes a tool for more nuanced moderation and safer digital spaces.
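To make this division of labor concrete, here is a minimal sketch in Python of how a hybrid pipeline might route content: clear-cut model scores are handled automatically, while the ambiguous middle band is escalated to a human moderator. The thresholds, names, and single toxicity score are hypothetical illustrations, not Netino by Webhelp’s actual process.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    AUTO_REMOVE = auto()   # blatant violation, removed with no human exposure
    AUTO_APPROVE = auto()  # clearly benign, published immediately
    HUMAN_REVIEW = auto()  # irony, quotes, context: a moderator decides

@dataclass
class ModerationResult:
    toxicity: float        # model score in [0, 1] (hypothetical single score)
    decision: Decision

# Illustrative thresholds; real systems tune these per language and platform.
REMOVE_THRESHOLD = 0.95
APPROVE_THRESHOLD = 0.20

def route(toxicity: float) -> ModerationResult:
    """Send only the ambiguous middle band to human moderators."""
    if toxicity >= REMOVE_THRESHOLD:
        return ModerationResult(toxicity, Decision.AUTO_REMOVE)
    if toxicity <= APPROVE_THRESHOLD:
        return ModerationResult(toxicity, Decision.AUTO_APPROVE)
    return ModerationResult(toxicity, Decision.HUMAN_REVIEW)

# Example: a sarcastic quote scoring 0.6 lands with a human, not the algorithm.
print(route(0.6).decision)  # Decision.HUMAN_REVIEW
```

The design choice is the point: the machine absorbs the unambiguous volume, and human attention is reserved for the cases where context actually matters.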

| Will everyone soon be replaced?

One of the (many) reasons to fear these tools is the massive job destruction they are supposed to cause: an understandable fear, but one that certainly needs nuance. This is the position of sociologist Patricia Vendramin, a specialist in labor trends recently interviewed by Libération, who questions “alarmist predictions” that “unnecessarily worry” and have often been proven false in the past. She also denounces their deterministic nature and rightly emphasizes that it is above all the legal framework that will determine the real impact of these new tools.

Hybrid moderation illustrates how artificial intelligence accompanies and complements the work of the people who moderate our digital spaces. Rather than resigning ourselves to being replaced, we can see AI as a way to shed repetitive tasks, improve working conditions, and free up time, thereby countering the “bullshittization” of jobs and an increasingly widespread loss of meaning at work.


If you want to learn more about our moderation process and why we rely on a hybrid concept, download our new white paper now!

Feel free to share this article!
"AI at the Service of Online Security: Going Beyond Trends"