Why does AI require human action to moderate?

Your brand’s pages require special attention. Keeping these spaces of dialogue, exchange, and customer relations safe and respectful is, of course, a priority for your image. Part of this vast clean-up of inappropriate comments is in the hands of the platforms and their AI, which enforce compliance with their charters. But these algorithms are far from infallible, and their effectiveness often falls short of the added value of human moderation.

| Covid-19 as a witness to the limits of AI

The pandemic has raised many issues since it began, even in unexpected sectors such as content moderation. Engagement on social media has increased by 61% since March 2020, and with it, the need for quality moderation.

During the health crisis, many moderators working for Facebook, Twitter, or YouTube were sent home. Since their work could not be done remotely for security reasons, the role of artificial intelligence software was expanded. The result? At YouTube, eleven million videos were deleted between April and June, twice as many as usual.

After human review of the machine’s decisions, more than half of those videos were put back online; usually, that figure is one in four. As for Mark Zuckerberg’s platform and the blue bird, a boycott call by several brands in the summer of 2020 denounced their uneven moderation.

AI does too much or not enough. And at a time when the social media giants declare their intention to rely on it ever more heavily, it is clear that humans remain an indispensable cog in the machine.

| The irreplaceable subtlety of human understanding

Insults, discrimination, intimidation, misinformation, harassment: these are complex subjects whose manifestations are so diverse that it seems impossible for a machine to grasp them in their entirety. For Réda Aboulkacem, CTO of Netino: “AI as it currently stands, even the most advanced, does not allow for the subtlety of a human when moderating.” Where does information end and debate begin? Where does debate turn into insult? Only a human can read between the lines.

Although AI relies on information provided by flesh-and-blood operators via deep learning (the technology that allows an algorithm to learn by repeating a task), internet language codes evolve much faster than its ability to assimilate them. “AI will not detect all possible spelling mistakes in an insult,” says Réda Aboulkacem, for example. Similarly, “a human can see that an expression has been diverted from its original meaning with the aim of causing harm. They will flag a new trolling trend faster than AI can be trained to recognize it.”
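The spelling-mistake problem can be illustrated with a minimal sketch. The blocklist, substitution table, and function names below are purely hypothetical examples, not Netino’s or any platform’s actual system: a filter that matches words verbatim misses a deliberately obfuscated insult, while even simple character normalization catches more, though never all, variants.

```python
import re

# Illustrative blocklist; real systems use far larger, curated lists.
BLOCKLIST = {"idiot"}

def naive_filter(comment: str) -> bool:
    """Flag a comment only if it contains a blocklisted word verbatim."""
    words = re.findall(r"\w+", comment.lower())
    return any(word in BLOCKLIST for word in words)

# Common character substitutions used to dodge filters (leetspeak).
SUBSTITUTIONS = str.maketrans("1!3@0$5", "iieaoss")

def normalized_filter(comment: str) -> bool:
    """Undo simple character swaps before checking the blocklist."""
    cleaned = comment.lower().translate(SUBSTITUTIONS)
    words = re.findall(r"\w+", cleaned)
    return any(word in BLOCKLIST for word in words)

print(naive_filter("what an 1d1ot"))       # misses the disguised insult
print(normalized_filter("what an 1d1ot"))  # catches it after normalization
```

The arms race favors humans here: a new obfuscation trick works the moment it is invented, while the filter only catches it after someone notices it and updates the rules.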

Not to mention that putting analyzed terms in context, or detecting irony and double meaning, remain challenges for these technologies. “AI will probably never be able to detect a specific case, such as the nuance between a heated debate between internet users that may warrant intervention and an aggressive opinion about a public figure that falls under freedom of expression.”

Can we imagine a world where AI is the only moderator of the web? Perhaps. But is it desirable? Do we want to leave our spaces for free expression to algorithms? At Netino, technology and humans work together. If AI identifies potentially problematic content, only a human has the sensitivity and cultural awareness to deliver the final verdict.
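The division of labor described above can be sketched as a triage step. This is a hypothetical illustration, not Netino’s actual pipeline: an upstream classifier (assumed here) assigns each comment a risk score, low-risk comments pass through automatically, and only flagged ones are queued for a human reviewer’s final call.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    risk_score: float  # assumed output of an upstream AI classifier, 0.0-1.0

FLAG_THRESHOLD = 0.6  # hypothetical cutoff for escalating to a human

def triage(comments: list[Comment]) -> tuple[list[Comment], list[Comment]]:
    """Split comments into an auto-approved list and a human-review queue."""
    auto_approved = [c for c in comments if c.risk_score < FLAG_THRESHOLD]
    needs_review = [c for c in comments if c.risk_score >= FLAG_THRESHOLD]
    return auto_approved, needs_review

queue = [
    Comment("Great product, thanks!", 0.05),
    Comment("This brand is a scam, everyone involved is a crook", 0.82),
]
approved, flagged = triage(queue)
print(len(approved), len(flagged))  # → 1 1
```

The design point is that the machine never removes content on its own: it only narrows the stream so human attention is spent where judgment is actually needed.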

| Moderation, a matter of optimizing customer relations

Allowing your community to speak is essential, while also controlling the excesses of the few internet users who could damage your image. A genuine effort to create dialogue with your brand’s community is one of the keys to building trust with customers. “We prioritize quality above all else, and this involves having a thoughtful moderation policy that allows for open conversation,” explains Réda Aboulkacem. But accepting or deleting a comment is one thing. Adding value through moderation is a completely different challenge, one where even the best bots have their limits.

A human moderator serves not only as a referee in exchanges between your subscribers, but also as a liaison between the consumer and your company. They can identify relevant remarks about your products and bring them to your attention, enabling you to act as quickly as possible and head off bad publicity, unlike companies that have weathered a crisis due to a lack of moderation.
