Automated online content moderation

Content moderation has become a mandatory step for any brand or company that wants to carry out an effective communication strategy. To control their image on the web, brands and companies no longer have a choice today: they must secure their social spaces. Interactions on the web keep increasing, with engagement on social networks up 61% since March 2020, so brands and companies must implement an effective moderation policy to preserve their e-reputation.

| What is content moderation?

Content moderation consists of managing contributions on a participatory site (such as a blog or forum) or a social network. Internet users increasingly use media and social networks to express an opinion or to share a positive or negative experience.

In a context where brands and companies seek the strongest possible online presence, the number of interactions can be very high. On the Internet, everything moves fast, and a single slanderous comment is enough to tarnish your e-reputation.

| Why is it so important to moderate content?

Thanks to the User Generated Content (UGC) left by Internet users, websites, forums, and social networks have become genuine communication channels that allow brands and companies to communicate with their audience, exchange with them, and understand users' opinions of a brand's products or services, for example.

This UGC can be textual content as well as photographs, videos, links, or audio. An effective moderation service is essential because, with the growing number of interactions on the web, the volumes of data to analyze can be gigantic. Unfortunately, some contributions to these exchange spaces are purely malicious, the work of Internet users who seek to harm the reputation of a brand or company.

This is where the moderator comes in, responsible for controlling and filtering published content: analyzing and deleting UGC that does not comply with the rules pre-established in your charter. We cannot directly control the behavior of deviant Internet users (trolls, insults, fake news, fake profiles, illicit content…), but our in-house tool, Modératus, lets us respond to and moderate millions of messages per month and react as quickly as possible thanks to real-time alerting whenever sensitive content is detected.
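
To make this concrete, here is a minimal sketch of charter-based filtering with real-time alerting. Modératus is a proprietary tool and its internals are not public, so the rule names, vocabulary, and alert callback below are illustrative assumptions, not the actual system:

```python
# Illustrative sketch only: Modératus is proprietary and its internals are not
# public. This shows the general pattern of charter-based filtering with
# real-time alerting on sensitive content; rule names and vocabulary are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CharterRule:
    name: str                       # hypothetical rule identifier, e.g. "no_insults"
    matches: Callable[[str], bool]  # True if the comment violates the rule
    sensitive: bool = False         # sensitive violations trigger a real-time alert

def moderate(comment: str, rules: list[CharterRule],
             alert: Callable[[str, str], None]) -> str:
    """Return 'accept' or 'reject'; fire an alert on sensitive violations."""
    for rule in rules:
        if rule.matches(comment):
            if rule.sensitive:
                alert(rule.name, comment)  # e.g. notify the on-duty moderator
            return "reject"
    return "accept"

# Minimal usage with a single keyword rule:
rules = [CharterRule("no_insults", lambda c: "idiot" in c.lower(), sensitive=True)]
print(moderate("What an idiot!", rules,
               alert=lambda name, c: print(f"ALERT [{name}]: {c}")))  # -> reject
```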

At the same time, content moderation allows a brand or company to collect and analyze all of the positive and negative opinions published by customers or prospects, a real goldmine of information. By analyzing consumer sentiment, brands and companies gain notable insights that let them adjust the actions they take next.
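
As a rough illustration of what such analysis can look like, here is a hedged sketch that classifies each comment's sentiment and aggregates the results. The word lists are invented placeholders; a real deployment would rely on a trained classifier:

```python
# A hedged sketch of sentiment aggregation over moderated UGC. A real deployment
# would use a trained classifier; the word lists below are invented placeholders.
from collections import Counter

POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"awful", "broken", "disappointed"}

def sentiment(comment: str) -> str:
    words = {w.strip("!?.,") for w in comment.lower().split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

comments = ["Love this product!", "Awful delivery, disappointed.", "It works."]
print(Counter(sentiment(c) for c in comments))
# -> Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```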

| Why is our AI so powerful?

Since 2002, we have been involved in the community spaces of the largest media, print, and audiovisual press players. As of January 1, 2022, more than 98 French, Belgian, Spanish, and Swiss media outlets trust us to moderate and secure their dialog spaces on the web.

Content moderation can be done at two distinct levels: either through an agent, "the moderator", or in an automated way. Depending on the volumes of data to be processed, hybridizing content moderation (Human & Artificial Intelligence) is clearly advantageous. This automation is made possible by AI designed to identify hateful content and insults and to immediately delete all problematic content.
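
This hybrid routing can be made explicit with a minimal sketch. Assuming the AI exposes a toxicity score between 0 and 1 (the scoring stub and thresholds below are our assumptions, not Netino's actual model), high-confidence verdicts are automated while the uncertain band is escalated to a human:

```python
# A minimal sketch of the hybrid (Human & AI) routing described above, assuming
# the AI exposes a toxicity score between 0 and 1. The scoring function and the
# thresholds are illustrative assumptions, not Netino's actual model.

def toxicity_score(comment: str) -> float:
    """Stand-in for a real ML model returning a toxicity probability."""
    insults = {"idiot", "stupid"}  # invented vocabulary
    words = [w.strip("!?.,") for w in comment.lower().split()]
    return 1.0 if any(w in insults for w in words) else 0.0

def route(comment: str, accept_below: float = 0.2, reject_above: float = 0.8) -> str:
    """Automate high-confidence verdicts; escalate the uncertain band to a human."""
    score = toxicity_score(comment)
    if score <= accept_below:
        return "accept"    # AI decides alone
    if score >= reject_above:
        return "reject"    # AI deletes immediately
    return "escalate"      # queued for a human moderator

print(route("Great article, thanks!"))  # -> accept
print(route("You absolute idiot"))      # -> reject
```

With a real model producing intermediate scores, the middle band between the two thresholds is exactly where human moderators add the most value.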

As an illustration, at Netino by Webhelp, no fewer than 130 million verbatims were processed in 2021 in the "Media" sector alone. Of this mass, 49 million verbatims were accepted or rejected by our AI. Looking closer, more than 11 million toxic comments were ultimately rejected by our AI, and more than 17 million toxic comments were rejected by a human moderator.

Our moderation solution relies on a mesh of algorithms that draws both on pure Machine Learning and Natural Language Processing and on corpora of terms and phrase fragments detected by our system and refined and enriched by our moderators. An artificial intelligence, however, can hardly grasp every subtlety of human language; certain phrases or groups of words no longer mean the same thing when processed without context. This is why we practice daily quality picking: our human moderators ensure the coherence of the verdicts issued and constantly update the AI to keep it relevant in the face of new expressions and emerging topics.
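
A short sketch can illustrate this layering. Everything in it (the fragment corpus, the threshold, the model stub) is an assumption made for illustration; it shows only the general shape of a corpus layer combined with a statistical layer and a human feedback loop:

```python
# A sketch of the "mesh" idea: a moderator-curated corpus of terms/phrase
# fragments layered with a statistical NLP score. Fragment contents, the
# threshold, and the model stub are all assumptions made for illustration.

TOXIC_FRAGMENTS = {"shut up", "go back to"}  # enriched daily by moderators

def fragment_hit(comment: str) -> bool:
    """Corpus layer: cheap, precise, and directly editable by humans."""
    c = comment.lower()
    return any(frag in c for frag in TOXIC_FRAGMENTS)

def ml_score(comment: str) -> float:
    """Statistical layer: placeholder for an NLP classifier that catches
    phrasings the corpus has never seen."""
    return 0.0  # stub; a real model would return a learned probability

def verdict(comment: str, threshold: float = 0.8) -> str:
    if fragment_hit(comment) or ml_score(comment) > threshold:
        return "reject"
    return "accept"

def quality_picking(new_fragment: str) -> None:
    """Daily feedback loop: moderators enrich the corpus with new expressions."""
    TOXIC_FRAGMENTS.add(new_fragment.lower())

quality_picking("OK boomer")           # an emerging expression, flagged by a human
print(verdict("ok boomer, whatever"))  # -> reject (corpus layer catches it)
```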

Result: 93% of toxic content is detected and rejected by our AI before any human intervention in the value chain.

Feel free to share this article!
"Automated online content moderation"