
Quality Assurance for Content Moderation

Online content moderation has become an increasingly important and debated topic, and new regulations, such as the EU’s Digital Services Act (DSA), are expected to reinforce this trend further. These regulations create legally binding content moderation obligations for online platforms, aimed at improving users’ online well-being and the functioning of the online ecosystem.

Challenges of Content Moderation

However, while millions of posts appear on social platforms every day, only a few hundred thousand people work in the content moderation industry. Despite platforms’ plans to recruit more moderators, each moderator’s workload remains very large: they often have to review thousands of posts per day, leaving a very narrow (and stressful) window to decide whether a post should be removed. This raises concerns about the accuracy, consistency, and fairness of a company’s content moderation, as well as its impact on free speech.

Beyond the very limited time available for each decision, moderation quality can also be affected by the AI tools deployed by platforms, the highly contextual nature of many online posts, and the large volume of content falling in the grey zone between harmful and safe. Moderators’ own biases further exacerbate the issue. For example, some moderators may be too lenient or too strict relative to company guidelines, and their judgement can shift with how long they have been working that day; some may be accurate on certain categories of content but lack the expertise or training for others; and others may be biased towards specific categories of content (e.g., culturally or politically).

Importance of Quality Assurance

Ensuring the quality of content moderation is a challenge with important implications for the proper functioning of social media and for freedom of expression online. Quality assurance (QA) for content moderation is essential to ensure that the balance between safety and freedom of expression is struck in a fair and effective manner. Poor content moderation also creates reputational, regulatory, and other business risks for online platforms, including the possible loss of users. QA becomes even more challenging, and more important, as companies outsource content moderation to external providers, whose quality also needs continuous monitoring. In this context, online platforms are looking for ways to monitor and improve the quality of their moderation processes. Quality can be measured using metrics such as accuracy, consistency, and fairness (e.g., similar cases receive similar decisions). Consistency is critical both over time for each moderator and across moderators; a simple way to quantify the latter is sketched below.
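
As an illustration of the consistency metric, the sketch below computes Cohen’s kappa, a standard chance-corrected agreement score, between two moderators who reviewed the same items. The moderator names and labels are hypothetical, and kappa is one common choice rather than the specific metric used in our study.

```python
# Hypothetical sketch: across-moderator consistency via Cohen's kappa.
# Labels are binary (1 = remove, 0 = keep); the data is illustrative.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two moderators on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each moderator labelled at their own base rates.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a | freq_b) / n**2
    return (observed - expected) / (1 - expected)  # 1.0 = perfect agreement

mod_1 = [1, 0, 1, 1, 0, 0, 1, 0]
mod_2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"kappa = {cohens_kappa(mod_1, mod_2):.2f}")  # 0.50 on this toy data
```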

The typical quality assurance process for online content moderation is based on regular (for example, weekly) controlled evaluations: after carefully labelling a number of content items (e.g., users’ posts), managers give them to multiple moderators, which makes it possible to compute a score for each moderator based on how they perform relative to each other as well as relative to the labels the company selected for these items.
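
To make this gold-set scoring concrete, here is a minimal sketch; the item IDs, moderator names, and decisions are invented for illustration, and real pipelines would also compare moderators against each other on the shared items.

```python
# Minimal sketch of a weekly gold-set evaluation: each moderator labels the
# same pre-labelled test items, and we score them against the company's
# desired ("gold") decisions. All names and data are illustrative.

gold = {"post_1": "remove", "post_2": "keep", "post_3": "remove", "post_4": "keep"}

moderator_decisions = {
    "alice": {"post_1": "remove", "post_2": "keep", "post_3": "keep", "post_4": "keep"},
    "bob":   {"post_1": "remove", "post_2": "remove", "post_3": "keep", "post_4": "keep"},
}

def gold_set_accuracy(decisions, gold):
    """Fraction of test items where the moderator matched the gold label."""
    scored = [item for item in decisions if item in gold]
    return sum(decisions[item] == gold[item] for item in scored) / len(scored)

for name, decisions in moderator_decisions.items():
    print(f"{name}: {gold_set_accuracy(decisions, gold):.0%}")  # alice: 75%, bob: 50%
```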

However, this common QA practice does not leverage all the available data, and because evaluations happen only once in a while, potential QA issues cannot be detected in real time – for example when a moderator drifts, even temporarily. An important challenge for quality and consistency evaluation is therefore the ability to use many, if not all, past decisions from all moderators, so as not to be limited by a small number of weekly test instances. Crucially, this can eliminate the need for additional evaluation processes entirely, while improving the reliability of the evaluation and ensuring continuous monitoring.
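
One way to make monitoring continuous, sketched below under loose assumptions, is an exponentially weighted moving accuracy over a moderator’s full decision stream, which surfaces even temporary drift without waiting for the next scheduled test. The correctness signal would in practice come from audits, appeals, or model estimates; the smoothing factor and alert threshold here are illustrative, not values from our study.

```python
# Hedged sketch: continuous drift monitoring with an exponentially weighted
# moving accuracy. Recent decisions weigh more, so a temporary drop in a
# moderator's accuracy shows up between scheduled evaluations.

def ewma_accuracy(correct_flags, alpha=0.05):
    """Stream of booleans (was the decision judged correct?) -> running estimate."""
    estimate = None
    for correct in correct_flags:
        x = 1.0 if correct else 0.0
        estimate = x if estimate is None else (1 - alpha) * estimate + alpha * x
        yield estimate

# Toy history: 40 correct decisions, then accuracy drops to ~50%.
history = [True] * 40 + [False, True] * 10
for i, acc in enumerate(ewma_accuracy(history)):
    if i > 10 and acc < 0.8:  # simple alert rule with a warm-up period
        print(f"decision {i}: estimated accuracy {acc:.2f} below threshold")
        break
```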

Managing/Improving QA

In our study, we discuss approaches for managing content moderation quality in real time, without the need to perform regular (and costly!) tests or to have multiple moderators handle the same cases. We develop a new method for comparing content moderators’ performance even when there is no overlap across moderators in the content they manage (i.e., each instance is handled by a single moderator), using the data of the moderators’ previous decisions. To this end, we also discuss how to adapt crowd labelling algorithms for performing QA in content moderation – an approach we believe is promising to explore further.
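
As background on the crowd labelling family being adapted, the sketch below implements the classic one-coin Dawid & Skene (1979) EM scheme, which jointly estimates item labels and per-moderator accuracies. Note that this vanilla version still requires some items to be labelled by several moderators – precisely the overlap the single-label setting lacks – so it is background material, not the method developed in the study.

```python
# Minimal one-coin Dawid-Skene EM for binary remove/keep labels.
import numpy as np

def dawid_skene(labels, n_iters=50):
    """labels: (n_items, n_moderators) array of 0/1, np.nan where a moderator
    did not review the item. Returns per-item posterior P(true label = 1)
    and each moderator's estimated accuracy."""
    observed = ~np.isnan(labels)
    n_items, n_mods = labels.shape
    # Initialise item posteriors with the mean observed label (majority-vote-like).
    p = np.nanmean(labels, axis=1)
    for _ in range(n_iters):
        # M-step: a moderator's accuracy is their expected agreement with the
        # current posterior over true labels, averaged over items they reviewed.
        agree = np.where(observed, labels * p[:, None] + (1 - labels) * (1 - p[:, None]), np.nan)
        acc = np.clip(np.nanmean(agree, axis=0), 1e-3, 1 - 1e-3)
        # E-step: recompute item posteriors from moderator accuracies
        # (flat prior; each observed label adds log-odds evidence).
        log_odds = np.zeros(n_items)
        for j in range(n_mods):
            seen = observed[:, j]
            w = np.log(acc[j] / (1 - acc[j]))
            log_odds[seen] += np.where(labels[seen, j] == 1, w, -w)
        p = 1.0 / (1.0 + np.exp(-log_odds))
    return p, acc

# Toy data: 6 items x 3 moderators, np.nan = not reviewed by that moderator.
L = np.array([[1, 1, np.nan],
              [0, np.nan, 0],
              [1, np.nan, 1],
              [np.nan, 0, 0],
              [1, 0, 1],
              [0, 0, np.nan]], dtype=float)
posteriors, accuracies = dawid_skene(L)
print("inferred moderator accuracies:", np.round(accuracies, 2))
```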

In one of our experiments, we study how accurately different QA methods, some of them based on crowd labelling algorithms (see the report for details on these methods), can recover the ranking of moderators by their accuracy/performance (the y-axis) as moderators label increasingly more content, e.g., deciding whether or not to remove posts (the x-axis).
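
A common way to score such ranking recovery, assumed here purely for illustration, is Kendall’s tau rank correlation between the true and estimated moderator orderings; the accuracy numbers below are made up.

```python
# Hypothetical sketch of a ranking-recovery metric: compare a QA method's
# estimated moderator ranking to the true accuracy-based ranking with
# Kendall's tau (1.0 = identical order). scipy is assumed available.
from scipy.stats import kendalltau

true_accuracies      = [0.95, 0.88, 0.80, 0.74, 0.60]  # ground truth per moderator
estimated_accuracies = [0.93, 0.82, 0.85, 0.70, 0.55]  # one QA method's estimates

tau, _ = kendalltau(true_accuracies, estimated_accuracies)
print(f"ranking agreement (Kendall's tau): {tau:.2f}")  # 0.80: one swapped pair
```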

To find out more about building an accurate and efficient content moderation system, contact us at info@tremau.com.

To download Improving Quality and Consistency in Single Label Content Moderation, please fill out the form below.

Tremau Policy Research Team

