
Content Moderation: Key Practices & Challenges

Content moderation has become increasingly important for online platforms to protect their users from abuse. The evolving regulatory landscape has also placed growing responsibilities on platforms for how user-generated content is moderated. Notably, the upcoming Digital Services Act (DSA), which affects almost every online service provider active in the EU, will bring unprecedented obligations to online services across a wide range of sectors, as well as considerable penalties for those who fail to meet the new requirements (up to 6% of annual global turnover).

Similar regulations are under development in multiple jurisdictions around the world (Australia, Canada, the UK, and South Korea, to name a few). Designing and implementing a content moderation strategy is therefore vital not only for contributing to online trust & safety and retaining satisfied users, but also for a company’s ability to do business in the markets where such regulations are emerging. A company’s success will largely be determined by how well it has ingrained the new content moderation requirements into its business model.

Content moderation practices 

To understand the challenges in achieving efficient and effective content moderation, Tremau interviewed content moderators and managers working in Trust & Safety departments across more than 30 companies, ranging from mega platforms to early-stage start-ups. Notwithstanding the different types of content moderators are exposed to across this diversity of platforms, we identified a set of common practices adopted by companies as well as clear areas for improvement. Three major areas emerged: detection of harmful or illegal content, moderation processes and controls, and crisis management.

I. Detection of harmful or illegal content

A major challenge in content moderation is the tremendous volume of content produced in real time. To accurately separate the very small proportion of potentially problematic content from the rest, companies often use a mixture of reactive moderation (responding to user reports) and proactive moderation (automated detection tools). Based on pre-determined rules or machine learning models, AI-powered automated detection typically flags content that is potentially illegal, such as terrorist content or counterfeit products, or content that clearly violates a company’s terms of service. Many companies also employ automated tools as a preliminary filter: when a detection falls below a confidence threshold, a human moderator is brought into the process to verify the result.
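
To illustrate this kind of confidence-based triage, the sketch below routes content to automatic action or human review depending on a model’s violation score. The thresholds, field names, and scores are illustrative assumptions, not a reference to any particular vendor’s tooling.

```python
# Minimal sketch of confidence-based triage between automated action and human review.
# Thresholds and scores are illustrative assumptions.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # high confidence the content violates policy
AUTO_ALLOW_THRESHOLD = 0.05    # high confidence the content is benign

@dataclass
class Detection:
    content_id: str
    violation_score: float  # model's estimated probability of a policy violation

def triage(detection: Detection) -> str:
    """Route a piece of content based on the model's confidence."""
    if detection.violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"      # clear-cut violation, actioned automatically
    if detection.violation_score <= AUTO_ALLOW_THRESHOLD:
        return "auto_allow"       # clearly benign, no review needed
    return "human_review"         # uncertain: queue for a human moderator

if __name__ == "__main__":
    for d in [Detection("post-1", 0.98), Detection("post-2", 0.40), Detection("post-3", 0.01)]:
        print(d.content_id, "->", triage(d))
```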

Despite the efficiency gains brought by automated detection, the overwhelming majority of our interviewees pointed out that there is still significant room for improvement. One frequently mentioned drawback is the difficulty of handling nuanced cases, which makes the human moderator’s job indispensable. Moreover, no AI tool can be a perfect substitute for human intervention, given the continuously evolving and highly diverse cultural contexts and requirements involved. Automated content moderation tools should therefore be built on the principle of working with human moderators, not replacing them.

II. Moderation process & controls

A common issue with content moderation systems is that companies typically have to keep closing the gap between their existing workflows and evolving regulatory obligations – often by frequently “patching” their moderation systems. A much-needed capability is therefore to build moderator-centric systems that track the company’s evolving regulatory obligations, allowing better coordination among teams and a more effective and efficient moderation strategy.

II.1 Multi-level moderation process

Violations of content policies are often categorized into pre-defined groups such as violence, foul language, and extremism. However, moderators often find themselves reviewing much more nuanced, complex, or context-sensitive cases. A key practice adopted by various companies is to establish multi-level moderation teams and processes. In this structure, frontline moderators make a “yes or no” decision on the most clear-cut cases and escalate more complicated cases to higher-level moderators who have more experience as well as access to more information. In rare, very difficult cases, senior Trust & Safety managers or other departments concerned discuss the case and make the final decision.
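
As a rough illustration of such an escalation flow, the sketch below routes a case to one of three hypothetical tiers; the tier names and criteria are assumptions based on the structure described above, not a prescribed model.

```python
# Illustrative sketch of a multi-level escalation flow. Tier names and
# escalation criteria are hypothetical.
from enum import Enum

class Tier(Enum):
    FRONTLINE = 1       # clear-cut yes/no decisions
    SENIOR = 2          # nuanced or context-sensitive cases
    POLICY_COUNCIL = 3  # rare, very difficult cases decided with senior T&S managers

def route_case(is_clear_cut: bool, needs_policy_input: bool) -> Tier:
    """Decide which moderation tier should handle a case."""
    if needs_policy_input:
        return Tier.POLICY_COUNCIL
    return Tier.FRONTLINE if is_clear_cut else Tier.SENIOR

print(route_case(is_clear_cut=True, needs_policy_input=False))   # Tier.FRONTLINE
print(route_case(is_clear_cut=False, needs_policy_input=False))  # Tier.SENIOR
```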

II.2 Moderation decision trees

Another way to support decision-making by frontline moderators is to use a decision tree during the moderation process, a practice widely adopted by customer support departments and other call centers. By decomposing a complex moderation question into a series of smaller and easier choices, a decision tree allows moderators to judge cases in a more structured and standardized manner, which can boost the efficiency and quality of the overall process.
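
A minimal, hypothetical example of such a tree is sketched below: each node asks the moderator a yes/no question until a leaf decision is reached. The questions and outcomes are illustrative only; real trees are far larger and maintained by policy teams.

```python
# Hypothetical moderation decision tree expressed as nested yes/no questions.
TREE = {
    "question": "Does the content depict or incite violence?",
    "yes": {
        "question": "Is a real person credibly threatened?",
        "yes": "remove_and_escalate",
        "no": "remove",
    },
    "no": {
        "question": "Does the content violate another listed policy?",
        "yes": "remove",
        "no": "allow",
    },
}

def walk(tree: dict, answers: list) -> str:
    """Follow the moderator's yes/no answers down the tree to a decision."""
    node = tree
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):   # reached a leaf decision
            return node
    raise ValueError("Not enough answers to reach a decision")

print(walk(TREE, ["yes", "no"]))   # -> "remove"
print(walk(TREE, ["no", "no"]))    # -> "allow"
```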

II.3 Quality Assurance

Accuracy and consistency of content moderation are also key concerns. Companies develop both ex-ante and ex-post control measures to improve moderation quality. Intensive training before starting as a moderator is common across companies, and regular training sessions also take place in many companies to keep moderators up to date with the latest regulatory or terms-of-service changes.

Given the constantly evolving regulations at both national and international levels, companies often draft extensive and detailed guidelines for moderators to consult before reaching a decision. Regularly reviewing the accuracy of past moderation decisions is also widely adopted. Often a random sample of the cases handled by a moderator in a given period is pulled from stored data and sent for examination, or some cases are given to multiple moderators to check their consistency; the resulting accuracy rate is often a key component of moderators’ KPIs.
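
The sketch below shows one way such an ex-post check could work: sample a moderator’s past decisions, compare them against a second review, and compute an accuracy rate. The data fields and sampling scheme are assumptions for illustration.

```python
# Hypothetical ex-post quality check: sample past decisions and compare them
# against a second "golden" review to compute an accuracy rate for KPIs.
import random

def sample_for_review(decisions: list[dict], k: int, seed: int = 0) -> list[dict]:
    """Pull a random sample of past decisions for re-examination."""
    rng = random.Random(seed)
    return rng.sample(decisions, min(k, len(decisions)))

def accuracy(sampled: list[dict]) -> float:
    """Share of sampled decisions confirmed by the second reviewer."""
    if not sampled:
        return 0.0
    agreed = sum(1 for d in sampled if d["moderator_decision"] == d["reviewer_decision"])
    return agreed / len(sampled)

decisions = [
    {"case_id": 1, "moderator_decision": "remove", "reviewer_decision": "remove"},
    {"case_id": 2, "moderator_decision": "allow", "reviewer_decision": "remove"},
    {"case_id": 3, "moderator_decision": "allow", "reviewer_decision": "allow"},
]
sample = sample_for_review(decisions, k=3)
print(f"Accuracy on sample: {accuracy(sample):.0%}")
```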

III. Crisis management 

Another key challenge is that content moderators’ tasks involve much more than simply judging whether a post should be removed. Crisis management is also part of their job when they encounter urgent cases, such as a livestream of self-harm or a terrorist attack like the livestreamed Buffalo shooting. Such cases demand immediate outreach to law enforcement or other appropriate local authorities and should be considered the digital “first aid” of our time.

Content moderators also need to provide some degree of customer support, as users may file complaints against certain moderation decisions; moderators must therefore be able to easily retrieve all relevant information about past cases or users in order to communicate with them effectively.

Toward a better design of content moderation 

Although content moderation is essential for almost every online platform that hosts regular interactions among users, most companies do not have enough resources to build, or, often more challenging, to maintain and keep up to date, efficient and effective internal moderation systems. On this note, Tremau’s conversations with content moderators enabled us to identify a number of recommendations for creating efficient and consistent content moderation processes.

For example, given the multi-faceted nature of content moderation, the most efficient way to enhance moderation processes is to integrate related functions and controls into a centralized, moderator-centric system. This spares moderators from constantly switching between tools, ensuring a smoother workflow, significant efficiency gains, and more accurate KPIs and quality control.

A centralized system also allows data to be reconciled in a unified platform, thereby giving moderators the complete context needed to make decisions and enabling automated transparency reporting. It also facilitates a risk-based approach via prioritization, which allows moderators to treat cases more effectively and enables the implementation of convenient contact channels with authorities and other stakeholders in case of emergencies. Such rapid reaction mechanisms are still not mature enough in many companies.
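
As an illustration of risk-based prioritization, the sketch below orders a moderation queue with a priority heap, weighting severity, reach, and legal deadlines (such as a one-hour removal window). The scoring weights and field names are assumptions, not an established formula.

```python
# Illustrative risk-based prioritization of a moderation queue using a min-heap.
# Scoring weights and fields are assumptions for illustration.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedCase:
    priority: float               # lower value = handled first
    seq: int                      # tie-breaker to keep insertion order stable
    case_id: str = field(compare=False)

def risk_score(severity: float, reach: int, legal_deadline_hours: float) -> float:
    """Combine severity, audience reach, and deadline urgency; negate so riskier cases sort first."""
    urgency = 1.0 / max(legal_deadline_hours, 0.1)   # e.g. a 1-hour TVEC removal window
    return -(severity * 10 + min(reach, 1_000_000) / 100_000 + urgency * 5)

counter = itertools.count()
queue: list[QueuedCase] = []
heapq.heappush(queue, QueuedCase(risk_score(0.9, 500_000, 1), next(counter), "tvec-livestream"))
heapq.heappush(queue, QueuedCase(risk_score(0.3, 200, 24), next(counter), "spam-report"))
print(heapq.heappop(queue).case_id)   # highest-risk case comes out first
```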

With access to more efficient processes as well as analytics, it then becomes possible to also better protect moderators’ wellness against traumatizing content.

What does this mean for your business?

To meet the challenges of protecting their users and complying with continuously evolving regulations, many online platforms will need to enhance their content moderation processes and controls. The measures discussed above make moderation processes more streamlined and efficient and, with appropriate structuring of data, can automate transparency reporting, which is increasingly demanded by voluntary codes and regulations.
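
As a simple illustration of how structured decision data enables automated reporting, the sketch below aggregates a hypothetical decision log into the kinds of counts a periodic transparency report needs; the field names and categories are assumptions, not the DSA’s reporting template.

```python
# Hypothetical sketch: aggregate structured moderation logs into report counts.
from collections import Counter

decision_log = [
    {"action": "remove", "category": "hate_speech", "detection": "automated"},
    {"action": "remove", "category": "terrorist_content", "detection": "user_report"},
    {"action": "allow", "category": "hate_speech", "detection": "user_report"},
]

def transparency_summary(log: list[dict]) -> dict:
    """Aggregate structured decisions into the counts a periodic report needs."""
    return {
        "total_decisions": len(log),
        "removals_by_category": Counter(d["category"] for d in log if d["action"] == "remove"),
        "detection_source": Counter(d["detection"] for d in log),
    }

print(transparency_summary(decision_log))
```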

With regulations such as the Terrorist Content Online Regulation, which sets a 1-hour limit for online services to remove Terrorist and Violent Extremist Content (TVEC) from their platforms, further investment is also needed in reliable mechanisms to prioritize content in moderation queues. “Compliance by Design” will thus become a necessary focus for building effective and future-proof content moderation systems. Successfully building these capabilities will soon become a key differentiator, and even a critical factor for survival.

How can Tremau help you?

Tremau’s solution provides a single trust & safety content moderation platform that prioritizes compliance as a service and integrates workflow automation and other AI tools. The platform ensures that providers of online services can respect all DSA requirements while improving their key trust & safety performance metrics, protecting their brands, increasing handling capacity, as well as reducing their administrative and reporting burden.

We would like to thank all the content moderators & managers who took the time to talk to us and contributed to our findings.

Tremau Policy Research Team
