
Content moderators: How to protect those who protect us?

Content moderators have become indispensable to online platforms’ everyday operations. However, major platforms that outsource their content moderation to contractors around the world face an increasingly pressing challenge: employee turnover at these sites is high, with most moderators lasting no more than two years on average.

Poor mental health is one of the major reasons moderators leave their positions: the job requires them to review large volumes of text, images, and videos containing highly disturbing content involving violence, extremism, drugs, child sexual abuse material (CSAM), self-harm, and more. Long-term exposure to such harmful content has triggered serious mental health issues among moderators, including depression and anxiety. As these conditions deteriorate, more severe problems such as PTSD and drug and alcohol addiction have also been noted to emerge.

Disturbing content does not only cost content moderators their mental health; it also has a financial impact on platforms. For example, the San Mateo Superior Court required Facebook to pay millions to content moderators who had developed PTSD on the job. Moreover, as non-disclosure agreements (NDAs) have become common practice, content moderators often find themselves unable to talk to trusted friends or family members about their work. This leaves moderators with little support, their precarious conditions misunderstood, and a growing unwillingness to voice their difficulties.

The intensity of the job is another major problem. While millions of posts appear on social platforms every day, only about 100,000 people work in the content moderation industry. Despite mega-platforms’ promises in recent years to recruit more moderators, the workload assigned to each moderator remains enormous: they have to review thousands of posts each day, which leaves only a very tight window to decide whether a post should be removed – creating new issues around the accuracy and consistency of a company’s content moderation and its impact on freedom of expression.

Indeed, ensuring the quality of content moderation is a challenge with important implications for the proper functioning of social media, freedom of expression, and fairness. Besides the very limited time frame for making moderation decisions, the quality of moderation can also be affected by individual biases, the AI tools deployed by platforms, and the highly contextual nature of many posts, not to mention the large amount of online content in the grey area between harmful and harmless. Beyond the content itself, the complex constellation of laws, policies, platforms’ terms and conditions, and internal instructions also makes it harder for moderators to respond quickly and accurately.

The tech industry has already acknowledged these challenges. Several solutions exist to address them, but they still have considerable limitations. AI has been widely implemented in content moderation, both to remove anything that is explicitly illegal and to detect suspicious content for human moderators to investigate. However, one salient drawback of AI is that it only works on “straightforward” cases covering broad categories such as “nudity” or “blood”: for anything more nuanced, current AI tools have proven prone to mistakes. For example, Thomas Jefferson’s words in the Declaration of Independence were once taken down automatically as “hate speech” because the phrase “Indian Savages” was flagged as inappropriate by the AI tool.
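To make this split concrete, here is a minimal sketch of how such a two-tier pipeline often works in principle: near-certain hits on broad categories are actioned automatically, while anything ambiguous is routed to a human. The classifier, labels, and thresholds below are illustrative assumptions, not any platform’s actual system.

```python
# Minimal sketch of a two-tier moderation pipeline: high-confidence hits on
# broad categories are auto-actioned, everything else goes to a human queue.
# The classifier, labels, and thresholds are illustrative placeholders.

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only on near-certain cases
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous is escalated to a person

def route_post(post_text: str, classifier) -> str:
    """Return 'remove', 'human_review', or 'allow' for a single post."""
    scores = classifier(post_text)          # e.g. {"nudity": 0.02, "violence": 0.91}
    top_label, top_score = max(scores.items(), key=lambda kv: kv[1])

    if top_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"                     # broad, unambiguous category hit
    if top_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"               # nuanced or borderline: a moderator decides
    return "allow"
```

The Jefferson example illustrates why the automatic branch is so risky for context-dependent language: a keyword-level score says nothing about historical or quotational context, which is exactly why borderline scores end up in the human queue.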

Quality of content moderation

Another problem with current AI tools in content moderation is that most AI only works on text- and image-based content, while tools tailored to audio-based moderation or to more interactive settings, such as live chat and live streams, are still in development. Furthermore, it has been established that AI tools often inherit the biases of their creators, and for tools powered by “black box” models, opaque decision-making processes may even create new problems for transparency auditing and quality assurance.

Providing mental health care for moderators is another important practice across companies. Wellness coaches and counselors are a common presence at content moderation sites, alongside occasional employee support programs, but many moderators consider them inadequate and call for professional intervention from clinical psychiatrists and psychologists. “Wellness breaks” included in daily working hours are another intended buffer against deteriorating mental health, but they are also criticized as too short compared to the hours of exposure to traumatizing content.

There is still a lot that needs to be done to protect those who protect us from the worst aspects of the Internet, and improvements should be pursued in both technological and organizational solutions. Both industry and academia have been working on improving the accuracy and efficiency of AI in the automated detection and removal of harmful content. Apart from training smarter AI for more efficient automation, AI can also help prevent or reduce exposure to disturbing content by interactively blurring it for human moderators.
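As a rough illustration of interactive blurring, the sketch below blurs any image flagged as potentially disturbing before a moderator sees it, and only returns the original on an explicit request. It assumes the Pillow imaging library is available; the `is_flagged_disturbing` detector is a hypothetical placeholder.

```python
# Minimal sketch of exposure reduction: images flagged as potentially disturbing
# are blurred before a moderator sees them, and only un-blurred on explicit request.
# Assumes Pillow is installed; `is_flagged_disturbing` is a hypothetical detector.

from PIL import Image, ImageFilter

BLUR_RADIUS = 25  # heavy blur so graphic detail is not visible by default

def prepare_for_review(path: str, is_flagged_disturbing) -> Image.Image:
    """Return the image a moderator first sees: blurred if flagged, original otherwise."""
    image = Image.open(path)
    if is_flagged_disturbing(path):
        return image.filter(ImageFilter.GaussianBlur(radius=BLUR_RADIUS))
    return image

def reveal_original(path: str) -> Image.Image:
    """Called only when the moderator deliberately chooses to view the full image."""
    return Image.open(path)
```

The design choice matters as much as the code: making the blurred version the default, and the full image an opt-in, shifts the burden of exposure from the moderator to the tool.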

Technology can also play a role through tools built specifically to assist content moderators in their work routines: promoting better task distribution across moderators, facilitating smoother internal communication on more complicated moderation decisions, and streamlining quality assurance of content moderation.
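One simple way to think about the task-distribution piece is load-aware assignment: each new case goes to whichever moderator currently has the lightest queue. The sketch below is a minimal illustration of that idea only; the data structures and names are assumptions, not a description of any existing product.

```python
# Minimal sketch of load-aware task distribution: each new case goes to the
# moderator with the lightest current queue. Data structures are illustrative.

import heapq

def distribute(cases, moderator_ids):
    """Assign cases to moderators, always picking the least-loaded one."""
    load = [(0, mod_id) for mod_id in moderator_ids]  # (queue length, moderator)
    heapq.heapify(load)

    assignments = {mod_id: [] for mod_id in moderator_ids}
    for case in cases:
        queue_len, mod_id = heapq.heappop(load)
        assignments[mod_id].append(case)
        heapq.heappush(load, (queue_len + 1, mod_id))
    return assignments

# Example: 7 cases spread over 3 moderators end up in queues of size 3, 2, 2.
print(distribute(range(7), ["mod_a", "mod_b", "mod_c"]))
```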

Companies should also take more responsibility for proactively protecting their workers’ mental wellness. For example, the tech industry can learn a lot from the experience of other high-risk occupations, such as the police, journalists, and child exploitation investigators. A first critical practice in these fields is to clearly inform employees, and those who want to join, of the inherent risks of reviewing harmful content. Companies should also invest in regular, long-term resilience training programs and in hosting high-quality clinical mental health care teams in-house.

More importantly, strict maximum exposure times already exist for those working in environments containing hazardous substances, and similar maximum exposure standards should apply to content moderation (see the sketch after this paragraph). Finally, across the nascent content moderation industry, building meaningful interpersonal networks among moderators can be valuable, fostering mutual support among “insiders” and ultimately putting the interests of content moderators, who are crucial stakeholders in regulating the digital space, on the future agenda.
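By analogy with hazardous-substance limits, an exposure cap could be enforced directly in the case-assignment logic: once a moderator’s cumulative time on graphic material passes a daily limit, they are only routed non-graphic work for the rest of the day. The two-hour cap and data shape below are illustrative assumptions, not an industry standard.

```python
# Minimal sketch of a daily exposure cap: once a moderator's cumulative time on
# graphic content passes the limit, no further graphic cases are assigned that day.
# The 2-hour cap is an illustrative value, not an established standard.

from datetime import timedelta

DAILY_GRAPHIC_EXPOSURE_CAP = timedelta(hours=2)

def can_assign_graphic_case(exposure_log: list[timedelta]) -> bool:
    """exposure_log holds the durations of today's graphic-content reviews."""
    total = sum(exposure_log, timedelta())
    return total < DAILY_GRAPHIC_EXPOSURE_CAP

# Example: after 1h45m of graphic reviews, one more graphic case may be assigned;
# after 2h10m, the moderator is routed only to non-graphic queues.
print(can_assign_graphic_case([timedelta(minutes=105)]))  # True
print(can_assign_graphic_case([timedelta(minutes=130)]))  # False
```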
