For the better part of a decade, as regulators took a backseat in platform regulation, voluntary initiatives to curb online harms flourished. Codes of conduct served as a way to encourage platform action, transparency, and accountability in areas like terrorist content, hate speech, and child safety. With regulatory action now in full swing, most prominently with Europe’s Digital Services Act (DSA), what will become of these initiatives? Read on to learn what the DSA says about codes of conduct and what it means for you.
Codes of conduct in a nutshell
In short, codes of conduct will remain voluntary and supplement DSA obligations. For example, under Article 16 DSA, platforms must allow users to report content and process user reports in a timely manner. However, the Article does not specify what “timely” means – and for good reason: not all illegal content comes with the same urgency. This is where benchmarks established in codes are particularly useful. For hate speech, the 2016 Code of Conduct on Countering Illegal Hate Speech Online sets the benchmark at 24 hours following a user report.
With this aim of complementing the DSA, the European Commission will sponsor the creation of EU-wide codes. These codes will include Key Performance Indicators (KPIs) that the parties will use as a benchmark when reporting on the outcomes of their implementation actions. The Commission will evaluate their compliance and assess the code itself (particularly its effectiveness), reviewing it if needed.
In what areas should we expect codes?
While the Commission can promote codes on any topic, the DSA highlights some priorities.
Firstly, tackling different types of illegal content, like child sexual abuse material, hate speech, and illegal products. Secondly, mitigating four types of systemic risks, i.e., risks to public interests that may arise from the use and features of very large online platforms (VLOPs). These are:
- Risks linked to the spread of illegal content and activities,
- Impacts on fundamental rights like freedom of expression and children’s rights,
- Effects on democratic processes, and
- Effects of VLOPs on public health and minors, including risks stemming from disinformation.
The Commission won’t start from scratch, as it has previously sponsored relevant codes, such as the Strengthened Code of Conduct for Countering Disinformation (2022), the updated Product Safety Pledge (2022), and the Hate Speech Code (2016, update expected this year).
As for new codes to look out for, the Commission recently created a working group on child safety focused on an upcoming Code on Age-Appropriate Design. In addition, the Commission will promote new codes on advertising and accessibility. While no EU-sponsored codes exist on these topics, the advertising code may draw from initiatives like the EASA’s Best Practices for Digital Marketing, while the accessibility one may build on the European Accessibility Act and the European Standard on Accessibility requirements for ICT products and services.
Implications for Very Large Online Platforms’ risk assessments
In principle, platforms of all sizes are free to participate. Yet the DSA strongly signals that VLOPs are expected to sign up, and VLOPs may find several incentives to do so. For one, VLOPs have a duty to assess and mitigate systemic risks on their services, and complying with a code of conduct may count as a risk mitigation measure under the DSA. Hence, while applying a code does not create a presumption of compliance, it does signal a VLOP’s goodwill to meet its risk mitigation obligations. On the flipside, if VLOPs refuse the Commission’s invitation to adhere, the Commission can consider that refusal when determining whether a DSA infringement has taken place. Refusing doesn’t necessarily imply non-compliance, but VLOPs who refuse must be ready to clearly explain how they meet their DSA obligations through other measures.
Furthermore, codes of conduct can help operationalise other DSA duties, which VLOPs must comply with by the end of August 2023. For example, under Article 39 DSA, VLOPs must create a repository of all ads shown on their platform. The Strengthened Disinformation Code contains measures and KPIs for building a repository of political advertisements that could be adapted to cover all ads, giving VLOPs some confidence that their ad repositories meet the DSA standard.
Why smaller platforms should care about codes of conduct
The lack of explicit benchmarks on how to meet the DSA’s compliance bar can be a challenge for smaller platforms. Codes of conduct serve as a resource-efficient way to achieve more certainty that their efforts are sufficient, especially as commitments under codes will likely be adapted for smaller platforms.
The measures in codes of conduct are a result of lengthy conversations among experts across different sectors. Hence, in addition to signalling goodwill to regulators and acting as a benchmark for compliance, codes often contain important information on how to tackle difficult content moderation challenges. After all, addressing online harms is not purely a compliance question. Research shows that hate and abuse online have serious negative effects on user experiences; codes of conduct can signal ways to improve platform responses and deploy best practices to retain users and grow the business. For example, all service providers, signatories or not, can look at the Strengthened Disinformation Code for guidance on demonetising disinformation and collaborating with fact-checkers. Similarly, the Hate Speech Code can help structure user reporting processes, while the Product Safety Pledge can help marketplaces streamline their notice & take-down processes.
How can Tremau help you?
Tremau offers the next generation Trust & Safety platform to centralise, streamline, and ensure compliance of all your T&S processes and commitments. Manage all content flags and notices, and prepare transparency reports, in one place. Most importantly, view all relevant data points in one place to gain the insights you will need to assess and mitigate your risks. In addition, if you need help understanding what steps to take to address your compliance risks and efficiency gaps, or in enhancing user retention by addressing online harms, Tremau’s advisory team can help. Check out our website or contact us for more.