An Overview of Transparency Reporting

The Internet has created enormous potential for free, democratic, and open exchanges. Yet some have abused this incredible tool to propagate harmful and illegal content online. This has resulted in growing interest in holding big tech platforms accountable for the way in which they moderate content on their services, which may include, but is not limited to, openly sharing information on data collection and access, as well as on removal requests. Civil society has been key in putting pressure on big technology companies to be more transparent, which has resulted in the popularity of “transparency reports”.

Transparency is critical to the proper functioning of democratic and safe exchanges, and a requirement for fair processes, so it is not surprising that it is also becoming a centerpiece of upcoming regulations for online platforms.

In 2010, Google became the first internet company to publish a formal transparency report, a practice that became more widely adopted by 2013 amidst growing concerns, including about the risks of government surveillance. Since then, a number of principles and frameworks – ranging from civil society initiatives to government policies – have been adopted around transparency reporting. Today, most frameworks target transparency about the moderation of Terrorist and Violent Extremist Content (TVEC) or Child Sexual Abuse Material (CSAM) online; however, upcoming regulations, such as the European Digital Services Act (DSA), expand transparency reporting to cover a company’s entire content moderation process.

Overview of voluntary transparency reporting frameworks 

The following overview lists the main bodies and principles that guide transparency reporting today, the scope of each, and what it entails.

Global Internet Forum to Counter Terrorism (GIFCT), 2017
Scope: Terrorist and extremist content on online platforms
• Requires its members to produce transparency reports and publishes its own transparency reports.
• Through a multi-stakeholder approach, it defines the elements of meaningful transparency and holds its member tech companies accountable.

Tech Against Terrorism, 2017
Scope: Guidelines on transparency reporting for online counterterrorism efforts, targeted at governments and small online service providers
• Asks governments to detail the processes and systems they use to discover, report, and store terrorist content and activity, and what redress mechanisms they provide.
• Provides transparency reporting guidelines for tech companies and advises on community guidelines enforcement and on methods to increase transparency around content moderation processes.

Santa Clara Principles, 2018
Scope: Online service providers that carry out content moderation
• Recommends steps that companies engaged in content moderation should take to provide meaningful due process to impacted stakeholders.
• Aims to better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users’ rights.
• Sets out foundational and operational principles as well as implementation mechanisms.

EU Code of Conduct on Countering Illegal Hate Speech Online, 2019
Scope: Voluntary industry signatories
• The Code of Conduct was created in 2016 in cooperation with tech companies to respond to xenophobia and racism online.
• Signatories commit to providing transparency reports and to ensuring that removal requests for illegal content are dealt with in less than 24 hours.

Center for Democracy and Technology, 2021
Scope: A framework for policymakers
• Focuses on users’ speech, access to information, and privacy from government surveillance.

OECD Voluntary Transparency Reporting Framework, 2022
Scope: Terrorist and violent extremist content (TVEC) on platforms
• Responds to the proliferation of differing frameworks, definitions, and standards across existing transparency reports.
• Sets a standard for baseline transparency on TVEC.
• Launched a portal for submitting and accessing standardized transparency reports from online services.

Tech Coalition, 2022
Scope: Voluntary industry signatories addressing child sexual exploitation and abuse online
• TRUST is a voluntary industry framework for transparency reporting that focuses on child sexual exploitation and abuse online.
• It takes into account the variety of digital services in this environment as well as differences in company size and maturity.

EU Code of Practice on Disinformation, 2022
Scope: Voluntary industry signatories
• Created in 2018 and updated in 2022, the Code addresses disinformation, notably in the context of Covid-19 and the war in Ukraine.
• Requests that platforms provide monthly reports on their efforts to promote authoritative information, improve users’ awareness, and limit disinformation and false advertising; it also establishes a Transparency Centre and a Task Force to oversee the implementation of the Code and keep it future-proof.

Regulations on transparency reporting

Aside from frameworks developed by civil society groups and voluntary codes created in cooperation with industry, many governments have passed, or are in the process of passing, laws around online hate speech that encourage transparency reporting. As mentioned above, the DSA requires all online intermediaries to provide transparency reports, the details of which vary according to the type of service. In the US, the proposed Platform Accountability and Transparency Act also aims to address this growing issue and introduce transparency legislation. Similarly, the proposed Digital Services Oversight and Safety Act of 2022 sets out transparency reporting obligations for content moderation.

Implications for online service providers

With the increasing demand for accountability and transparency from online platforms as well as governments, it is not surprising that numerous frameworks for transparency reporting have emerged. Despite the variations, at its core transparency reporting entails keeping a clear and consistent account of requests for the removal or restriction of content, and of the actions taken in response.
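For illustration only, the sketch below shows one way such an account could be kept in practice: a minimal, hypothetical record of a removal or restriction request. All class and field names (RemovalRequest, RequestSource, Outcome, and so on) are our own assumptions, not terms prescribed by any of the frameworks or regulations discussed above.

```python
# A minimal sketch (not a prescribed standard) of how an online service might
# record removal or restriction requests so they can later be aggregated into
# a transparency report. All names and fields here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class RequestSource(Enum):
    GOVERNMENT = "government"        # legal order or agency referral
    USER_REPORT = "user_report"      # notice from another user
    TRUSTED_FLAGGER = "trusted_flagger"
    AUTOMATED = "automated"          # detected by internal tooling


class Outcome(Enum):
    REMOVED = "removed"
    RESTRICTED = "restricted"        # e.g. geo-blocked or demoted
    NO_ACTION = "no_action"


@dataclass
class RemovalRequest:
    request_id: str
    content_id: str
    source: RequestSource
    alleged_violation: str           # e.g. "terrorist content", "hate speech"
    received_at: datetime
    decided_at: datetime | None = None
    outcome: Outcome | None = None
    appeal_filed: bool = False

    def turnaround_hours(self) -> float | None:
        """Hours between receipt and decision, if a decision was made."""
        if self.decided_at is None:
            return None
        return (self.decided_at - self.received_at).total_seconds() / 3600
```

Keeping each request as a single structured record, whatever the exact schema, is what makes it possible to report consistently across notice sources and decision types later on.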

Conclusion

To ensure alignment with industry best practices and compliance with regulatory requirements for transparency, companies will need new processes and tools that can handle and organize large volumes of content moderation activity and that remain aligned with rapidly evolving expectations and requirements. Concretely, this means being able to track all actions taken on user content, all notices coming from every potential source, and, further, all complaints about content moderation decisions taken by the online service. Streamlining and unifying these workflows will be crucial for all players to remain compliant and to retain the trust of their users.
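As a rough illustration of what streamlining these workflows can mean in practice, the sketch below builds on the hypothetical RemovalRequest record above and aggregates logged decisions into the kind of figures a transparency report typically contains (volumes by source and outcome, appeals, turnaround times). The metrics chosen are assumptions for illustration, not requirements taken from any specific framework or regulation.

```python
# A minimal sketch, building on the hypothetical RemovalRequest record above,
# of how logged moderation decisions could be rolled up into transparency
# report figures. The metric names are illustrative assumptions.
from collections import Counter
from statistics import median


def summarize_for_report(requests: list[RemovalRequest]) -> dict:
    decided = [r for r in requests if r.outcome is not None]
    turnarounds = [t for r in decided if (t := r.turnaround_hours()) is not None]
    return {
        "total_requests": len(requests),
        "requests_by_source": Counter(r.source.value for r in requests),
        "outcomes": Counter(r.outcome.value for r in decided),
        "appeals_filed": sum(r.appeal_filed for r in requests),
        "median_turnaround_hours": median(turnarounds) if turnarounds else None,
    }
```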

To find out more, contact us at info@tremau.com.

Tremau Policy Research Team
