An Overview of Transparency Reporting

The Internet has created enormous potential for free, democratic, and open exchanges. Yet some have abused this incredible tool to propagate harmful and illegal content online. This has led to growing interest in holding big tech platforms accountable for the way they moderate content on their services, including, but not limited to, openly sharing information on data collection and access as well as on removal requests. Civil society has been key in putting pressure on big technology companies to be more transparent, which has made "transparency reports" increasingly popular.

Transparency is critical to the well-functioning of democratic and safe exchanges, and a requirement for fair processes, so it is not surprising that it is also becoming a centerpiece of upcoming regulations for online platforms.

In 2010, Google became the first internet company to publish a formal transparency report, a practice that became more widely adopted by 2013 amid growing concerns, including about the risks of government surveillance. Since then, a number of principles and frameworks – ranging from civil society initiatives to government policies – have been adopted around transparency reporting. Today, most frameworks target transparency about the moderation of Terrorist and Violent Extremist Content (TVEC) or Child Sexual Abuse Material (CSAM) online; however, upcoming regulations, such as the European Digital Services Act (DSA), expand transparency reporting to cover a company's entire content moderation process.

Overview of voluntary transparency reporting frameworks 

The following overview summarizes the main bodies and principles that guide transparency reporting today.

Global Internet Forum to Counter Terrorism (GIFCT), 2017
Scope: Terrorist and extremist content on online platforms
• Requires its members to produce transparency reports and publishes its own transparency reports.
• Through a multi-stakeholder approach, it defines the elements of meaningful transparency and holds its member tech companies accountable.

Tech Against Terrorism, 2017
Scope: Terrorist and extremist content on online platforms; guidelines on transparency reporting for online counterterrorism efforts, targeted at governments and smaller online service providers
• Asks governments to detail the processes and systems they use to discover, report, and store terrorist content and activity, and what redress mechanisms they provide.
• Provides transparency reporting guidelines for tech companies and advises on community guideline enforcement and on methods to increase transparency around content moderation processes.

Santa Clara Principles, 2018
Scope: Online service providers that carry out content moderation
• Recommends steps that companies engaged in content moderation should take to provide meaningful due process to impacted stakeholders.
• Aims to better ensure that the enforcement of content guidelines is fair, unbiased, proportional, and respectful of users' rights.
• Sets out foundational and operational principles as well as implementation mechanisms.

EU Code of Conduct on Countering Illegal Hate Speech Online, 2019
Scope: Voluntary industry signatories
• Created in 2016 in cooperation with tech companies to respond to xenophobia and racism online.
• Signatories commit to providing transparency reports and to ensuring that removal requests for illegal content are dealt with in less than 24 hours.

Center for Democracy & Technology, 2021
Scope: A framework for policymakers
• Focuses on users' speech, access to information, and privacy from government surveillance.

OECD Voluntary Transparency Reporting Framework, 2022
Scope: Terrorist and violent extremist content (TVEC) on online platforms
• Responds to the proliferation of differing frameworks, definitions, and priorities across existing transparency reports.
• Sets a standard for baseline transparency on TVEC.
• Launched a portal for submitting and accessing standardized transparency reports from online services.

Tech Coalition (TRUST Framework), 2022
Scope: Voluntary industry signatories addressing child sexual abuse material (CSAM)
• TRUST is a voluntary industry framework for transparency reporting that focuses on child sexual exploitation and abuse online.
• It takes into account the variety of digital services in this environment as well as differences in company size and maturity.

EU Code of Practice on Disinformation, 2022
Scope: Voluntary industry signatories
• Created in 2018 and updated in 2022, the Code addresses disinformation, notably in the context of Covid-19 and the war in Ukraine.
• Requests platforms to provide monthly reports on their efforts to promote authoritative information, improve users' awareness, and limit disinformation and false advertising; it also sets up a Transparency Centre and a Task Force to oversee the implementation of the Code and keep it future-proof.

Regulations on transparency reporting

Aside from frameworks developed by civil society groups and voluntary codes created in cooperation with industry, many governments have passed, or are in the process of passing, laws on online hate speech that encourage transparency reporting. As mentioned above, the DSA requires all online intermediaries to provide transparency reports, the details of which vary according to the type of service. The Platform Transparency and Accountability Act in the US also aims to address this growing issue and implement transparency legislation. Similarly, the proposed US Digital Services Oversight and Safety Act of 2022 sets out transparency reporting obligations for content moderation.

Implications for online service providers

With the increasing demand for accountability and transparency from online platforms as well as governments, it is not surprising that numerous frameworks for transparency reporting have emerged. Despite the variations, transparency reporting at its core entails keeping a clear and consistent account of requests to remove or restrict content.

Conclusion

To ensure alignment with industry best practices and compliance with regulatory requirements for transparency, companies will need new processes and tools that can handle and organize large volumes of content moderation activities and that remain continuously aligned with rapidly evolving expectations and requirements. Concretely, this means having the ability to track all actions taken on user content, all notices coming from every potential source, and, further, all complaints about any content moderation decisions taken by the online service. Streamlining and unifying these workflows will be crucial for all players to remain compliant and ensure the trust of their users.
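As an illustration of what such tracking could look like in practice, the sketch below shows one possible way to structure internal records linking notices, moderation actions, and complaints. This is a minimal, hypothetical example: the class and field names are assumptions made for illustration only, not a schema prescribed by the DSA or by any of the frameworks discussed above.

```python
# Illustrative sketch only: a minimal, hypothetical data model for keeping a
# consistent internal record of notices, moderation actions, and complaints.
# All names and fields are assumptions, not a mandated reporting schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Notice:
    """A report received about a piece of content, from any source."""
    notice_id: str
    source: str            # e.g. "user_flag", "trusted_flagger", "authority"
    content_id: str
    reason: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ModerationAction:
    """An action taken on user content, with the basis for the decision."""
    action_id: str
    content_id: str
    decision: str           # e.g. "removed", "restricted", "no_action"
    policy_or_legal_basis: str
    automated: bool         # whether the decision was fully automated
    related_notice_id: Optional[str] = None
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Complaint:
    """An appeal filed against a moderation decision."""
    complaint_id: str
    action_id: str
    outcome: Optional[str] = None   # filled in once the appeal is resolved
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Keeping these three record types linked (notice to action to complaint) is what later allows moderation activity to be aggregated consistently into the kind of standardized transparency reports the frameworks above call for.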

To find out more, contact us at info@tremau.com.

Tremau Policy Research Team
