
Gonzalez, Taamneh, and the Future of Content Moderation

The US “may be about to change the law on this massively complex question about human rights on the Internet through the backdoor”, tweeted Daphne Keller, Platform Regulation Director at the Stanford Cyber Policy Center, in a thread detailing the Gonzalez and Taamneh cases appearing before the Supreme Court this week. While those cases raise questions about platform liability for content left up on a platform, recently passed laws in Texas and Florida – which will also be tested at the Supreme Court – limit the content platforms can take down.

These four cases sit at the heart of the catch-22 online platforms find themselves in: on the one hand, there is pressure to remove content to protect user safety; on the other, to leave content up to protect freedom of speech. At the core of this debate is whether online platforms can be held liable for the speech they host, and its outcome has the potential to transform the future of the tech industry.

Platform liability in the US – Section 230 in a nutshell 

Section 230 of the Communications Decency Act (1996) – 26 words that set the stage for the internet as we know it today – shields online platforms from liability for content posted by their users. More than two decades after its enactment, it remains hotly debated: some argue it provides too much protection for online platforms, while others maintain that it is crucial to preserving freedom and diversity on the internet. Despite many attempts, Congress has had little success introducing substantive changes to the law. The Supreme Court is therefore in particularly challenging territory – it has to rule on an issue on which lawmakers have not been able to agree for decades.

What are the Supreme Court hearings about?

The Gonzalez v. Google LLC case involves a dispute between the family of a victim of the 2015 Paris terror attacks and Google over YouTube’s recommendations of terrorist content. Similarly, Twitter, Inc. v. Taamneh follows the 2017 terrorist attack on an Istanbul nightclub, after which relatives of a victim accused Twitter, Facebook, and Google of aiding and abetting the attack by enabling the dissemination of terrorist content. As both cases consider whether a platform can be held responsible for content it hosts, they open Section 230 to potential modification.

Defending the current liability protection, Google has argued that Section 230 promotes free expression online and empowers websites to create their own moderation rules to make the internet a safer place. While the law has so far protected platforms for content their users post, the primary question in Gonzalez is whether Section 230 also protects platforms’ recommendation algorithms – a feature that is crucial to many platforms’ architectures today, and for some, like TikTok, the recommendation is the service.

In the Taamneh hearing, on the other hand, the Court will set Section 230 aside to consider whether a platform can be held liable for aiding and abetting terrorism even if its service was not directly used in the attack. In an earlier ruling, the 9th Circuit held that it can; however, because that court did not reach Section 230, the platforms remained protected by it. If the Supreme Court weakens the general liability protection in Gonzalez, the combination could create a significant problem for platforms, which could then be held liable for aiding and abetting terrorism.

How are the Texas and Florida laws impacting online platforms?

Both states have recently passed laws that make it illegal for online platforms to moderate content or restrict users in many cases. Petitions against both laws are pending before the Supreme Court, which has decided not to take them up this year. These laws add to the tensions around regulation of the online space and the potential rulings in Gonzalez and Taamneh. While the latter two push platforms to do more to moderate certain content on their services – to the extent of holding them liable for promoting and/or hosting it – the state laws hold that content should not be moderated, on free speech grounds.

Notably, the Texas law, House Bill 20, forbids large social media platforms from moderating based on the “viewpoint of the speaker” – meaning ‘lawful but awful’ content would be required to stay up as long as it is not illegal. In a panel organised by the Stanford Cyber Policy Center on February 17th, speakers highlighted that this could pose specific risks to children. For example, content promoting eating disorders and self-harm would have to stay up if content discouraging the same was also up, since both could be construed as speaker viewpoints.

To remove or not to remove?

These contradictory laws and decisions promise to transform content moderation on online platforms as it exists today. At the core of the conflict, the state laws mandate that platforms not remove certain content and users, while the Supreme Court cases could change Section 230 and make platforms liable for the content they recommend or fail to remove. The upcoming hearings could resolve this conflict – or open a Pandora’s box of tech regulation problems. Ultimately, the decisions in the coming days will shape not just the online ecosystem, but also the principles that govern it.

How can Tremau help you?

Whatever the outcome of the hearings may be, one thing is certain – they have the potential to impact all online platforms and their content moderation processes.
