
Tremau and Pangram Labs partner to take on AI-generated content

As we stand on the cusp of the biggest election year in history, the intersection of technology and democracy takes centre stage once again. More than 50 countries around the world, with a combined population of around 4.2 billion, will hold national and regional elections in 2024, including seven of the ten most populous nations in the world. With the rapid advance of generative AI, which allows anyone to create realistic images, video, audio, or text from user-provided prompts, electoral processes face new challenges.

Generative AI has garnered attention for its potential to influence public opinion and thereby shape debates and decisions. From deepfake videos to "smart" targeted AI-generated campaigns at scale, the deployment of generative AI techniques can pose significant threats to the integrity of democratic processes. These risks take many shapes and forms, including last-minute attempts to deter people from voting, fabricated events featuring a generated depiction of a candidate that is difficult to debunk, or targeted false stories spread at scale.

What does this mean for online platforms? Simply put: avoid uncomfortable questions about accountability for the spread of questionable AI-generated content on your platform, particularly in the face of potential scandals, and strengthen your trust & safety operations to handle new threats. Pangram Labs is developing highly accurate AI-generated content detection methods to automate the identification and moderation of AI content. Combined with the human-in-the-loop technologies enabled by Tremau, this creates an effective process to control and moderate AI content before it threatens the integrity of a platform or an election.
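To make the detection-plus-human-in-the-loop process concrete, here is a minimal sketch of how such a pipeline might route content. The detector function, thresholds, and action names are illustrative assumptions for this example, not the actual APIs of Pangram Labs or Tremau.

```python
# Hypothetical sketch of an AI-content moderation pipeline:
# an automated detector scores content, and high-confidence cases
# are escalated to a human moderator. All names and thresholds
# here are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str      # "allow", "label", or "human_review"
    ai_score: float  # detector's estimated probability the content is AI-generated

def moderate(text: str,
             detect: Callable[[str], float],
             label_threshold: float = 0.7,
             review_threshold: float = 0.9) -> Decision:
    """Route content based on an AI-generated-content score."""
    score = detect(text)
    if score >= review_threshold:
        # High-confidence AI content: escalate to a human moderator.
        return Decision("human_review", score)
    if score >= label_threshold:
        # Likely AI-generated: publish with an AI disclosure label.
        return Decision("label", score)
    # Low score: treat as authentic user-generated content.
    return Decision("allow", score)

# Stub detector standing in for a real classification model.
def stub_detector(text: str) -> float:
    return 0.95 if "as an AI language model" in text else 0.1

print(moderate("as an AI language model, I think...", stub_detector).action)
# → human_review
```

In practice the thresholds would be tuned against the detector's precision/recall trade-off, so that human moderators only review the cases automation cannot settle.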

When the European Commission first released its proposal for an AI Act in April 2021, generative AI was far from an urgent concern for regulators. That changed with recent advances in AI such as GPT-4. As a result, the European Parliament substantially amended the European Commission's initial proposal, notably introducing specific rules that apply to generative AI systems (the Parliament Proposal). Generative AI falls under the category of "General Purpose AI Systems", which must comply with transparency requirements, including disclosing that content was generated by AI. Meeting these regulatory obligations can be made simple with a compliant-by-design content moderation platform such as Tremau's.

Many businesses have recognized that preparing for the challenges and risks of AI is not only a regulatory issue. The fast development of AI poses challenges for brand safety and platform health. It is therefore essential to understand how bad actors use AI-generated content for spam and disinformation, and to use tools that keep it off your platform.

For all these reasons, and as part of our mission to build a safe and beneficial digital world for all, Tremau and Pangram Labs are partnering to provide AI-generated content detection and AI disclosures for user-generated content. This way, platforms can stay compliant and keep their user-generated content authentic.

How can we help you?

At Tremau, we help you navigate the new world of powerful AI and evolving regulations.

Pangram Labs is building tools to automate the detection of AI-generated content, starting with text and speech. Learn more at pangramlabs.com.

To find out more, contact us at info@tremau.com or info@pangramlabs.com.


Stay ahead of the curve – sign up to receive the latest policy and tech advice impacting your business.
