As we stand on the cusp of the biggest election year in history, the intersection of technology and democracy takes centre stage once again. More than 50 countries, with a combined population of around 4.2 billion and including seven of the ten most populous nations in the world, will hold national and regional elections in 2024. With the rapid advance of generative AI, which allows anyone to create realistic images, video, audio, or text from simple prompts, electoral processes face new challenges.
Generative AI has garnered attention for its potential to influence public opinion and thereby shape debates and decisions. From deepfake videos to “smart” targeted AI-generated campaigns at scale, the deployment of generative AI techniques can pose significant threats to the integrity of democratic processes. These risks take many shapes and forms: last-minute attempts to deter people from voting, fabricated events featuring generated depictions of a candidate that are difficult to debunk, or targeted false stories spread at scale.
What does this mean for online platforms? Simply: avoid uncomfortable questions about accountability for the spread of questionable AI-generated content on your platform, particularly in the face of potential scandals, and improve your trust & safety operations to handle new threats. Pangram Labs is developing highly accurate AI-generated content detection methods to automate the identification and moderation of AI content. Combined with the human-in-the-loop technologies enabled by Tremau, this is an effective process for controlling and moderating AI content before it threatens the integrity of a platform or an election.
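The workflow described above, automated detection followed by human-in-the-loop review, can be sketched as a simple confidence-based triage. Everything here (function names, thresholds, and the stub detector) is a hypothetical illustration, not either company's actual API.

```python
# Hypothetical sketch of a detect-then-review pipeline. The detector
# interface, thresholds, and routing labels are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "auto_flag", "human_review", or "allow"
    score: float  # detector's estimated probability the text is AI-generated

def triage(text: str, detect_ai_probability: Callable[[str], float]) -> Decision:
    """Route content by AI-detection confidence.

    High-confidence AI content is flagged automatically; uncertain cases
    are queued for a human moderator; everything else is allowed through.
    """
    score = detect_ai_probability(text)
    if score >= 0.95:
        return Decision("auto_flag", score)
    if score >= 0.60:
        return Decision("human_review", score)
    return Decision("allow", score)

# Example with a trivial stub standing in for a real detection model:
stub = lambda t: 0.97 if "as an ai language model" in t.lower() else 0.10
print(triage("As an AI language model, I cannot...", stub).action)  # auto_flag
print(triage("I watched the debate last night.", stub).action)      # allow
```

The middle band is the key design choice: routing uncertain scores to human moderators keeps automation from acting on borderline detections.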
When the European Commission first released its proposal for an AI Act in April 2021, generative AI was far from an urgent concern for regulators. That changed with recent advances such as GPT-4. In response, the European Parliament substantially amended the Commission’s initial proposal, notably introducing specific rules for generative AI systems (the Parliament Proposal). Generative AI falls under the category of “General Purpose AI Systems,” which must comply with transparency requirements, including disclosing that content was generated by AI. Meeting these regulatory obligations can be made simple with a compliant-by-design content moderation platform such as Tremau’s.
Many businesses have recognized that preparing for the challenges and risks of AI is not only a regulatory issue. The fast development of AI poses challenges for brand safety and platform health. It is therefore essential to understand how bad actors use AI-generated content for spam and disinformation, and to use tools that keep it off your platform.
For all these reasons, and as part of our mission to build a safe and beneficial digital world for all, Tremau and Pangram Labs are partnering to provide AI-generated content detection and AI disclosures for user-generated content. This way, platforms can stay compliant and keep their user-generated content authentic.
How can we help you?
At Tremau we work to help you best navigate the new world of powerful AI and new regulations.
Pangram Labs is building tools to automate the detection of AI-generated content, starting with text and speech. Learn more at pangramlabs.com.
To find out more, contact us at info@tremau.com and info@pangramlabs.com.