Online content moderation for children continues to make headlines. In March, UK Prime Minister Keir Starmer weighed in after the Netflix series ‘Adolescence’ topped the UK charts, saying: “This violence carried out by young men, influenced by what they see online, is a real problem. It’s abhorrent, and we have to tackle it.” Online platforms are increasingly trying to address the issue: this week, Meta announced the expansion of its Instagram ‘Teen Accounts’ to more of its services, alongside new limits on Instagram for teenagers under 16.
As public pressure mounts, platforms are designing measures and regulators are stepping in with new requirements. Cutting through the noise, one priority continues to rise to the top for tech companies, policymakers, and the public: protecting children from online harm.
Children today are spending more time online than ever before—raising urgent concerns about their safety. According to the American Academy of Child & Adolescent Psychiatry, in the U.S., kids aged 8–12 spend 4–6 hours per day on screens; teens average 9 hours.
In the UK, Ofcom reports that 99% of children aged 12–17 are online, and a UK Parliamentary report found that nearly a quarter use their smartphones in ways consistent with behavioral addiction.
In the EU, the European Commission highlights that, according to 2022 PISA ICT survey data, 96% of 15-year-olds across 22 EU member states use social media on a daily basis, with 37% spending more than 3 hours a day on these platforms.
But screen time is only part of the problem. Children also face serious risks online, as highlighted by UNICEF, such as exposure to harmful or violent content, the threat of online exploitation or abuse, and mental health risks from online experiences like cyberbullying.
In the physical world, these risks can be more easily mitigated by clear age-based protections, but online, those boundaries can be blurred—or in certain cases nonexistent.
For years, children have accessed digital platforms with guardrails often limited to self-reported ages and weak verification methods. The result: young users engaging with services that cannot necessarily guarantee their safety.
That seems to be changing—fast.
A New Era of Regulation for Child Protection
Regulatory bodies are increasingly addressing these risks by passing stricter online safety laws, especially for children, as can be seen across the world:
- 🇬🇧 United Kingdom: The UK’s Online Safety Act (OSA) requires platforms to assess and mitigate content risks to children.
- 🇦🇺 Australia: The Online Safety Amendment (Social Media Minimum Age) Bill 2024 goes further—restricting users under 16 from accessing some social media platforms.
- 🇪🇺 European Union: The EU’s Digital Services Act (DSA) includes several articles on child protection, and new discussions on the proposed Digital Fairness Act focus on online addictiveness, especially for children.
- 🇺🇸 United States of America: The USA has enacted multiple state-level bills (e.g., in Tennessee, California, Alabama, Idaho) focused on the protection of children and minors, and federal-level legislation was under discussion in 2024.
This shift represents structural overhauls that could force companies to rethink how they design, moderate, and manage access to their platforms.
🇬🇧 United Kingdom - Online Safety Act
The UK’s Online Safety Act (OSA) is entering a critical phase on this front. With illegal content risk assessments now submitted, platforms must turn their focus to protecting child users from online harms.
By April 16, 2025, platforms must complete a children’s access assessment to determine whether their service is likely to be accessed by children. If so, and if they lack highly effective age assurance, they will need to complete a children’s risk assessment by July 2025 and implement targeted safety measures.
Ofcom’s draft guidance outlines various types of harmful content across three tiers: primary priority content (e.g., pornography), priority content (e.g., bullying content), and non-designated content (emerging risks). Some of these overlap with the OSA’s illegal harms; others represent new regulatory territory.
Ofcom has also published draft Codes of Practice that include age assurance requirements and user support obligations. Platforms will need to take a close look at these codes, as they could serve as the baseline for compliance.
🇦🇺 Australia - Online Safety Amendment (Social Media Minimum Age) Bill 2024
In November 2024, Australia passed what could be seen as a world-first in child safety laws: a mandatory minimum age of 16 years for accounts on certain social media platforms.
The Online Safety Amendment (Social Media Minimum Age) Bill defines “age-restricted” platforms as those enabling social interaction and user-generated content.
Messaging apps, gaming platforms, and educational tools are exempt for now, with the Bill focusing on social platforms. Although the law doesn’t name companies, Australia’s Communications Minister has already pointed to major platforms that will fall under the ban.
Companies will need to take proper measures to ensure fundamental protections are in place and demonstrate compliance to the eSafety Commissioner by the end of 2025.
🇪🇺 European Union - Digital Services Act
For minors, the EU’s Digital Services Act (DSA) requires platforms to assess and mitigate risks, ensure a high level of safety, privacy, and security, refrain from targeted advertising aimed at minors, and limit data processing for users under the age of 16.
The European Commission has also published a toolbox with technical specifications for an age verification application to enforce these minors’ protection requirements and bridge the gap until the EU Digital Identity (EUDI) Wallets become available by the end of 2026.
Recent discussions in the EU also include the Digital Fairness Act, a legislative proposal that aims to combat techniques such as dark patterns, social media influencer marketing, addictive design of digital products, and online profiling, with a particular focus on the protection of minors.
🇺🇸 United States of America - State-level legislation
In the USA, despite conflicting opinions on content moderation more broadly, multiple bills protecting children and minors have been enacted at the state level.
In Tennessee, the Protecting Children from Social Media Act requires social media platforms to verify the age of users and obtain parental consent for minors before they create accounts. In Alabama, House Bill 164 sets up age verification requirements and data privacy protections for minors, alongside other requirements. And in California, platforms are prohibited through the Privacy Rights for California Minors in the Digital World Act from advertising certain content to minors, and are required to protect their personal information.
At the federal level, the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) were under active discussion in 2024. KOSA would require platforms to implement safeguards for minors and provide parental control tools, while COPPA 2.0 would extend the original COPPA’s privacy protections to children up to 16 years old.
Implications for Platforms
Compliance Measures To Be Taken
Online services will have to take multiple steps to comply with the new online safety laws aimed at protecting children. These include but are not limited to:
- Conducting risk assessments at least annually, depending on the regulation
- Implementing robust age assurance systems
- Developing or integrating effective age verification technology to limit underage users (see the sketch after this list)
- Strengthening existing child safety measures and policies
- Modifying content moderation, algorithms, and settings to comply with safety-by-design principles, where relevant
- Establishing data privacy & deletion processes for minors’ personal information
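As a rough illustration of the age assurance and verification steps above, here is a minimal sketch of a sign-up gate that maps an already-verified birthdate and a jurisdiction to an account decision. The thresholds, jurisdiction codes, and the evaluate_signup helper are hypothetical assumptions for illustration only; they do not reflect any regulator’s technical requirements or constitute legal guidance.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical per-jurisdiction thresholds, loosely inspired by the rules
# discussed above (illustrative assumptions only, not legal guidance).
MINIMUM_AGE = {"AU": 16}              # e.g. Australia's social media minimum age
PARENTAL_CONSENT_AGE = {"US-TN": 18}  # e.g. parental consent for minors in Tennessee
TEEN_DEFAULTS_AGE = 18                # apply restricted "teen" defaults below this age

@dataclass
class SignupDecision:
    allowed: bool
    requires_parental_consent: bool = False
    apply_teen_defaults: bool = False

def age_on(today: date, birthdate: date) -> int:
    """Compute a user's age in whole years."""
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def evaluate_signup(birthdate: date, jurisdiction: str, today: date | None = None) -> SignupDecision:
    """Decide whether an account can be created, and under what safeguards.

    Assumes `birthdate` has already been established through an age assurance
    method (e.g. ID check or facial age estimation), not self-report alone.
    """
    age = age_on(today or date.today(), birthdate)

    # Refuse account creation below the jurisdiction's minimum age
    # (13 used here as an assumed common default floor).
    if age < MINIMUM_AGE.get(jurisdiction, 13):
        return SignupDecision(allowed=False)

    consent_age = PARENTAL_CONSENT_AGE.get(jurisdiction, 0)
    return SignupDecision(
        allowed=True,
        requires_parental_consent=age < consent_age,
        apply_teen_defaults=age < TEEN_DEFAULTS_AGE,
    )

# Example: a 15-year-old would be refused an account in Australia,
# but could sign up in Tennessee only with parental consent.
print(evaluate_signup(date(2010, 5, 1), "AU", today=date(2025, 7, 1)))
print(evaluate_signup(date(2010, 5, 1), "US-TN", today=date(2025, 7, 1)))
```

In practice, the age signal would come from a dedicated age assurance provider, and both the signal and any supporting documents would need to be handled in line with the data privacy and deletion obligations listed above.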
Cost of Compliance – Challenges Ahead Under New Age Verification Rules
Complying with these new standards comes with challenges. Age verification technologies can be expensive, raising costs for smaller companies in particular, as well as data protection and privacy concerns. Service providers must abide by data protection laws while ensuring that their age assurance mechanisms are robust and capable of determining that users on the platform are of an appropriate age.
Beyond the cost of compliance, failure to comply carries legal and reputational risks and could lead to severe regulatory penalties. Platforms operating in the UK risk fines imposed by Ofcom of up to £18 million or 10% of a company’s qualifying worldwide revenue, whichever is greater, alongside potential criminal action against senior managers. Similarly, companies operating in Australia could face fines of up to AUD 50 million (around USD 33 million).
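To make the “whichever is greater” formula concrete, the short sketch below computes the UK maximum penalty cap for a given qualifying worldwide revenue. The £18 million floor and the 10% rate are the statutory caps cited above; the revenue figures in the example are purely illustrative.

```python
def uk_osa_max_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Maximum OSA fine: the greater of £18m or 10% of qualifying worldwide revenue."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

# Illustrative revenues only: a £100m-revenue platform is capped at £18m,
# while a £1bn-revenue platform is capped at £100m.
print(uk_osa_max_penalty(100_000_000))    # 18000000.0
print(uk_osa_max_penalty(1_000_000_000))  # 100000000.0
```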
These are just a few examples of the challenges that businesses will have to navigate, pushing platforms to rethink their business and product strategies, the cost of compliance, and their geographic expansion plans.
What Comes Next
This is just the beginning. As more countries move to regulate online child safety, platforms will be pushed to build stronger protections by design. Despite ongoing international debate over content moderation regulation among public- and private-sector stakeholders, child protection is one area where regulatory consensus on a way forward is increasingly emerging.
At Tremau, we continue to monitor the global regulatory landscape closely and help platforms stay ahead, with compliance and safety, including the protection of children and minors, at the center of what we do.
For more information on child safety regulatory frameworks, compliance and risk assessments – get in touch.