The UK’s communications regulator, Ofcom, has issued new guidance for website operators, mandating the use of “highly effective age assurance” techniques to prevent under-18s from accessing online pornography.
By July 2025, websites hosting adult content will be required to implement robust age-checking processes, which could include open banking verification, credit card checks, mobile network operator age checks, or facial age estimation. While Ofcom did not specify exact technical requirements, it emphasized that the methods must be “highly effective” and tailored to meet the service provider’s duties under the Online Safety Act.
Simply adopting one of the suggested methods won’t guarantee compliance. Providers must prove their overall process is effective at keeping minors out. Ofcom also expects all user-to-user and search services covered by the Act to conduct a children’s access assessment by April 16, determining whether their platforms are likely to be accessed by minors.
Big Changes Ahead for Adult Content Sites and Social Media
Dame Melanie Dawes, Ofcom’s chief executive, noted that the lack of safeguards has long left children exposed to pornography and other harmful material online. She remarked, “Today, this starts to change.” Dawes explained that services hosting explicit content must begin implementing age checks immediately, while social media platforms permitting adult content have until July 2025 to comply.
Lina Ghazal, Head of Regulatory and Public Affairs at Verifymy, called Ofcom’s announcement “a pivotal moment in the fight to make the internet a safer place, particularly for children.” She added that the guidance gives content providers the clarity needed to implement robust age-verification measures.
However, history suggests the rollout may not be smooth. In Florida, Pornhub responded to similar legislation by pulling out of the state entirely, triggering a surge in VPN use as residents sought to bypass restrictions. Critics worry Ofcom’s plans could face similar resistance or lead to unintended consequences like digital exclusion or privacy risks.
Privacy Concerns and Technological Challenges
Silkie Carlo, director of privacy group Big Brother Watch, criticized the measures, arguing that many age-verification technologies are intrusive and introduce risks, including privacy breaches, errors, and censorship. “We must avoid anything like a digital ID system for the internet that would both eradicate privacy online and fail to keep children safe,” Carlo said.
She also highlighted that many age-assurance methods, such as biometric scans or ID checks, can be circumvented and shouldn’t be treated as a cure-all. Instead, she advocated for alternatives like parental controls, user settings, and age ratings as more reliable methods to protect children.
Robin Tombs, CEO of identity platform provider Yoti, welcomed the guidance but urged Ofcom to provide more clarity. “By not listing a definitive set of approved methods, platforms are left uncertain about the suitability of alternatives they might consider,” Tombs said. He added that safeguards like liveness detection must be included to prevent children from spoofing age checks.
Concerns Over Future Implications
Critics also worry the new rules might pave the way for increased government surveillance under the guise of online safety. Some fear the measures could be the first step toward a broader digital ID system that could undermine user privacy while failing to protect children effectively.
This isn’t the UK’s first attempt at tackling online age verification. Previous efforts faltered over privacy concerns and technical challenges. With Ofcom now steering the ship, the success of this initiative remains to be seen, as does its impact on privacy, civil liberties, and the internet landscape.