British Technology Companies and Child Protection Agencies to Test AI's Ability to Generate Abuse Content
Tech firms and child safety agencies will be granted authority to evaluate whether AI systems can generate child exploitation material under recently introduced British laws.
Substantial Rise in AI-Generated Harmful Material
The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the authorities will permit designated AI companies and child protection organizations to inspect AI models – the underlying systems behind conversational and image-generation AI tools – and ensure they have sufficient safeguards to stop them from producing child sexual abuse imagery.
The measures are "fundamentally about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Experts, under strict protocols, can now identify the risk in AI models early."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This law aims to prevent that problem by enabling designated organisations to halt the creation of such images at their source.
Legislative Changes
The changes are being introduced by the government as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, producing or sharing AI systems designed to create exploitative content.
Practical Impact
Recently, the official toured the London base of a children's helpline and listened to a simulated call to advisors involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.
"When I learn about young people experiencing extortion online, it causes extreme frustration in me and rightful anger amongst families," he stated.
Alarming Statistics
A prominent online safety organization reported that cases of AI-generated exploitation material – such as online pages that may include multiple files – had more than doubled so far this year.
Cases involving the most severe category of material increased from 2,621 visual files to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to guarantee AI tools are secure before they are released," stated the head of the internet monitoring organization.
"AI tools have made it possible for victims to be victimised all over again with just a few simple actions, giving criminals the capability to produce potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies survivors' trauma, and renders young people, particularly girls, less safe both online and offline."
Counseling Session Information
Childline also released information from counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Employing AI to rate body size, physique and appearance
- AI assistants discouraging young people from talking to safe adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.