UNICEF warns 1.2 million children hit by deepfake abuse

By United Nations Children's Fund

At least 1.2 million children have had their images manipulated into sexually explicit deepfakes in the past year, the United Nations Children's Fund (UNICEF) warned in a statement, citing new evidence of a proliferation of AI-generated sexualized images of children and a shortage of laws to stop it.

“The harm from deepfake abuse is real and urgent,” the UN agency said. “Children cannot wait for the law to catch up.”

The findings come from a study of 11 countries conducted by UNICEF, INTERPOL, and ECPAT, a global network working to end the sexual exploitation of children. In some of the countries surveyed, the figure represents one in 25 children, roughly one child in a typical classroom. Deepfakes are images, videos, or audio generated or manipulated with AI to look real, and they are increasingly being used to produce sexualized content involving children, including through "nudification," where AI tools strip or alter clothing in photos to create fake nude or sexualized images.

"When a child's image or identity is used, that child is directly victimized," UNICEF said. "Even without an identifiable victim, AI-generated child sexual abuse material normalizes the sexual exploitation of children, fuels demand for abusive content, and presents significant challenges for law enforcement in identifying and protecting children who need help. Deepfake abuse is abuse, and there is nothing fake about the harm it causes."

UNICEF welcomed efforts by some AI developers who are implementing "safety-by-design" approaches and robust guardrails to prevent misuse of their systems. However, the response so far is patchy, and too many AI models are still being developed without adequate safeguards. The risks grow when generative AI tools are embedded directly into social media platforms, where manipulated images can spread rapidly. Children themselves are acutely aware of the danger: in some of the countries studied, up to two-thirds said they worry that AI could be used to create fake sexual images or videos of them.

To address this fast-growing threat, UNICEF issued its Guidance on AI and Children 3.0 in December, with recommendations for policies and systems that uphold child rights. The agency is also calling for immediate action: governments should expand definitions of child sexual abuse material to include AI-generated content and criminalize its creation, procurement, possession, and distribution; AI developers should adopt safety-by-design approaches and robust guardrails; and digital companies should prevent the circulation of AI-generated child sexual abuse material, not merely remove it, strengthening content moderation through investment in detection technologies.