Child Online Safety in the Age of AI

The internet has always posed risks for children, but the rise of AI-generated child sexual abuse material (CSAM) is creating a new, deeply troubling frontier. Experts at the Internet Watch Foundation (IWF) warn that advances in artificial intelligence are not only making abuse imagery more realistic and widespread, but are also putting real children in greater danger than ever before.

The New Threat: AI-Generated Abuse Images

  • AI can now create images and videos of child abuse that are nearly indistinguishable from genuine photographs and footage.

  • These images are appearing on both the dark web and open platforms, making them accessible to a wider audience—including children themselves.

  • In 2024, reports of AI-generated child sexual abuse imagery in the UK surged by 380%, with the IWF verifying thousands of illegal images and several videos.

Why AI Puts Real Child Victims at Greater Risk

1. Realistic Fakes Distract from Real Victims

"The risk is that we and law enforcement end up trying to save children that aren’t real or fail to act because they assume the child is AI-generated."

— Dan Sexton, IWF Chief Technology Officer

  • Law enforcement and child protection agencies are struggling to distinguish imagery of real abuse from AI-generated content. This can lead to resources being wasted on "fake" cases while real children in danger are overlooked.

  • AI is also being used to create new abuse imagery from photographs of real victims, further traumatising survivors whose abuse is continually recirculated online.

2. Perpetual Victimisation

  • AI tools can "de-age" celebrities and digitally strip the clothing from ordinary photos of children, inserting them into abusive scenarios.

  • Survivors report ongoing trauma as images of their abuse are manipulated and shared in new forms, making it difficult to heal and move on.

3. Normalisation and Escalation of Abuse

  • Even if some AI-generated images do not depict real children, they reinforce harmful sexual fantasies and normalise exploitation.

  • Research suggests that viewing such material increases the risk that offenders will attempt hands-on abuse.

The Scale of the Problem

  • In a single dark web forum, over 3,500 AI-generated abuse images were found in one month, with a 10% rise in the most severe (Category A) material.

  • Many images are so convincing that even trained analysts struggle to tell if a real child is at risk.

The Bottom Line

AI-generated child sexual abuse images are not victimless. They perpetuate harm, distract from real victims, and make it harder for authorities to protect children. As technology advances, so must our collective response—through stronger laws, smarter tech, and informed communities—to ensure children are safe online.


At Big Sister, we advocate for children's safety online. Our app marks a positive change in the way children are protected online, using flags and alerts to warn parents about dangerous content without undermining the trust and privacy between children and adults.

Find out more about how to protect your children online without breaking their trust in our latest blog here.

Or sign up to our waitlist to be the first to know when the app launches and get access to our early bird discount.
