The AI-Generated Child Abuse Nightmare Is Here

Thousands of child abuse images are being created with AI. New images of old victims are appearing, as criminals trade datasets.

A horrific new era of ultrarealistic, AI-generated child sexual abuse images is now underway, experts warn. Offenders are using downloadable, open source generative AI models, which can produce images, to devastating effect. The technology is being used to create hundreds of new images of children who have previously been abused. Offenders are sharing datasets of abuse images that can be used to customize AI models, and they’re starting to sell monthly subscriptions to AI-generated child sexual abuse material (CSAM).

The details of how the technology is being abused are included in a new, wide-ranging report released by the Internet Watch Foundation (IWF), a UK-based nonprofit that finds and removes abuse content from the web. In June, the IWF said it had found seven URLs on the open web containing suspected AI-made material. Now its investigation into one dark web CSAM forum, which provides a snapshot of how AI is being used, has found almost 3,000 AI-generated images that the IWF considers illegal under UK law.

The AI-generated images include the rape of babies and toddlers, famous preteen children being abused, and BDSM content featuring teenagers, according to the IWF research. “We’ve seen demands, discussions, and actual examples of child sex abuse material featuring celebrities,” says Dan Sexton, the chief technology officer at the IWF. Sometimes, Sexton says, celebrities are de-aged to look like children. In other instances, adult celebrities are portrayed as the ones abusing children.

While reports of AI-generated CSAM are still dwarfed by the number of real abuse images and videos found online, Sexton says he is alarmed by the speed of the development and the potential it creates for new kinds of abusive images. The findings are consistent with those of other groups investigating the spread of CSAM online. In one shared database, investigators around the world have flagged 13,500 AI-generated images of child sexual abuse and exploitation, Lloyd Richardson, the director of information technology at the Canadian Centre for Child Protection, tells WIRED. “That’s just the tip of the iceberg,” Richardson says.

A Realistic Nightmare

The current crop of AI image generators—capable of producing compelling art, realistic photographs, and outlandish designs—offers a new kind of creativity and promises to change art forever. They’ve also been used to create convincing fakes, like the Balenciaga Pope and faked images of Donald Trump being arrested. The systems are trained on huge volumes of existing images, often scraped from the web without permission, and allow images to be created from simple text prompts. Asking for an “elephant wearing a hat” will result in just that.

It’s not a surprise that offenders creating CSAM have adopted image-generation tools. “The way that these images are being generated is, typically, they are using openly available software,” Sexton says. Offenders the IWF has observed frequently reference Stable Diffusion, an AI model made available by the UK-based firm Stability AI. The company did not respond to WIRED’s request for comment. In the second version of its software, released at the end of last year, the company changed its model to make it harder for people to create CSAM and other nude images.

Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”

Grasping the scale of the problem is challenging. Over the course of September, analysts at the IWF concentrated on one dark web CSAM forum, which the organization does not name, that generally focuses on “softcore imagery” and imagery of girls. Within a newer AI section of the forum, a total of 20,254 AI-generated images were posted during that month, researchers found. A team of 12 analysts at the organization spent 87.5 hours assessing 11,108 of these images.

In total, the IWF judged 2,978 images to be criminal. Most of these—2,562—were realistic enough to be treated the same way as non-AI CSAM. Half of the images were classed as Category C, meaning they are indecent, with 564 showing the most severe types of abuse. The images likely depicted children aged between 7 and 13, and 99.6 percent of them showed female children, the IWF says. (Of the thousands of noncriminal AI-generated images the researchers reviewed, most featured children but did not include sexual activity, the IWF says.)

“The scale at which such images can be created is worrisome,” says Nishant Vishwamitra, an assistant professor at the University of Texas at San Antonio who is working on the detection of deepfakes and AI CSAM images online. The IWF’s report notes that the organization is starting to see some creators of abusive content advertise image creation services—including making “bespoke” images and offering monthly subscriptions.

This may increase as the images continue to become more realistic. “Some of it is getting so good that it’s tricky for an analyst to discern whether or not it is in fact AI-generated,” says Richardson. The realism also presents potential problems for investigators, who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization’s new report, say the quality has improved quickly—although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. “I am also concerned that future images may be of such good quality that we won’t even notice,” says one unnamed analyst quoted in the report.

“I doubt anyone would suspect these aren’t actual photographs of an actual girl,” reads one comment posted to a forum by an offender and included in the IWF report. Another says: “It's been a few months since I've checked boy AI. My God it's gotten really good!”

Guardrails and Gaps

In many countries, the creation and sharing of AI CSAM can fall under existing child protection laws. “The possession of this material, as well as the spreading, viewing and creation, is illegal as well,” says Arda Gerkens, the president of the Authority for Online Terrorist and Child Pornographic Material, the Dutch regulator. Prosecutors in the US have called for Congress to strengthen laws relating to AI CSAM. More broadly, researchers have called for a multipronged approach to dealing with CSAM that’s shared online.

Tech companies and researchers are also looking at various techniques and measures to stop AI-generated CSAM from being created and to stop it from bleeding out of dark web forums onto the open internet. Gerkens says it is possible for tech companies creating AI models to build in safeguards, and that “all tech developers need to be aware of the possibility their tools will be abused.”

These measures include watermarking AI-generated images, building better detection tools, and detecting prompts that could be used to create AI CSAM. David Thiel, the chief technologist at the Stanford Internet Observatory, says Big Tech companies are looking to use machine learning models to help detect new AI-generated CSAM imagery that may be shared on their platforms, in addition to using existing tools.

Thiel, along with Melissa Stroebel and Rebecca Portnoff from the child protection group Thorn, recently published research on how AI CSAM could be reduced. The research says developers should remove harmful content from their training data, red-team their models to find ways they could be abused, include biases in the models that can stop them from producing child nudity or sexual content involving children, and be transparent about training data. Creators of open source models should evaluate the platforms their models can be downloaded from and remove access to historical models, the report says.

But many say such safety measures are coming too late: the technology is already being used to create harmful content. “Anything that you put after the fact is simply going to be a Band-Aid,” Richardson says. “We’re still dealing with the cleanup of people trying to be first to market with particular models,” Thiel adds.

Meanwhile, the technology continues to improve, and it’s likely that AI-generated videos will follow. “We never really acknowledged—what if someone could just install something on their computer in their home and create as many [photos] as they could fit on their hard drive? Completely new images of existing victims, new victims,” the IWF’s Sexton says. “There’s already so much content of children out there on the internet. That is not going to make it easier.”