AI-generated child sexual abuse images increasingly found on public internet, warns IWF

The Internet Watch Foundation (IWF) has warned that AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet.

The charity revealed that 99 per cent of this AI content is located on publicly available areas of the internet and is not hidden on the dark web.

In the past six months, IWF analysts have seen a six per cent increase in confirmed reports containing AI-generated child sexual abuse material compared with the previous 12 months.
Almost 80 per cent of the reports have come from members of the public who have stumbled across the criminal imagery on sites such as forums or AI galleries. The remainder were actioned by IWF analysts through proactive searching.

Over half of the AI-generated content found in the past six months was hosted on servers in two countries, with 36 per cent of content hosted in Russia and 22 per cent hosted in the US.
Servers in Japan hosted 11 per cent of the content found whilst the Netherlands accounted for eight per cent.

The IWF said that many of the images and videos of children being hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children. Under UK law, such images are regarded as criminal content in the same way as ‘traditional’ child sexual abuse material.

The charity added that it traces where child sexual abuse content is hosted so that analysts can get it removed.

Addresses of webpages containing AI-generated child sexual abuse images are uploaded to the IWF’s URL list, which is shared with the tech industry to block the sites and prevent people from being able to access or see them.

The AI images are also hashed – given a unique code similar to a digital fingerprint – and tagged as AI on a hash list of more than two million images which can be used by law enforcement in their investigations.
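The fingerprint-and-match idea described above can be illustrated in miniature. This is only a sketch: the IWF’s real system likely uses perceptual hashing robust to resizing and re-encoding, whereas this example uses a plain SHA-256 digest, and the `hash_list` entries here are entirely hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A SHA-256 digest acts as a unique "digital fingerprint" of the bytes
    return hashlib.sha256(data).hexdigest()

# Hypothetical hash list: fingerprints mapped to tags such as "AI"
hash_list = {
    fingerprint(b"example-image-bytes"): {"tag": "AI"},
}

def check(data: bytes):
    # Return the tag record if the content matches a listed hash, else None
    return hash_list.get(fingerprint(data))
```

In practice a service would compute the fingerprint of incoming content and look it up against the shared list, so matches can be blocked or flagged without the imagery itself ever being redistributed.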

“People can be under no illusion that AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online,” said Derek Ray-Hill, interim chief executive officer at the IWF. “To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.”

In August this year, the IWF warned that there is nothing to prevent child sexual abuse materials being sent via encrypted platforms such as WhatsApp.

Following the news that BBC presenter Huw Edwards had viewed indecent material, the charity warned that these images could still spread “today, tomorrow and the next day” via the Meta-owned messaging service.
