AI-generated child pornography increasing at ‘chilling’ rate, as watchdog warns it is now becoming hard to spot


The amount of AI-generated child pornography found on the internet is increasing at a “chilling” rate, according to a national watchdog.

The Internet Watch Foundation deals with child pornography online, removing hundreds of thousands of images every year.

Now, it says artificial intelligence is making the work much harder.

“I find it really chilling as it feels like we are at a tipping point,” said “Jeff”, a senior analyst at the Internet Watch Foundation (IWF), who uses a fake name at work to protect his identity.

In the last six months, Jeff and his team have dealt with more AI-generated child pornography than in the whole of the preceding year, reporting a 6% increase in the amount of AI content.

A lot of the AI imagery they see of children being hurt and abused is disturbingly realistic.

“Whereas before we would be able to definitely tell what is an AI image, we’re reaching the point now where even a trained analyst […] would struggle to see whether it was real or not,” Jeff told Sky News.

To produce such realistic imagery, the software is trained on existing sexual abuse images, according to the IWF.

Image: The Internet Watch Foundation deals with child sexual abuse material online

“People can be under no illusion,” said Derek Ray-Hill, the IWF’s interim chief executive.

“AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.”

The IWF warns that almost all of this content is found not on the dark web but on publicly available areas of the internet.

“This new technology is transforming how child sexual abuse material is being produced,” said Professor Clare McGlynn, a legal expert who specialises in online abuse and pornography at Durham University.


She told Sky News it is “easy and straightforward” now to produce AI-generated child sexual abuse images and then advertise and share them online.

“Until now, it’s been easy to do without worrying about the police coming to prosecute you,” she said.

In the last year, a number of paedophiles have been charged after creating AI child pornography, including Neil Darlington, who used AI while trying to blackmail girls into sending him explicit images.


Creating explicit pictures of children is illegal, even if they are generated using AI, and IWF analysts work with police forces and tech providers to remove and trace images they find online.


Analysts upload URLs of webpages containing AI-generated child sexual abuse images to a list which is shared with the tech industry so it can block the sites.

The AI images are also given a unique code like a digital fingerprint so they can be automatically traced even if they are deleted and re-uploaded somewhere else.

More than half of the AI-generated content found by the IWF in the last six months was hosted on servers in Russia and the US, with a significant amount also found in Japan and the Netherlands.
