Yvette Cooper has announced the UK will be the first country in the world to make it illegal to own AI tools designed to generate images of child sexual abuse.
The Home Secretary said the new offence will be punishable by up to five years in prison.
Those found to possess AI ‘paedophile manuals’ could be jailed for up to three years under measures the Government will bring forward in the Crime and Policing Bill.
‘Nudeifying’ real-life images of children or stitching their faces onto existing images of abuse are among the ways AI is being used by abusers, the Government department said.
Fake images are also being used to blackmail children and force them to livestream further abuse.
Ministers believe that viewing such abuse online can lead offenders to commit abuse in person.
Ms Cooper said: ‘We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person.
‘This Government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats.’
The Bill will also introduce a specific offence for paedophiles who run websites to share child sex abuse, which could carry a 10-year prison sentence.
New powers will also be given to Border Force to prevent the spread of child sexual abuse images from abroad, including by allowing officers to call for individuals suspected of posing a risk to children to give up their phones for inspection.
Ms Cooper added: ‘These four new laws are bold measures designed to keep our children safe online as technologies evolve.’
The law reforms come after warnings by the Internet Watch Foundation (IWF) that sexual abuse images of children are increasingly being created.
IWF’s latest data shows reports of AI-generated child sexual abuse images have risen by 380%, with 245 confirmed reports in 2024, compared with 51 in 2023.
Each of these reports can contain thousands of images.
Some of the AI-generated content is so realistic that it can be difficult to tell real abuse from fake imagery, the charity said.
Derek Ray-Hill, interim chief executive of the IWF, said the steps ‘will have a concrete impact on online safety’.
He added: ‘The frightening speed with which AI imagery has become indistinguishable from photographic abuse has shown the need for legislation to keep pace with new technologies.
‘Children who have suffered sexual abuse in the past are now being made victims all over again, with images of their abuse being commodified to train AI models.
‘It is a nightmare scenario and any child can now be made a victim, with life-like images of them being sexually abused obtainable with only a few prompts, and a few clicks.’
In response to the Government’s announcement, Lynn Perry, chief executive of Barnardo’s, said: ‘We welcome the Government taking action to tackle the increase in AI-produced child sexual abuse imagery which normalises the abuse of children, putting more of them at risk, both on and offline.
‘It is vital that legislation keeps up with technological advances to prevent these horrific crimes.
‘Tech companies must make sure their platforms are safe for children.
‘They need to take action to introduce stronger safeguards, and Ofcom must ensure that the Online Safety Act is implemented effectively and robustly.’