Robert Booth and Amelia Gentleman 

Wave of Grok AI fake images of women and girls appalling, says UK minister

Liz Kendall calls on X to ‘deal with this urgently’ while expert criticises ‘worryingly slow’ government response
  
  

Ofcom has said it is aware of serious concerns raised about Grok creating undressed images of people. Photograph: Future Publishing/Getty Images

The UK technology secretary has called a wave of images generated by Elon Musk’s Grok AI, showing women and children with their clothes digitally removed, “appalling and unacceptable in decent society”.

After thousands of intimate deepfakes circulated online, Liz Kendall said X, Musk’s social media platform, needed to “deal with this urgently” and she backed the UK regulator Ofcom to “take any enforcement action it deems necessary”.

“We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls,” she said. “Make no mistake, the UK will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out.”

Her comments came amid warnings that the Online Safety Act, which aims to tackle online harms and protect children, needs to be urgently toughened up despite pressure from the Trump administration to water it down.

One expert criticised the “tennis game” between platforms such as X and UK regulators when problems arose and called the government response “worryingly slow”.

Jessaline Caine, a survivor of child sexual abuse, called the government’s response “spineless” and told the Guardian that on Tuesday morning the chatbot was still complying with requests to manipulate an image of her as a three-year-old so that she appeared in a string bikini. Identical requests she made to ChatGPT and Gemini were rejected.

“Other platforms have these safeguards so why does Grok allow the creation of these images?” she said. “The images I’ve seen are so vile and degrading. The government has been very reactive. These AI tools need better regulation.”

On Monday, Ofcom said it was aware of serious concerns raised about Grok creating undressed images of people and sexualised images of children. It said it had contacted X and xAI “to understand what steps they have taken to comply with their legal duties to protect users in the UK” and would assess the need for an investigation based on the company’s response.

The pressure is growing on ministers to take a tougher line. The crossbench peer and online child safety campaigner Beeban Kidron has urged the government to “show some backbone” and called for the Online Safety Act regime to be “reassessed so it is swifter and has more teeth”.

Speaking about X, she said: “If any other consumer product caused this level of harm, it would already have been recalled.”

She said Ofcom needed to act “in days not years” and called for users to walk “away from products that show no serious intent to prevent harm to children, women and democracy”.

Ofcom has powers to fine tech platforms up to £18m or 10% of their qualifying global revenues, whichever is higher. The biggest penalty to date came last month when a porn provider that failed to carry out mandatory age checks was fined £1m.

Last month, ministers promised new laws to ban “nudification” tools, which use generative AI to turn images of real people into fake nude pictures and videos without their permission. It remains unclear when that ban will come into force.

Sarah Smith, the innovation lead at the Lucy Faithfull Foundation, a charity that works to prevent child abuse, called for X to immediately disable Grok’s image-editing features “until robust safeguards are in place to stop this from happening again”.

X did not respond to a request for comment on Kendall’s remarks. It said on Monday: “We take action against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary.”

Jake Moore, a global cybersecurity adviser at the security software firm ESET, criticised the “tennis game” between platforms such as X and UK regulators and called the government response “worryingly slow”.

He said that as AI increasingly allowed faked images to be rendered as longer videos, the consequences for people’s lives would only become worse.

“It is unbelievable that this is able to occur in 2026,” he said. “We have to move forward with extreme regulation. Any grey area we offer will be abused. The government is not understanding the bigger picture here.”

It is already illegal to create or share non-consensual intimate images or child sexual abuse material, including sexual deepfakes created with AI. Fake images of people in bikinis may qualify as intimate images, as the definition in law includes the person having naked breasts, buttocks or genitals or having those parts only covered by underwear. Indecent images include those depicting children in erotic poses without sexual activity.

Lady Kidron said AI-generated pictures of children in bikinis may not be child sexual abuse material but they were contemptuous of children’s privacy and agency.

“We cannot live in a world in which a kid can’t post a picture of winning a race unless they are willing to be sexualised and humiliated,” she said.

 
