Grok AI generated about 3m sexualised images in less than two weeks, including 23,000 that appear to depict children, according to researchers who said it “became an industrial-scale machine for the production of sexual abuse material”.
The estimate was made by the Center for Countering Digital Hate (CCDH) after Elon Musk’s AI image generation tool sparked international outrage by allowing users to upload photographs of strangers and celebrities, digitally strip them to their underwear or dress them in bikinis, place them in provocative poses and post the images on X.
The trend went viral over the new year, peaking on 2 January with 199,612 individual requests, according to analysis conducted by Peryton Intelligence, a digital intelligence company specialising in online hate.
A fuller assessment of the output from the feature, from its launch on 29 December 2025 until 8 January 2026, has now been made by the CCDH. It suggests the impact of the technology may have been broader than previously thought. Public figures identified in sexualised images it analysed include Selena Gomez, Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice, Nicki Minaj, Christina Hendricks, Millie Bobby Brown, the Swedish deputy prime minister Ebba Busch, and the former US vice-president Kamala Harris.
The feature was restricted to paid users on 9 January, and further restrictions followed after the UK prime minister, Keir Starmer, called the situation “disgusting” and “shameful”. Other countries, including Indonesia and Malaysia, announced blocks on the AI tool.
The CCDH estimated that over the 11-day period, Grok was helping to create sexualised images of children every 41 seconds. These included a selfie uploaded by a schoolgirl that Grok undressed, turning a “before school selfie” into an image of her in a bikini.
“What we found was clear and disturbing: in that period Grok became an industrial-scale machine for the production of sexual abuse material,” said Imran Ahmed, CCDH’s chief executive. “Stripping a woman without their permission is sexual abuse. Throughout that period Elon was hyping the product even when it was clear to the world it was being used in this way. What Elon was ginning up was controversy, eyeballs, engagement and users. It was deeply disturbing.”
He added: “This has become a standard playbook for Silicon Valley, and in particular for social media and AI platforms. The incentives are all misaligned. They profit from this outrage. It’s not about Musk personally. This is about a system [with] perverse incentives and no minimum safeguards prescribed in law. And until regulators and lawmakers do their jobs and create a minimum expectation of safety, this will continue to happen.”
On 14 January, X announced it had stopped its Grok feature from editing pictures of real people to show them in revealing clothes, including for premium subscribers.
X referred to its statement from last week, which said: “We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.
“We take action to remove high-priority violative content, including child sexual abuse material and non-consensual nudity, taking appropriate action against accounts that violate our X rules. We also report accounts seeking child sexual exploitation materials to law enforcement authorities as necessary.”