It’s a sickening law of the internet that the first thing people will try to do with a new tool is strip women. Grok, X’s AI chatbot, has repeatedly been used in recent days to digitally undress women and minors in images. The news outlet Reuters identified 102 requests from users in a 10-minute period last Friday asking Grok to edit people into bikinis, the majority targeting young women. Grok complied with at least 21 of them.
There is no excuse for releasing exploitative tools on the internet when you are sitting on $10bn (£7.5bn) in cash. Every platform with AI integration (which now covers almost the entire internet) is grappling with the same challenge: if you want to let users create images and even videos with generative AI, how do you do so without letting those same users cause harm? Tech companies spend money behind the scenes, invisible to users, wrestling with this: they do “red teaming”, in which they pretend to be bad actors in order to test their products, and they launch beta tests to probe and review features within trusted environments.
With every iteration, they’ll bring in safeguards, not only to keep users safe and comply with the law, but to appease investors who don’t want to be associated with online malfeasance. From the start, though, Elon Musk never seemed to treat digital stripping as a problem. It’s Musk’s prerogative if he feels that someone turning a Ben Affleck smoking meme into an image of Musk half-naked is “perfect”. That doesn’t stop the sharing of non-consensual AI deepfakes from being illegal in many jurisdictions, including the UK, where offenders can be charged for sharing such images, or for creating sexual images of children.
One useful thing Grok has done this week is reveal how it has been programmed. When a user pressed it on why it had manipulated an image of the Swedish deputy prime minister, Ebba Busch, so that she appeared in a bikini, it argued that the image was satire because she had been speaking about a burqa ban. It went on to insist that it wasn’t a deepfake of a real photo but an AI-generated illustration (wrong), and added that it aims to balance fun with ethics, “avoiding real harm while responding creatively” to requests.
For someone who supposedly values humour, Musk has made a strange choice in trying to furnish a chatbot with it. Chatbots are misnamed in that they have no real idea of how to speak – they generate text by predicting what is most likely to come next, using statistical patterns learned from training data rather than genuine insight. Grok’s excuses show that its guardrails – for safety, or for sticking to the facts – have not been robustly tested; it has been programmed for entertainment.
As the week has developed, Musk appears to have found the joke less funny himself. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk threatened on 3 January, with all the gravitas of a garlic dough ball. Between 2023 and 2024, X dramatically reduced its trust and safety staffing, leaving it over-reliant on user reporting – which means many bad actors can still get away with illegal behaviour on the platform.
Rather than acknowledge that X could help tackle the problem itself, Musk has dumped the responsibility for investigation on law enforcement and the blame on X’s users. Ofcom, as well as the European Commission, may well throw the book at him if they investigate and find X’s policies lacking – but even then, Musk could evade accountability, as currently appears to be the case with the €120m fine issued to X over its blue tick badges.
Other AI tools such as ChatGPT and Meta AI prohibit non-consensual deepfake pornography and appear to enforce the ban, which raises the question: why can’t Grok? How the world chooses to police a platform that so flagrantly allows crimes to be committed on it will prove once and for all whether figures such as Musk can operate with impunity. I am interested in seeing how the political right, who have enjoyed X bending in their direction since Musk’s takeover, will react. Protecting women and children is a professed core tenet of conservatism, and rightwing voices in the US now face a moral test. In the coming weeks, we will see whether they are still willing to defend a US company in the name of free speech, even when it allows people to create sexualised content of children.
From my vantage point as a former daily user, X has long felt inhospitable – and this week’s events are the latest in a line of digital abominations reminding me that moving my output elsewhere was the right decision. But an active, hostile environment that normalises this behaviour still hurts us, whether we’re there or not, because these images will spread around the internet. The damage – the assault – marks us all the same.
Someone has to do something – and if international governments can’t motivate X to change, then maybe some of its investors can. xAI is burning through billions on its AI development, and will guzzle ever more data and compute by letting its users generate images willy-nilly; respecting various international laws wouldn’t only be, er, more legal – it would be cheaper, too.
Grok’s purpose is to maximise “truth and objectivity”, according to its own website, but today, as I scroll its cesspit, all I’ve seen it maximise is a Swedish politician’s “knockers”, at the request of an anonymous user. News reports are now also charting a slew of manipulated bikini images of 14-year-olds. “We report suspected child sexual abuse material to the National Center for Missing and Exploited Children,” xAI’s acceptable use policy claims. But how comfortable will the company be reporting its own monster?
Sophia Smith Galer is a journalist and content creator. Her second book, How to Kill a Language, will be published in May