Josh Taylor Technology reporter 

Child abuse material ‘systemic’ on Elon Musk’s X amid Grok scandal, Australian online safety regulator warned

Exclusive: eSafety commission pointed to Musk’s promise that ‘removing child exploitation is priority #1’ in letter obtained by Guardian Australia

Australia’s eSafety commissioner wrote to X in January after its AI chatbot Grok was used to generate sexualised images of women and children online. Photograph: Thomas Fuller/NurPhoto via Getty Images

The Australian online safety regulator warned Elon Musk’s X amid the Grok sexualised image generation scandal that it found child abuse material was “particularly systemic” on X and more accessible than on “any other mainstream service”, correspondence obtained by Guardian Australia reveals.

The eSafety commissioner wrote to X in January after its chatbot, Grok, was used to generate sexualised images of women and children online, which the prime minister, Anthony Albanese, described as “abhorrent”.

In the letter, obtained by Guardian Australia under freedom of information laws, eSafety’s general manager of regulatory operations, Heidi Snell, pointed to Musk’s promise when taking over the platform in 2022 that “removing child exploitation is priority #1”, but said “the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X”.


“eSafety has not identified CSEM to be as readily accessible on any other mainstream service,” Snell said.

eSafety found that while action by X to tackle bot accounts in October 2025 had reduced the use of some hashtags and terms previously commonly used to advertise CSEM, hashtags advertising the material remained prevalent.

“We are concerned that apparently innocuous hashtags appear to be coopted to advertise CSEM, particularly when used together,” Snell said. “eSafety found CSEM amongst other material using combinations of the hashtags: [redacted]. The fact that some of these terms have innocuous uses means users are likely to be inadvertently exposed to CSEM despite seeking to use the X service in a legitimate manner.”

Snell said eSafety would also consider issuing removal notices to X for images generated by Grok of people being “undressed”, subject to X’s response. The regulator also noted that analysis by AI Forensics suggested Grok was also generating terrorist content and posting it on X.

Guardian Australia was not provided with X's response in the FoI documents but, when approached for comment, X provided both its response to the letter and its submission to the third-party consultation conducted during processing of the FoI request.

X said in its letter it has a “zero tolerance policy for any form of child sexual exploitation on the X platform, including AI-generated content”, has automated systems in place to detect such material and responds to user reports of abuse content. X said more than 99% of CSEM-related accounts are removed proactively before reports are received.

X said it was aware bad actors might co-opt innocuous terms and it continually evaluates keywords and new terms to add to bot defences and search blocklists.

“The terms referenced in your letter are not able to be used as ‘strong’ signals of [child sexual abuse material] on the X platform,” X said.

“Your letter makes serious allegations … but fails to specify the URLs or account handles for the content.”

On Grok, X said that “robust incident protocols” were triggered during the declothing incident, with “swift action” taken in any reported instances of violative content.

The company said between 1 January 2026 and 15 January 2026 it removed 4,500 pieces of Grok-generated content such as images of women in bikinis, and permanently suspended more than 674 accounts for violating X’s child sexual exploitation policy. The suspensions are understood to cover more than just those that requested Grok to generate child abuse material.

X warned eSafety that not releasing its response to the letter in the FoI request “would present an incomplete and potentially misleading account of the regulatory exchange”.

A spokesperson for eSafety said the regulator “is continuing to assess and investigate X’s compliance” with industry codes and standards in relation to CSEM.

On Monday, xAI, the parent company of X, was sued in the US by three teenage girls, two of whom are minors, alleging that Grok used photos of them to produce and distribute child sexual abuse material.

Musk has previously denied that Grok has been used to produce child sexual abuse material, claiming in January that he was “not aware of any naked underage images generated by Grok”.

Despite Albanese’s strong condemnation of X in January, he and government officials continued to post on the website, amid a growing number of scandals on the platform, including Grok referring to itself as “MechaHitler” and the massive amount of misinformation on the platform after the Bondi terror attack.

The federal government has also maintained spending on the platform in the past few years.

Data obtained by Guardian Australia for the first two years since Musk took over, between November 2022 and November 2024, shows Australian taxpayers paid X $4.26m for ads run on the platform.

The finance department refused a Guardian Australia freedom of information request for the 2025 spending data.

• In Australia, children, young adults, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831, and adult survivors can contact Blue Knot Foundation on 1300 657 380. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child on 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support for adult survivors on 0808 801 0331. In the US, call or text the Childhelp abuse hotline on 800-422-4453. Other sources of help can be found at Child Helplines International
