Aisha Down 

Latest ChatGPT model uses Elon Musk’s Grokipedia as source, tests reveal

Guardian found OpenAI’s platform cited Grokipedia on topics including Iran and Holocaust deniers
  
  

The ChatGPT logo displayed on a mobile phone and on a laptop screen in Liverpool
ChatGPT cited Grokipedia when repeating information that the Guardian has debunked. Photograph: Adam Vaughan/EPA

The latest model of ChatGPT has begun to cite Elon Musk’s Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.

In tests conducted by the Guardian, GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. These included queries about political structures in Iran, such as the salaries of the Basij paramilitary force and the ownership of the Mostazafan Foundation, and questions about the biography of Sir Richard Evans, a British historian who was an expert witness against the Holocaust denier David Irving in his libel trial.

Grokipedia, launched in October, is an AI-generated online encyclopedia that aims to compete with Wikipedia and has been criticised for propagating rightwing narratives on topics including gay marriage and the 6 January insurrection in the US. Unlike Wikipedia, it does not allow direct human editing; instead, an AI model writes the content and responds to requests for changes.

ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids epidemic – areas where Grokipedia has been widely reported to promote falsehoods. Instead, Grokipedia’s information filtered into the model’s responses when it was prompted about more obscure topics.

For instance, ChatGPT, citing Grokipedia, repeated stronger claims about the Iranian government’s links to MTN-Irancell than are found on Wikipedia – such as asserting that the company has links to the office of Iran’s supreme leader.

ChatGPT also cited Grokipedia when repeating information that the Guardian has debunked, namely details about Sir Richard Evans's work as an expert witness in David Irving's trial.

GPT-5.2 is not the only large language model (LLM) that appears to be citing Grokipedia; anecdotally, Anthropic’s Claude has also referenced Musk’s encyclopedia on topics from petroleum production to Scottish ales.

An OpenAI spokesperson said the model’s web search “aims to draw from a broad range of publicly available sources and viewpoints”.

“We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” they said, adding that the company had ongoing programmes to filter out low-credibility information and influence campaigns.

Anthropic did not respond to a request for comment.

But the fact that Grokipedia’s information is filtering – at times very subtly – into LLM responses is a concern for disinformation researchers. Last spring, security experts raised concerns that malign actors, including Russian propaganda networks, were churning out massive volumes of disinformation in an effort to seed AI models with lies, a process called “LLM grooming”.

In June, concerns were raised in the US Congress that Google’s Gemini repeated the Chinese government’s position on human rights abuses in Xinjiang and China’s Covid-19 policies.

Nina Jankowicz, a disinformation researcher who has worked on LLM grooming, said ChatGPT’s citing Grokipedia raised similar concerns. While Musk may not have intended to influence LLMs, Grokipedia entries she and colleagues had reviewed were “relying on sources that are untrustworthy at best, poorly sourced and deliberate disinformation at worst”, she said.

And the fact that LLMs cite sources such as Grokipedia or the Pravda network may, in turn, improve these sources’ credibility in the eyes of readers. “They might say, ‘oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they’ve vetted it’ – and they might go there and look for news about Ukraine,” said Jankowicz.

Bad information, once it has filtered into an AI chatbot, can be challenging to remove. Jankowicz recently found that a large news outlet had included a made-up quote from her in a story about disinformation. She wrote to the news outlet asking for the quote to be removed, and posted about the incident on social media.

The news outlet removed the quote. However, AI models continued to cite it as hers for some time afterwards. “Most people won’t do the work necessary to figure out where the truth actually lies,” she said.

When asked for comment, a spokesperson for xAI, the owner of Grokipedia, said: “Legacy media lies.”

 
