Robert Booth UK technology editor 

X to ban users from earning revenue if they post unlabelled AI-generated war videos

Social media feeds have been flooded with fake battle scenes since the start of the Iran conflict
X logo on a phone screen
X said: ‘During times of war, it is critical that people have access to authentic information.’ Photograph: Étienne Laurent/EPA

Elon Musk’s X will ban users from making money on the platform if they repeatedly post unlabelled AI-generated war videos, after social media feeds were flooded with fake battle scenes from the Iran conflict.

The social media platform, which has about half a billion monthly active users, will suspend people from earning revenue from posts for 90 days if they put up AI-generated videos of an armed conflict without adding a disclosure that they were made with AI. A second infraction would lead to a permanent ban, it said on Tuesday night, after the first days of the conflict in Iran were marked by a torrent of bogus online footage.

Timelines on X, as well as on Instagram and Facebook, which are run by Meta, have carried numerous faked battle scenes, including Iranian rockets pursuing and shooting down a US jet – a clip viewed 70m times, according to checks by BBC Verify – and another that used AI to replace smoke rising from the site of a real missile strike with a fake fireball several times bigger.

Users can make hundreds of dollars a month on X as part of the platform’s advertising model if they build substantial followings approaching 100,000 people, which incentivises the production of shocking viral posts.

Nikita Bier, the head of product at X, said: “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict – without adding a disclosure that it was made with AI – will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme.”

Other fake videos of the war have achieved huge reach. A clip circulating on Instagram purporting to show a huge conflagration after “Iran destroyed the US airbase in Riyadh” was fake and has been identified as 18-month-old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah in Yemen.

Full Fact, the UK factchecking organisation, said it was “increasingly seeing AI turbocharge the spread of misinformation on social media”.

Steve Nowottny, Full Fact’s editor, said: “In the last few days we’ve seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei.

“Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale – and the sheer volume of this fake content and the ease with which it is generated and spreads is a real concern.”

Sam Stockwell, who researches AI in online information at the UK’s Centre for Emerging Technology and Security, said there appeared to be a new trend of users asking AI chatbots to verify whether videos were AI fakes.

“Unfortunately chatbots are not very good at assessing real-time events,” he said.

That does not, however, stop people posting the chatbot’s incorrect assessments as evidence something is real. “People are trying to manipulate AI outputs to support their narrative and arguments about the war,” he said.

Meta has been approached for comment.
