Editorial 

The Guardian view on political deepfakes: voters can’t believe their own eyes

Disinformation campaigns and forgeries are an old problem – but AI poses new threats and needs a new response
  
  

‘Sadiq Khan … said that faked audio purporting to capture him making incendiary remarks about Remembrance weekend almost caused serious disorder’. Photograph: James Shaw/Rex/Shutterstock

One hundred years ago, just a few days before a British general election, the Daily Mail published a sensational letter purportedly written by Grigory Zinoviev, the head of the Communist International. The paper claimed that it revealed “a great Bolshevik plot to paralyse the British army and navy and to plunge the country into civil war”, and showed that the Labour government’s real masters were in Moscow. It probably contributed to Labour’s defeat. An official investigation in the 1990s found it was forged by an MI6 agent’s source.

Political disinformation campaigns bolstered by fakes are nothing new, and the digital revolution has turbocharged them. But AI is further democratising disinformation – making forgeries easier, cheaper and quicker to produce than ever. What once looked like an approaching threat is now an immediate one. Sadiq Khan, the mayor of London, said last week that faked audio purporting to capture him making incendiary remarks about Remembrance weekend almost caused serious disorder after it was widely shared by the far right. In January, a faked robocall, apparently from Joe Biden, urged Democrats not to vote in the New Hampshire primary. In a year that will see more than 40 national elections worldwide, the problem is likely to be more visible than ever before.

Forensic tools to detect faked or manipulated images and audio trail far behind the tools that create them. And even where material can be disproved, debunking takes time – time that may not exist when polls are only days away. The improving quality of deepfakes means that viewers and listeners are being asked to suspend trust in their own senses. That increases the likelihood that even when “recordings” or images have been comprehensively discredited, the correction will not fully dispel the beliefs and sentiments they generated. Some voters will simply refuse to believe that faked material is not genuine, insisting that it is the denials that are suspect. And a flood of fake content may simply drown out other issues for the public.

There is also a risk that genuine material becomes easier to discredit, or cannot be verified to a standard that major news organisations require. Donald Trump privately suggested that the 2016 Access Hollywood tape might not be authentic; how much easier it would now be for him to make that claim publicly and aggressively.

Politicians and officials are still getting to grips with the challenges posed by AI, including in politics. Regulation is an essential part of the response, though it must be done sensitively to protect civil rights. More companies are joining the Content Authenticity Initiative. OpenAI’s image generator has begun adding watermarks to image metadata. The unscrupulous will always find help somewhere, but since low cost and ease of creation are part of the problem, making production harder is part of the solution.

Forged material can spread quickly via social media to people who never read or watch the news. So tackling distribution is crucial, although this risks putting even more power in the hands of tech giants. Finally, it is essential to foster a discriminating attitude towards online material, just as towards spoken gossip and rumour. That means not only educating children to critically analyse what they see online, but also encouraging adults to do so – instead of ministers promoting conspiracy theories and undermining responsible media organisations for partisan purposes.

 
