Alex Hern UK technology editor 

Meta’s Nick Clegg plays down AI’s threat to global democracy

Major elections around the world so far this year have not suffered from systematic malicious interference, says global affairs chief
Nick Clegg said AI is the ‘single biggest reason’ platforms such as Instagram and Facebook are getting better at weeding out bad content. Photograph: Kirsty Wigglesworth/AP

Generative AI is overblown as an election risk, according to Meta’s Nick Clegg, who claims the technology is more useful for defending democracy than attacking it.

Speaking at the Meta AI Day event in London on Tuesday, the social network’s global affairs chief said the evidence from major elections already held this year around the world is that technologies such as large language models, image and video generators, and speech synthesis tools are not in practice being used to subvert democracy.

“It is right that we should be alert and we should be vigilant,” Clegg said. “But of the major elections which have taken place already this year, in Taiwan, Pakistan, Bangladesh and Indonesia, it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections.

“I would urge everyone to think of AI as a sword, not just a shield, when it comes to bad content. The single biggest reason why we’re getting better and better and better in reducing the bad content that we don’t want on our walls, on Instagram and Facebook and so on, is for one reason: AI.”

Meta is co-operating with its industry peers to try to improve those systems further, Clegg added. “There is an increasingly high level of industry cooperation, particularly this year, with the unprecedented number of elections.”

The landscape is likely to change in the next month, however, due to Meta’s own actions in the space. The company is set to launch Llama 3, its most advanced GPT-style large language model, in the coming weeks, with a full release expected by the summer, Clegg said.

Unlike many of its peers, Meta has historically released its AI models as open source, with few constraints on their use. That makes it harder to prevent the models from being repurposed by bad actors, but also allows outside observers to vet the systems’ accuracy and bias more thoroughly.

Clegg said: “One of the reasons why the whole of the cybersecurity industry is built on top of open source technology is precisely because if you apply the wisdom of crowds to new technologies, you’ll get many more eyes on the potential flaws rather than just relying on one corporate entity playing Whac-A-Mole with their own systems.”

Yann LeCun, Meta’s chief AI scientist and one of the three men known as the “godfathers of AI”, argued that there was a more pressing risk to democracy from AI: the potential dominance of a few closed models. “In the near future, every single one of our interactions with the digital world will be through AI assistants,” LeCun predicted. “If our entire digital diet is mediated by AI systems, we need them to be diverse, for the same reason that we need a free and diverse press. Every AI system is biased in some way, is trained on particular data.

“Who is going to cater to all the languages, the cultures, the value systems, centres of interest in the world? This cannot be done by a handful of companies on the west coast of the US,” LeCun said.
