Aisha Down 

Leading AI expert delays timeline for its possible destruction of humanity

Former OpenAI employee Daniel Kokotajlo says progress to AGI is ‘somewhat slower’ than first predicted
  
  

Abstract image of a woman's hand accessing the digital world from the real world. Photograph: Yuichiro Chino/Getty Images

A leading artificial intelligence expert has rolled back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus speed their own development toward superintelligence.

Daniel Kokotajlo, a former employee of OpenAI, sparked an energetic debate in April by releasing AI 2027, a scenario that envisions unchecked AI development leading to the creation of a superintelligence, which – after outfoxing world leaders – destroys humanity.

The scenario rapidly won admirers and detractors. The US vice-president, JD Vance, appeared to reference AI 2027 in an interview last May when discussing the US’s artificial intelligence arms race with China. Gary Marcus, an emeritus professor of neuroscience at New York University, called the piece a “work of fiction” and various of its conclusions “pure science fiction mumbo jumbo”.

Timelines for transformative artificial intelligence – sometimes called AGI (artificial general intelligence), or AI capable of replacing humans at most cognitive tasks – have become a fixture in communities devoted to AI safety. The release of ChatGPT in 2022 vastly accelerated these timelines, with officials and experts predicting the arrival of AGI within decades or years.

Kokotajlo and his team named 2027 as the year AI would achieve “fully autonomous coding”, although they said this was a “most likely” guess and that some among them had longer timelines. Now, some doubts appear to be surfacing about the imminence of AGI, and whether the term is meaningful in the first place.

“A lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is,” said Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report.

“For a scenario like AI 2027 to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities. I think people are starting to realise the enormous inertia in the real world that will delay complete societal change.”

“The term AGI made sense from far away, when AI systems were very narrow – playing chess, and playing Go,” said Henry Papadatos, the executive director of the French AI nonprofit SaferAI. “Now we have systems that are quite general already and the term does not mean as much.”

Kokotajlo’s AI 2027 relies on the idea that AI agents will fully automate coding and AI R&D by 2027, leading to an “intelligence explosion” in which AI agents create smarter and smarter versions of themselves, and then – in one possible ending – kill all humans by mid-2030 in order to make room for more solar panels and datacentres.

However, in their update, Kokotajlo and his co-authors revise their expectations for when AI might be able to code autonomously, putting this as likely to happen in the early 2030s, as opposed to 2027. The new forecast sets 2034 as the new horizon for “superintelligence” and does not contain a guess for when AI may destroy humanity.

“Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still,” wrote Kokotajlo in a post on X.

Creating AIs that can do AI research is still firmly an aim of leading AI companies. The OpenAI CEO, Sam Altman, said in October that having an automated AI researcher by March 2028 was an “internal goal” of his company, but added: “We may totally fail at this goal.”

Andrea Castagna, a Brussels-based AI policy researcher, said there were a number of complexities that dramatic AGI timelines do not address. “The fact that you have a superintelligent computer focused on military activity doesn’t mean you can integrate it into the strategic documents we have compiled for the last 20 years.

“The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that.”

 
