Dan Milmo Global technology editor 

OpenAI ‘was working on advanced model so powerful it alarmed staff’

Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
  
  

OpenAI CEO and founder Sam Altman has been reinstated as boss. Photograph: Jaap Arriens/NurPhoto/Shutterstock

Before Sam Altman’s sacking, OpenAI was reportedly working on an advanced system so powerful that it caused safety concerns among staff at the company.

The artificial intelligence model triggered such alarm among some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal, warning it could threaten humanity, Reuters reported.

The model, called Q* – and pronounced “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems not present in a model’s training data would be viewed as a significant development in AI.

The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back. Altman also had the support of OpenAI’s biggest investor, Microsoft.

Many experts are concerned that companies such as OpenAI are moving too fast towards developing artificial general intelligence (AGI), the term for a system that can perform a wide variety of tasks at or above human levels of intelligence – and which could, in theory, evade human control.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said the ability to solve maths problems not included in a model’s training set would be a significant development.

“A lot of generative AI regurgitates or reshapes existing knowledge, whether text, images or maths, including libraries of known maths solutions. If you can create an AI that can solve a problem where you know it hasn’t already seen the solution somewhere in its vast training sets, then that’s a big deal, even if the maths is relatively simple. Solving complex maths, unseen, would be even more exciting.”

Speaking on Thursday last week, the day before his surprise sacking, Altman indicated that the company behind ChatGPT had made another breakthrough.

In an appearance at the Asia-Pacific Economic Cooperation (Apec) summit, he said: “Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honour of a lifetime.”

OpenAI was founded as a nonprofit venture with a board that governs a commercial subsidiary, run by Altman. Microsoft is the biggest investor in the for-profit business. As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of software company Salesforce.

The ChatGPT developer states that it was established with the goal of developing “safe and beneficial artificial general intelligence for the benefit of humanity” and that the for-profit company would be “legally bound to pursue the nonprofit’s mission”.

The emphasis on safety at the nonprofit led to speculation that Altman had been sacked for endangering the company’s core mission. However, his brief successor as interim chief executive, Emmett Shear, wrote this week that the board “did *not* remove Sam over any specific disagreement on safety”.

OpenAI has been approached for comment.

• This article was amended on 27 November 2023 to replace a quote from Andrew Rogoyski with one that sets out more clearly the significance of an AI potentially being able to solve a problem it had not seen in its training datasets.

 
