John Naughton 

Sam Altman was the trusted face of AI. His firm, though, is much more complex

The conflicts of interest built into OpenAI’s corporate structure may be a bigger story than the loss of its leader

Altman’s departure caused a wave of excited, if not entirely well-informed, speculation in the tech commentariat. Photograph: Eric Risberg/AP

The news on Friday that Sam Altman, the chief executive of OpenAI, had been abruptly sacked by the company’s board came as a shock to the tech industry.

“Mr Altman’s departure,” said the ponderous announcement, “follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

Given that, ever since ChatGPT took the world by storm last November, Altman has been the photogenic poster-boy for generative AI – the darling of the mainstream media and an honoured invitee to the corridors of western power – news of his sudden fall from grace launched a torrent of excited speculation in the tech commentariat. Nobody, it seems, actually knew anything, but there was a consensus that Something Was Up.

No doubt we will get to the bottom of the mystery in due course, but for now a more productive line of inquiry might be into the corporate history of OpenAI. For if one wanted to design an ownership structure with conflicts of interest and of responsibility built into it, its byzantine arrangements would be hard to beat.

It was set up in 2015 as a non-profit organisation whose mission was “to ensure that artificial general intelligence benefits all of humanity”.

Among the founders with this benevolent interest in humanity were Altman and Reid Hoffman, the co-founder of LinkedIn, but they also included Amazon Web Services, Indian IT firm Infosys, rightwing tech billionaire Peter Thiel and Elon Musk.

The founders collectively pledged $1bn to the venture, though it’s not clear whether they all actually delivered on the pledge.

In 2019, OpenAI “transitioned” into two organisations: a “capped-profit” organisation called OpenAI Global LLC (in which the return on any investment was capped at 100 times the original amount); and OpenAI Inc, the non-profit sole controlling shareholder in OpenAI Global LLC. Which means that the profit-making business owes a fiduciary responsibility to its non-profit owner.

You can see where this is heading. In 2019 and 2021, Microsoft invested substantial sums (more than $1bn) in OpenAI Global to cement a “partnership” between the two companies, and this year Microsoft extended the partnership with an investment of about $10bn (possibly consisting mostly of free access to its Azure cloud computing system). It is believed that Microsoft’s willingness to support OpenAI’s mission to provide safe and beneficial artificial general intelligence (AGI) played a “crucial role” in their partnership.

But – there’s always a “but” in these things – if OpenAI were to succeed in actually building an artificial general intelligence (that is, a machine with human-level capability), all bets would be off.

Why? Because, as Benson Mawira writes on the technology news site Cryptopolitan, “once AGI is achieved, OpenAI’s commercial agreements, including intellectual property licences, will no longer apply to post-AGI technology. Microsoft’s investment may face uncertainty if OpenAI’s board decides to prioritise non-profit interests over for-profit ones.”

And who decides if AGI has arrived? Why, none other than the six members of the non-profit’s board.

Now, of course it is possible that Altman’s abrupt defenestration had nothing to do with AGI or this rat’s nest of conflicting interests. Only time will tell. But whatever the explanation turns out to be, two sobering thoughts remain.

One is that, as the distinguished AI expert Gary Marcus put it: “The fact that the governance of one of the most visible AI companies in the world can change literally overnight should be a reminder that we can’t make our judgments about a company’s trustworthiness based simply on a vibe about their CEO.”

The second is that there’s no point in asking ChatGPT itself for an explanation for Altman’s dismissal. “As of my last update in January 2022,” the chatbot replies, “Sam Altman has not been fired from any prominent position … If this is recent news … you might want to check the latest news sources for any recent developments.”

Clearly, artificial general intelligence is still some way off.
