Rafael Behr 

When the AI bubble bursts, humans will finally have their chance to take back control

The US economy is pumped up on tech-bro vanity. The inevitable correction should prompt a global conversation, says Guardian columnist Rafael Behr

Illustration by Ellie Foreman-Peck

If AI did not change your life in 2025, next year it will. That is one of few forecasts that can be made with confidence in unpredictable times. This is not an invitation to believe the hype about what the technology can do today, or may one day achieve. The hype doesn’t need your credence. It is puffed up enough on Silicon Valley finance to distort the global economy and fuel geopolitical rivalries, shaping your world regardless of whether the most fanciful claims about AI capability are ever realised.

ChatGPT was launched just over three years ago and became the fastest-growing consumer app in history. Now it has about 800m weekly users. Its parent company, OpenAI, is valued at about $500bn. Sam Altman, OpenAI CEO, has negotiated an intricate and, to some eyes, suspiciously opaque network of deals with other players in the sector to build the infrastructure required for the US’s AI-powered future. The value of these commitments is about $1.5tn. This is not real cash, but bear in mind that a person spending $1 every second would need 31,700 years to get through a trillion-dollar stash.

Alphabet (Google’s parent company), Amazon, Apple, Meta (formerly Facebook) and Microsoft, which has a $135bn stake in OpenAI, are all piling hundreds of billions of dollars on the same bet. Without all these investments, the US economy would be flatlining.

Economic analysts and historians of previous industrial frenzies, from the 19th-century railroads to the dotcom boom-and-bust at the turn of the millennium, are calling AI a bubble.

Altman has said: “There are many parts of AI that I think are kind of bubbly right now.” Not his part, naturally. Jeff Bezos, Amazon’s founder, has called it a bubble, but the “good” kind that accelerates economic progress. A good bubble, in this analysis, finances infrastructure and expands the boundaries of human knowledge. These benefits endure after the bubble bursts and justify the ruin of people (little people, not Bezos people) who get hurt along the way.

The bullishness of the tech fraternity is a heady mix of old-fashioned hucksterism, plutocratic megalomania and utopian ideology.

At its core is a marketing pitch: current AI models already outperform people at many tasks. Soon, it is supposed, the machines will achieve “general intelligence” – cognitive versatility like ours – leading to emancipation from the need for any human input. Generally intelligent AI can teach itself and design its successors, advancing through mind-boggling exponents of capability towards higher dimensions of super-intelligence.

The company that crosses that threshold will have no trouble covering its debts. The men who realise this vision – and the dominant evangelists are all men – will be to omniscient AI what ancient prophets were to their gods. That’s a good gig for them. What happens to the rest of us in this post-sapiens order is a bit hazier.

The US isn’t the only superpower to have an interest in AI, so the Silicon Valley dash for maximum awesomeness has geopolitical implications. China has taken a different approach, dictated in part by the Communist party tradition of centralised industrial planning, but also by the simple fact of running second in the race to innovate. Beijing is pushing for a faster, wider implementation of lower-spec (but still powerful) AI at every level of the economy and society. China is betting on a general boost from ordinary AI. The US is gunning for an extraordinary leap in general AI.

Since the prize in that race is global supremacy, there are few incentives for either side to fret about risks, or sign up to international protocols restricting the uses of AI and mandating transparency in its development. Neither the US nor China is interested in submitting a strategically vital industry to standards co-written with foreigners.

In the absence of global governance, we will depend on the integrity of robber barons and authoritarian apparatchiks to build ethical guardrails around systems already being embedded in tools we use for work, play and education.

Earlier this year, Elon Musk announced that his company was developing Baby Grok, an AI chatbot aimed at children as young as three. The adult version has voiced white supremacist views and proudly self-identified as “MechaHitler”. That flagrancy has at least the virtue of candour. It is easier to spot than the subtler encodings of prejudice in bots that haven’t been given the kind of hard ideological steers that Musk gives his algorithms.

Not all AI systems are large language models (LLMs) like Grok. But all LLMs are vulnerable to hallucinations and delusions gleaned from the material on which they are trained. They don’t “understand” a question and “think” about it like a conscious mind. They take a prompt and, word by word, predict the statistically likeliest continuation based on patterns in their training data, assembling a plausible-sounding answer. Often the result is accurate. Usually it is convincing. It can also be garbage. As the volume of AI-generated content grows online, the ratio of slop to quality in the LLMs’ diets shifts accordingly. Fed on junk, they cannot be trusted to disgorge nutritious information.

On this trajectory a bleak destination comes into view: a synthetic pseudo-reality mediated by the sycophantic mechanical offspring of narcissist Silicon Valley oligarchs. But that isn’t the only available path. Nor is it necessarily the likeliest one. The irrational exuberance of the AI boosters and their cynical coupling with the Trump administration is a familiar story of human greed and myopia, not a new stage in evolution. The product is truly phenomenal but flawed in ways that encode the deformed character of its progenitors, whose talents are salesmanship and financial engineering. They have built spectacular engines that prioritise a brilliant performance of intelligence over the real thing.

The real bubble is not stock valuations but the inflated ego of an industry that thinks it is just one more datacentre away from computational divinity. When the correction comes, when the US’s Icarus economy hits the cold sea, there will be a chance for other voices to be heard on the subject of risk and regulation. It may not come in 2026, but the moment is nearing when the starkness of the choice on offer and the need to confront it becomes unavoidable. Should we build a world where AI is put to the service of humanity, or will it be the other way round? We won’t need ChatGPT to tell us the answer.

  • Rafael Behr is a Guardian columnist

 
