Editorial 

The Guardian view on AI: safety staff departures raise worries about industry pursuing profit at all costs

Editorial: Cash-hungry Silicon Valley firms are scrambling for revenue. Regulate them now before the tech becomes too big to fail
  
  

‘Even firms founded on restraint are struggling to resist the same pull of profits.’ Photograph: Dmitrii Melnikov/Alamy

Hardly a month passes without an AI grandee cautioning that the technology poses an existential threat to humanity. Many of these warnings might be hazy or naive. Others may be self-interested. Calm, level-headed scrutiny is needed. Some warnings, though, are worth taking seriously.

Last week, several notable AI safety researchers quit, warning that firms chasing profits are sidelining safety and pushing risky products. In the near term, this suggests a rapid “enshittification” in pursuit of short-term revenue. Without regulation, public purpose gives way to profit. AI’s expanding role in government and daily life – as well as billionaire owners’ appetite for profit – surely demands accountability.

The choice to use agents – chatbots – as the main consumer interface for AI was primarily commercial. The appearance of conversation and reciprocity promotes deeper user interaction than a Google search bar. The OpenAI researcher Zoë Hitzig has warned that introducing ads into that dynamic risks manipulation. OpenAI says ads do not influence ChatGPT’s answers. But, as with social media, they may become less visible and more psychologically targeted – drawing on extensive private exchanges.

It is worth noting that Fidji Simo, who built Facebook’s ad business, joined OpenAI last year. And OpenAI recently fired its executive Ryan Beiermeister for “sexual discrimination”. Several reports say she had strongly opposed the rollout of adult content. Together, these moves suggest that commercial pressures are shaping the firm’s direction – and probably that of the wider industry. The way Elon Musk’s Grok AI tools were left active long enough to be misused, then restricted behind paid access before finally being halted after investigations in the UK and EU, raises questions about monetising harm.

It is harder to evaluate more specialised systems being built for social purposes such as education and government. But since the frenetic pursuit of profit tends to bias every human system it touches, the same will be true of AI.

This is not a problem confined to a single company. A vaguer resignation letter from the Anthropic safety researcher Mrinank Sharma warned of a “world in peril”, and that he had “repeatedly seen how hard it is to truly let our values govern our actions”. OpenAI was once ostensibly entirely non-profit; after it committed to commercialisation from 2019, Anthropic emerged promising to be the safer, more cautious alternative. Mr Sharma’s departure suggests that even firms founded on restraint are struggling to resist the same pull of profits.

The cause of this realignment is clear. Firms are burning through investment capital at historic rates, their revenues aren’t growing fast enough and, despite impressive technical results, it’s not clear yet what AI can “do” to generate profits. From tobacco to pharmaceuticals, we have seen how profit incentives can distort judgment. The 2008 financial crisis showed what happens when essential systems are driven by short-term needs and weak oversight.

Strong state regulation is needed to solve this problem. The recent International AI Safety Report 2026 offered a sober assessment of real risks – from faulty automation to misinformation – and a clear blueprint for regulation. Yet, despite its endorsement by 60 countries, the US and UK governments declined to sign it. That is a worrying sign that they are choosing to shield industry rather than bind it.

 
