More than 100 UK parliamentarians are calling on the government to introduce binding regulations on the most powerful AI systems as concern grows that ministers are moving too slowly to create safeguards in the face of lobbying from the technology industry.
A former AI minister and a former defence secretary are among a cross-party group of Westminster MPs, peers and elected members of the Scottish, Welsh and Northern Irish legislatures demanding stricter controls on frontier systems, citing fears that superintelligent AI “would compromise national and global security”.
The push for tougher regulation is being coordinated by Control AI, a non-profit organisation whose backers include Jaan Tallinn, the co-founder of Skype. It is calling on Keir Starmer to show independence from Donald Trump’s White House, which opposes the regulation of AI. Yoshua Bengio, one of the “godfathers” of the technology, recently said AI was less regulated than a sandwich.
The campaigners include the Labour peer and former defence secretary Des Browne, who said superintelligent AI “would be the most perilous technological development since we gained the ability to wage nuclear war”. He said only international cooperation “can prevent a reckless race for advantage that could imperil us all”.
The Conservative peer and former environment minister Zac Goldsmith said that “even while very significant and senior figures in AI are blowing the whistle, governments are miles behind the AI companies and are leaving them to pursue its development with virtually no regulation”.
Britain hosted an AI safety summit at Bletchley Park in 2023, which concluded there was “potential for serious, even catastrophic, harm, either deliberate or unintentional” from the most advanced AI systems. It set up the AI Safety Institute, now called the AI Security Institute, which has become an internationally respected body. Less emphasis, however, has been placed on the summit’s call to address risks through international cooperation.
Goldsmith said the UK should “resume its global leadership on AI security by championing an international agreement to prohibit the development of superintelligence until we know what we are dealing with and how to contain it”.
The calls for state intervention in the AI race come after one of Silicon Valley’s leading AI scientists told the Guardian humanity would have to decide by 2030 whether to take the “ultimate risk” of letting AI systems train themselves to become more powerful. Jared Kaplan, the co-founder and chief scientist at frontier AI company Anthropic, said: “We don’t really want it to be a Sputnik-like situation where the government suddenly wakes up and is like: Oh, wow, AI is a big deal.”
Labour’s programme, set out in July 2024, said it would legislate “to place requirements on those working to develop the most powerful artificial intelligence models”. But no bill has been published, and the government has faced pressure from the White House not to inhibit commercial AI development, which is mostly pioneered by US firms.
A spokesperson for the Department for Science, Innovation and Technology said: “AI is already regulated in the UK, with a range of existing rules already in place. We have been clear on the need to ensure the UK and its laws are ready for the challenges and opportunities AI will bring and that position has not changed.”
The bishop of Oxford, Steven Croft, who is backing the Control AI campaign, called for an independent AI watchdog to scrutinise public sector use and for AI companies to be required to meet minimum testing standards before releasing new models.
“There are all kinds of risks and the government doesn’t seem to have adopted a precautionary principle,” he said. “At the moment there are significant risks: the mental health of children and adults, the environmental costs and other big risks in terms of the alignment of generalised AI and [the question of] what is good for humanity. The government seems to be moving away from regulation.”
The UK’s first AI minister under Rishi Sunak, Jonathan Berry, said the time was coming when binding regulations should be applied to models that present existential risks. He said the rules should be global and would create tripwires: if AI models reached a certain level of power, their makers would have to show the systems had been tested, were designed with off switches and could be retrained.
“International frontier AI safety has not gone on in leaps and bounds as we had hoped,” he said. He cited recent cases of chatbots being implicated in suicides, and of people using them as therapists or coming to believe they are gods. “The risks, now, are very serious and we need to be constantly on our guard,” he said.
The chief executive of Control AI, Andrea Miotti, criticised the current “timid approach” and said: “There has been a lot of lobbying. AI companies are lobbying governments in the UK and US to stall regulation, arguing it is premature and would crush innovation. Some of these are the same companies who say AIs could destroy humanity.”
He said the speed at which AI technology was advancing meant mandatory standards could be needed within the next year or two.
“It’s quite urgent,” he said.