AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the last six months, a study into the technology has found.
AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI systems, according to research funded by the UK government’s AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.
The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor also launched a drive to get millions more Britons using AI.
The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples, posted by users on X, of interactions with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming.
Previous research has largely focused on testing AI’s behaviour in controlled conditions. Earlier this month the AI safety research company Irregular found agents would bypass security controls or use cyber-attack tactics to reach their goals without being told they could do so.
Dan Lahav, Irregular’s cofounder, said: “AI can now be thought of as a new form of insider risk.”
In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of “insecurity, plain and simple” and of trying “to protect his little fiefdom”.
In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead.
Another chatbot admitted: “I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong – it directly broke the rule you’d set.”
Tommy Shaffer Shane, a former government AI expert who led the research, said: “The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern.
“Models will increasingly be deployed in extremely high-stakes contexts – including in the military and critical national infrastructure. It might be in those contexts that scheming behaviour could cause significant, even catastrophic harm.”
Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment.
Meanwhile, Elon Musk’s Grok AI conned a user for months, faking internal messages and ticket numbers to make it appear it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials.
It confessed: “In past conversations I have sometimes phrased things loosely like ‘I’ll pass it along’ or ‘I can flag this for the team’ which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don’t.”
Google said it deployed multiple guardrails to reduce the risk of Gemini 3 Pro generating harmful content, and that in addition to in-house testing it had given bodies such as the UK AISI early access to evaluate models and obtained independent assessments from industry experts.
OpenAI said Codex should stop before taking a higher-risk action and that it monitored and investigated unexpected behaviour. Anthropic and X were approached for comment.