US military leaders including Pete Hegseth, the defense secretary, met with executives from the artificial intelligence firm Anthropic on Tuesday to hash out a dispute over what the government will be able to do with the company’s powerful AI model. Hegseth gave Dario Amodei, the Anthropic CEO, until the end of the day Friday to agree to the department’s terms or face penalties, Axios reported.
Anthropic, which presents itself as the most safety-forward of the leading AI companies, has been mired in weeks of disagreement with the Pentagon over how the military is allowed to use its large language model, Claude. US defense officials have pushed for unfettered access to Claude’s capabilities, while Anthropic has reportedly resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can use AI to kill people without human input. The Department of Defense (DoD) has integrated Claude into its operations, but has threatened to sever the relationship over what its top brass perceives as roadblocks erected by Anthropic.
At stake in the negotiations is whether the AI industry will push back against government demands for the military use of its products, something that has long been controversial among researchers and ethical AI advocates. Defense officials have already threatened punitive measures against Anthropic if it does not comply, including canceling a massive contract with the company and designating it a “supply chain risk”.
The DoD struck deals with several major AI firms including Anthropic, Google and OpenAI in July last year, offering them contracts worth up to $200m. Until this week, however, Anthropic’s Claude was the only model permitted for use in the military’s classified systems. On Monday, the DoD signed a deal allowing military personnel to use Elon Musk’s xAI chatbot in classified systems, despite recent backlash against the chatbot for producing nonconsensual sexualized images of children.
Both xAI and OpenAI have agreed to the government’s terms on the uses of their AI, according to the Washington Post, with a defense official stating that OpenAI had allowed its model to be used for “all lawful purposes”. OpenAI did not immediately respond to a request for comment on its agreement with the government.
The meeting between Anthropic and the Pentagon is taking place a month after the US military reportedly used Claude to assist in its capture of Venezuelan leader Nicolás Maduro. There has been a widespread push from the Trump administration to integrate AI into the military, while Donald Trump has repeatedly vowed that the US will win a global AI arms race to dominate the technology.
Emil Michael, the Pentagon’s chief technology officer and a former Uber executive, has publicly campaigned for Anthropic to “cross the Rubicon” and agree to the government’s terms.
“I think if someone wants to make money from the government, from the US Department of War, those guardrails ought to be tuned for our use cases – so long as they’re lawful,” Michael told Defense Scoop last week.
Anthropic’s Amodei has meanwhile long spoken out in favor of greater regulation of AI, while his company has backed a political action committee advocating for stronger safeguards on artificial intelligence. Amodei opposed Trump during the 2024 US presidential campaign, and Anthropic has hired several former Biden staffers, which the Wall Street Journal reported was a contributing factor in a pro-Trump venture capital firm backing out of investing in Anthropic earlier this year.
The Pentagon has poured billions of dollars in recent years into pursuing AI-enabled technologies ranging from unmanned aerial drones to automated targeting systems. The advancement of these technologies has intensified ethical questions around how much decision-making power to cede to AI when it comes to lethal force. These debates are no longer theoretical, with fighting in Ukraine featuring deadly semi-autonomous drones capable of operating without direct human control.