Joseph Gedeon in Washington 

Microsoft backs AI firm Anthropic in legal battle against Pentagon

Tech company filed amicus brief in support of Anthropic’s effort to overturn an aggressive Pentagon designation
  
  

Microsoft has thrown its weight behind Anthropic’s legal challenge against the US Pentagon. Photograph: Joan Cros/NurPhoto/Shutterstock

Microsoft has thrown its weight behind Anthropic’s legal challenge against the US Pentagon, filing a court brief in support of the AI company’s effort to overturn an aggressive designation that effectively bars it from government work.

In an amicus brief submitted to a federal court in San Francisco this week, Microsoft, which integrates Anthropic’s AI tools into systems it provides to the US military, argued that a temporary restraining order was necessary to prevent serious disruption to suppliers whose products rely on the AI company’s technology. Google, Amazon, Apple and OpenAI have also signed on to a brief in support of Anthropic.

In a statement to the Guardian, Microsoft said: “The Department of War needs reliable access to the country’s best technology. And everyone wants to ensure AI is not used for mass domestic surveillance or to start a war without human control. The government, the entire tech sector, and the American public need a path to achieve all these goals together.”

Microsoft is one of the Pentagon’s most deeply embedded tech partners, holding a share of the military’s Joe Biden-era $9bn Joint Warfighting Cloud Capability contract alongside Amazon, Google and Oracle, as well as separate software and enterprise services deals worth several billion dollars more. Microsoft’s contracts with the government span defense, intelligence and civilian agencies, and in September, under the Trump administration, the company struck another multibillion-dollar deal to accelerate cloud services and AI adoption across the federal government.

The filing comes after Anthropic launched two lawsuits on Monday – one in federal court in California and one in the DC circuit court of appeals – challenging the Pentagon’s decision to label it a supply-chain risk, a designation that has never previously been applied to a US company.

The dispute stems from collapsed contract negotiations last month over a $200m deal to deploy Anthropic’s AI on classified military systems just as the US readied for its war on Iran.

Talks fell apart after Anthropic insisted its technology should not be used for mass surveillance of US citizens or to power autonomous lethal weapons, leading Pete Hegseth, the defense secretary, to dub the company a supply-chain risk. Last week, the Pentagon formally notified Anthropic of the decision, and the company says government contracts have already begun to be cancelled. On Thursday, the Pentagon’s chief technology officer, Emil Michael, told CNBC “there’s no chance” the agency renegotiates with Anthropic after the designation.

In its complaint, Anthropic explained the limits of, and its hesitations about, its own technology. “Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare,” the filing read. “These usage restrictions are therefore rooted in Anthropic’s unique understanding of Claude’s risks and limitations.”

The company also said its first amendment rights were under attack, arguing the Pentagon had used the supply-chain risk designation – typically reserved for firms with ties to foreign adversaries such as China – as ideological punishment for its public stance on AI safety.

Meanwhile, an ongoing Pentagon investigation into the Tomahawk military strike on a Shajarah Tayyebeh elementary school, which reportedly killed at least 175 people, according to Iranian officials, has found in its preliminary examination that Washington was responsible for the killings. It is unclear whether AI was used in the strike, which appears to have been a targeting mistake based on outdated data from the Defense Intelligence Agency, people familiar with the matter told the New York Times.
