Microsoft and a group of retired military leaders are throwing their weight behind Anthropic in asking a federal court to block the Trump administration's designation of the artificial intelligence company as a supply chain risk.
Microsoft, in a legal filing, is challenging Defense Secretary Pete Hegseth's action last week to shut Anthropic out of military work by labeling its AI products as posing a threat to national security.
So are a group of 22 former high-ranking U.S. military officials, some of whom were secretaries of the Air Force, Army and Navy and a head of the Coast Guard. They allege in their own court filing that Hegseth's actions are a misuse of government authority for “retribution against a private company that has displeased the leadership.”
The Pentagon took the action against Anthropic after an unusually public dispute over the company's refusal to allow unrestricted military use of its AI model Claude. President Donald Trump also said he was ordering all federal agencies to stop using Claude.
“The use of a supply chain risk designation to address a contract dispute may bring severe economic effects that are not in the public interest,” Microsoft, a major government contractor, said in its Tuesday filing in the San Francisco federal court, where Anthropic sued the Trump administration on Monday.
The Pentagon's action “forces government contractors to comply with vague and ill-defined directions that have never before been publicly wielded against a U.S. company,” Microsoft's legal brief says.
It asks a judge to order a temporary lifting of the designation to allow for more “reasoned discussion” between Anthropic and the Trump administration.
The Pentagon declined to comment, saying it does not remark on matters in litigation.
Microsoft's filing also expressed support for Anthropic's two ethical red lines that were a sticking point in the contract negotiations after the Pentagon insisted the company must allow for “all lawful” uses of its AI.
“Microsoft also believes that American AI should not be used to conduct domestic mass surveillance or start a war without human control,” the company said. “This position is consistent with the law and broadly supported by American society, as the government acknowledges.”
The software giant's court filing followed others supporting Anthropic, including one from a group of AI developers at Google and OpenAI, and another from a group of organizations such as the Cato Institute and the Electronic Frontier Foundation.
A fourth such filing came from the group of retired military chiefs that includes former CIA director Michael Hayden, who's also a retired Air Force general, and retired Coast Guard Adm. Thad Allen, who led the government response to Hurricane Katrina.
“Far from protecting U.S. national security, the Secretary’s conduct here threatens the rule-of-law principles that have long strengthened our military,” said their filing.
U.S. District Judge Rita Lin is presiding over the case in federal court in San Francisco, where Anthropic is headquartered. Anthropic has also filed a separate, narrower case in the federal appeals court in Washington, D.C.
Lin, who was nominated to the bench by President Joe Biden in 2022, has scheduled a March 24 hearing.
Neither legal filing mentions the war in Iran, which started shortly after Trump and Hegseth announced they were punishing Anthropic, but the ex-military officials warn that the “sudden uncertainty” of targeting a technology widely embedded in military platforms could disrupt planning and put soldiers at risk during ongoing operations.
The current commander of U.S. Central Command confirmed in a video posted to social media Wednesday about U.S. strikes on Iran that the military was using “advanced AI tools” to “sift through vast amounts of data in seconds,” though he didn't specifically name which tools.
Adm. Brad Cooper said these AI tools are enabling leaders to make smarter decisions faster but stressed that “humans will always make final decisions on what to shoot and what not to shoot and when to shoot.”
Anthropic was, until recently, the only one of its peers approved for use in classified military networks. But as a result of the dispute, military officials have said they're looking to shift that work to competitors Google, OpenAI and Elon Musk's xAI.
___
AP writer Konstantin Toropin contributed to this report.
