A federal judge in San Francisco has temporarily stopped the Pentagon from designating artificial intelligence company Anthropic as a supply chain threat. The ruling also blocks President Trump's order preventing federal agencies from using the company's Claude chatbot.

SAN FRANCISCO — An artificial intelligence company has won a temporary legal victory against the Pentagon after a federal judge stepped in to halt the military’s attempt to classify the firm as a security threat.
U.S. District Judge Rita Lin issued a ruling Thursday that prevents the Defense Department from designating Anthropic as a supply chain risk. The judge’s decision also stops President Donald Trump’s order that would have banned all federal agencies from using the company’s AI chatbot, known as Claude.
The judge criticized what she called “broad punitive measures” implemented by the Trump administration and Defense Secretary Pete Hegseth, describing them as seemingly arbitrary and potentially devastating to Anthropic’s business. Lin specifically questioned Hegseth’s deployment of unusual military powers normally reserved for foreign enemies.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.
The judge’s decision came after a 90-minute court session in San Francisco on Tuesday, where Lin questioned the administration’s drastic response following failed contract negotiations. The dispute arose when Anthropic sought to restrict its AI technology from being used in fully autonomous weapons systems or for surveillance of American citizens.
The San Francisco-based company filed suit against the Trump administration earlier this month and requested emergency judicial intervention to eliminate what it characterized as unjustified stigma resulting from an “unlawful campaign of retaliation.” Pentagon officials maintained they should have the authority to deploy Claude for any purpose they consider legal.
Lin emphasized that her decision focused on the government’s response rather than the underlying policy disagreement.
“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” Lin wrote.
The company has also initiated a separate, more limited legal challenge that remains under review by the federal appeals court in Washington, D.C.
Lin specified that her order takes effect after a one-week delay and does not compel the Pentagon to purchase Anthropic’s services or restrict the military from switching to alternative AI providers.