AI Company Anthropic Files Federal Lawsuits Over Trump Pentagon Decision

Saturday, March 14, 2026 at 1:36 AM

San Francisco-based AI firm Anthropic has filed two federal lawsuits challenging the Pentagon's designation of the company as a "supply chain risk." The dispute stems from Anthropic's refusal to allow unrestricted military use of its Claude AI chatbot technology.

A major artificial intelligence company has taken the Trump administration to federal court over a Pentagon ruling that blocks the firm from defense contracting work.

Anthropic, the San Francisco-based creator of the Claude AI chatbot, filed dual federal lawsuits on Monday challenging the Defense Department’s classification of the company as a “supply chain risk.” One case was filed in California federal court, while the other was submitted to the federal appeals court in Washington, D.C.

The legal battle emerged after Anthropic refused Pentagon demands for unrestricted military access to its AI technology. The company had attempted to limit two specific applications: mass surveillance of American citizens and fully autonomous weapon systems.

“These actions are unprecedented and unlawful,” Anthropic’s lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

Defense Secretary Pete Hegseth and other military leaders had publicly demanded that Anthropic accept “all lawful uses” of Claude technology and warned of consequences for non-compliance. The Defense Department declined to comment Monday, saying it does not discuss ongoing litigation.

The Pentagon’s supply chain risk designation is itself unprecedented, marking the first known instance of the federal government applying the label to an American company. The classification, typically reserved for foreign adversaries that might threaten national security systems, effectively bars Anthropic from defense contract work.

President Trump has also directed federal agencies to cease using Claude technology, though he provided the Pentagon with a six-month timeline to phase out the system from classified military operations, including those involved in the Iran conflict.

The lawsuits also target additional federal departments, including Treasury and State, after those agencies instructed their personnel to discontinue Anthropic’s services.

While pursuing legal action, Anthropic has worked to reassure its broader customer base that the Trump administration’s penalties apply only to its military contracting work with the Defense Department.

This clarification carries significant financial implications for the privately held company, which anticipates $14 billion in revenue this year, primarily from business and government clients using Claude for programming and other non-military applications. The company reports over 500 customers paying at least $1 million annually for Claude access, contributing to Anthropic’s recent $380 billion valuation.

In a Monday statement, Anthropic emphasized that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
