Defense Department personnel are pushing back against orders to eliminate Anthropic's Claude AI system from military operations. The Pentagon designated the company a security risk in March, but users say the technology is superior to alternatives and difficult to replace.

Military personnel and defense contractors are resisting Pentagon directives to eliminate Anthropic’s artificial intelligence systems from their operations, citing the technology’s superiority over competing platforms.
Defense Secretary Pete Hegseth classified Anthropic as a supply-chain security threat on March 3, mandating a six-month timeline for the Pentagon and its contractors to cease using the company’s AI products. The designation followed disagreements between Anthropic and military officials regarding usage restrictions for the artificial intelligence technology.
However, the directive faces significant pushback from users who are delaying implementation or preparing to return to Anthropic’s systems once the conflict resolves.
“Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI,” an IT contractor explained. “They think it’s stupid.” The contractor praised Anthropic’s Claude AI model as “the best,” while criticizing xAI’s Grok for delivering inconsistent responses to identical questions.
The transition away from Anthropic’s technology presents substantial logistical challenges. According to one contractor, obtaining new security certifications for replacement systems could require several months of work.
Multiple Pentagon personnel, officials, and contractors provided information anonymously due to restrictions on public statements. The Defense Department, Anthropic, and xAI declined to provide comments.
Artificial intelligence has become integral to military operations, supporting weapon targeting, operational planning, classified information handling, and data analysis tasks.
Following a $200 million defense contract announcement in July 2025, Anthropic rapidly integrated into military workflows. Claude achieved the distinction of being the first AI system authorized for classified military networks, with sources reporting widespread adoption. Federal agencies generally regarded Anthropic’s capabilities as superior to competitor offerings.
Previous Reuters reporting revealed that Pentagon forces utilized Claude technology during Iranian conflict operations, with sources confirming continued usage despite the prohibition. One expert characterized this ongoing use as “the clearest signal” of the Pentagon’s reliance on the platform.
“It’s a substantial cost to replace those models with alternatives,” stated Joe Saunders, CEO of government contractor RunSafe Security. Saunders noted that alternative systems must undergo extensive certification processes for classified and military network deployment.
Replacing existing systems with new technology could require 12 to 18 months for certification completion, according to Saunders.
“It’s not just costly, it’s a loss of productivity,” Saunders added, drawing from his experience helping military organizations implement AI chatbot technology.
As orders to eliminate Claude circulate throughout the Pentagon, one official said compliance is being driven by career-preservation concerns and described the transition as wasteful.
Functions previously managed by Claude, including large dataset queries, now require manual completion using tools like Microsoft Excel, the official noted. Pentagon developers extensively relied on Anthropic’s Claude Code tool for software programming, according to multiple sources.
The loss of coding capabilities has frustrated developers, though another senior official emphasized that they should not depend on any single tool.
Removing Claude represents a massive operational challenge.
Palantir’s Maven Smart System, which provides intelligence analysis and weapons targeting software to military forces, built multiple prompts and workflows using Anthropic’s Claude Code, according to two knowledgeable sources. Palantir holds Maven-related contracts with the Defense Department and national security agencies potentially worth over $1 billion, and must now substitute alternative AI models for Claude and reconstruct software components.
Some personnel are “slow-rolling” Claude replacement while actively using it for workflow creation, which involves automated task sequences, a Pentagon technologist revealed.
Developers are also frustrated at losing custom AI agents built to process massive volumes of data when they transition to new systems.
The Defense Department has directed contractors, including major defense companies, to evaluate their Anthropic dependencies and begin phase-out procedures. Officials and contractors now face strategic decisions about quickly adopting OpenAI, Google, or xAI alternatives, or gradually unwinding Anthropic usage to enable rapid restoration if Pentagon policies change.
One federal agency chief information officer plans to delay the phase-out, anticipating government-Anthropic negotiations will reach resolution before the six-month deadline.
“What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level,” observed Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute.