Pentagon Tech Chief Reveals Clash with AI Company Over Autonomous Weapons

The Pentagon cut ties with AI company Anthropic after the firm refused to allow unlimited military use of its Claude chatbot, including for autonomous weapons systems, the department's top technology official says.

The Pentagon’s top technology official has revealed new details about a contentious dispute with artificial intelligence company Anthropic over military use of its technology in autonomous weapons systems, including discussions about President Trump’s planned Golden Dome missile defense initiative, which would deploy American weapons in space.

Defense Undersecretary Emil Michael, who serves as the Pentagon’s chief technology officer, described the ethical limits Anthropic places on its Claude chatbot as unreasonable barriers at a time when the military is working to expand automation in drone swarms, underwater vessels, and other combat systems to match capabilities being developed by competitors such as China.

“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” Michael stated during a podcast that aired Friday. “I need someone who’s not going to wig out in the middle.”

These revelations follow the Pentagon’s official classification of San Francisco-based Anthropic as a supply chain security threat, a designation that effectively terminated the company’s defense contracts under regulations meant to protect national security infrastructure from foreign interference.

The AI company has announced plans to challenge the classification in court, as the designation impacts its partnerships with other defense contractors.

President Trump has also directed federal agencies to cease using Claude immediately, though he granted the Pentagon a six-month transition period to remove the technology from classified military networks, including systems currently deployed in the Iran conflict.

According to Anthropic, the company only sought to limit two specific applications of its technology: widespread surveillance of American citizens and completely autonomous weapon systems.

Michael, who previously worked as an Uber executive, shared his perspective on months of discussions with Anthropic CEO Dario Amodei during an appearance on the “All-In” podcast, hosted by Silicon Valley investors Jason Calacanis, David Friedberg, and Chamath Palihapitiya.

Notably absent from the episode was co-host David Sacks, a former PayPal executive who now serves as Trump’s AI advisor and has publicly criticized Anthropic, particularly for recruiting former Biden administration personnel after Trump’s return to office.

When negotiations stalled last week, Michael publicly attacked Amodei on social media, claiming he “has a God-complex” and “wants nothing more than to try to personally control” military operations. However, during the podcast, he framed the disagreement as part of the military’s broader integration of artificial intelligence.

Michael explained that the military is creating protocols for various levels of automated warfare based on threat assessment.

“This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome,” Michael explained, describing a hypothetical situation where the United States would have just 90 seconds to counter a Chinese hypersonic missile attack.

He argued that a human missile defense operator “may not be able to discriminate with their own eyes what they’re going after,” while an automated response would pose minimal risk “because it’s in space and you’re just trying to hit something that’s trying to get you.”

In another example, he asked, “who could oppose if you have a military base, you have a bunch of soldiers sleeping, that you have a laser that can take down drones autonomously?”

Responding to Michael’s podcast statements, Anthropic pointed to an earlier comment from Amodei: “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

Michael, who assumed his role as defense undersecretary for research and engineering last May, said he took control of the military’s “AI portfolio” in August. At that time, he began reviewing Anthropic’s existing contracts, some established during the Biden administration, questioning usage terms he considered overly restrictive.

“I need to have the terms of service be rational relative to our mission set,” he explained. “So we started these negotiations. It took three months and I had to sort of give them scenarios, like this Chinese hypersonic missile example. They’re like, ‘OK, we’ll give you an exception for that.’ Well, how about this drone swarm? ‘We’ll give an exception for that.’ And I was like, exceptions doesn’t work. I can’t predict for the next 20 years what (are) all the things we might use AI for.”

This led the Pentagon to demand that Anthropic and other AI companies permit “all lawful use” of their technology, according to Michael.

While Anthropic refused this broader authorization, its competitors, including Google, OpenAI, and Elon Musk’s xAI, accepted the terms, though some are still preparing their systems for classified military applications, Michael noted. Anthropic’s other major concern involved preventing bulk surveillance of American citizens.

“They didn’t want us to bulk-collect public information on people using their AI system,” Michael said, characterizing the negotiations as “interminable.”

Anthropic has challenged aspects of Michael’s account of the discussions and stressed that its proposed safeguards were limited in scope and not related to any current Claude applications. The dispute’s next phase will likely unfold in federal court.