AI Company Clashes with Pentagon Over Military Use, Sparks Consumer Backlash

Anthropic's refusal to allow military use of its Claude AI chatbot has led to a government ban and a looming legal battle. The ethical stance has boosted Claude's popularity while damaging competitor ChatGPT's reputation among consumers.

A major artificial intelligence company’s ethical battle with the Pentagon is reshaping how Americans view AI technology in warfare while highlighting serious questions about whether these systems are reliable enough for military operations.

Claude, the AI chatbot created by Anthropic, surpassed OpenAI’s ChatGPT in mobile app downloads across America for the first time this week, according to data from research firm Sensor Tower. The surge appears connected to public support for Anthropic’s refusal to compromise its ethical guidelines regarding military applications.

The Trump administration declared Claude a supply chain threat on Friday and ordered federal agencies to discontinue its use after Anthropic CEO Dario Amodei maintained his company’s restrictions against autonomous weapons development and domestic surveillance programs. Anthropic plans to fight the Pentagon’s decision in federal court once it receives official notification of the sanctions.

While military analysts and human rights advocates have praised Amodei’s principled position, some experts fault the AI industry’s earlier aggressive marketing, which convinced government officials to deploy the technology in critical situations in the first place.

“He caused this mess,” stated Missy Cummings, a former Navy fighter pilot who currently leads the robotics and automation center at George Mason University. “They were the No. 1 company to push ridiculous hype over the capabilities of these technologies. And now, all of a sudden, they want to be for real. They want to tell people, ‘Oh, wait a minute. We really shouldn’t be using these technologies in weapons.'”

Anthropic representatives did not respond to requests for comment. Pentagon officials declined to discuss whether Claude remains in use for operations, including the Iran conflict, citing security protocols.

In a December research paper presented at a leading AI conference, Cummings advocated for government restrictions on the use of generative AI systems “to control, direct, guide or govern any weapon.” Her concerns center not on AI becoming too intelligent, but on the frequent errors—known as hallucinations or confabulations—that make large language models “inherently unreliable and not appropriate in environments that could result in the loss of life.”

“You’re going to kill noncombatants,” Cummings told The Associated Press during a Tuesday interview. “You’re going to kill your own troops. I’m not clear whether the military truly understands the limitations.”

Defending his company’s position last week, Amodei emphasized these technological shortcomings, stating that “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

Among major AI developers, Anthropic had been uniquely authorized for classified military systems, working alongside data analysis firm Palantir and additional defense contractors. President Trump announced Friday that the Pentagon has six months to eliminate Anthropic’s military applications, coinciding with his approval of Saturday’s Iran strikes.

Cummings, who previously advised Palantir, suggested Claude may have already contributed to military strike planning.

“I just fundamentally hope that there were humans in the loop,” she explained. “A human has to babysit these technologies very closely. You can use them to do these things, but you need to verify, verify, verify.”

This approach contradicts messaging from AI companies suggesting their technology approaches human-like intelligence, she noted.

“If there’s culpability here, I’d say half is Anthropic’s for driving the hype and half is the Department of War’s fault for firing all the people that would have otherwise advised them against stupid uses of technology,” Cummings observed.

One social media user described Anthropic’s government troubles as a “Hype Tax”—a post shared by President Trump’s senior AI advisor David Sacks, who frequently criticizes the company.

Despite potential legal complications that could harm Anthropic’s defense contractor relationships, the controversy has enhanced its reputation as an ethics-focused AI developer.

“It’s applaudable that a company stood up to the government in order to maintain what it felt were its ethics and were its business choices, even in the face of these potentially crippling policy responses,” said Jennifer Huddleston, a senior fellow at the libertarian-leaning Cato Institute.

Consumer response has been immediate, with Claude downloads surging to become the top iPhone application on Saturday and leading all mobile platforms nationwide by Monday, Sensor Tower reported. This success came at ChatGPT’s expense, as OpenAI’s consumer standing suffered following Friday’s announcement of a Pentagon partnership to replace Anthropic in classified environments.

Apple App Store data showed ChatGPT’s one-star reviews—the lowest possible rating—increased by 775% on Saturday and continued climbing into the week, prompting OpenAI to implement crisis management measures.

“We shouldn’t have rushed to get this out on Friday,” OpenAI CEO Sam Altman acknowledged in a Monday social media statement. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Altman scheduled an “all-hands” employee meeting for Tuesday to address the situation.

“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” Altman stated. “We will work through these, slowly, with the (Pentagon), with technical safeguards and other methods.”
