Pentagon dispute bolsters Anthropic reputation but raises questions
about AI readiness in military
March 4, 2026
By MATT O'BRIEN
Anthropic's moral stand on U.S. military use of artificial intelligence
is reshaping the competition between leading AI companies, but it is
also exposing growing doubts that chatbots are capable enough for acts
of war.
Anthropic's chatbot Claude outpaced rival ChatGPT in phone app downloads
in the United States for the first time this week, a sign that consumers
are siding with Anthropic in its standoff with the Pentagon, according
to market research firm Sensor Tower.
The Trump administration on Friday ordered government agencies to stop
using Claude and designated it a supply chain risk after Anthropic CEO
Dario Amodei refused to weaken his company's ethical safeguards, which
prevent the technology from being applied to autonomous weapons and
domestic mass surveillance. Anthropic has said it will challenge the Pentagon in
court once it receives formal notice of the penalties.
And while many military and human rights experts have applauded Amodei
for standing up for ethical principles, some are also frustrated by
years of AI industry marketing that persuaded the government to apply
the technology to high-stakes tasks.
“He caused this mess,” said Missy Cummings, a former Navy fighter pilot
who now directs the robotics and automation center at George Mason
University. “They were the No. 1 company to push ridiculous hype over
the capabilities of these technologies. And now, all of a sudden, they
want to be for real. They want to tell people, ‘Oh, wait a minute. We
really shouldn’t be using these technologies in weapons.’”

Anthropic didn't immediately respond to a request for comment. The
Defense Department declined to comment on whether it is still using
Claude, including in the Iran war, citing operational security.
Cummings published a paper at a top AI conference in December arguing
that government agencies should prohibit the use of generative AI “to
control, direct, guide or govern any weapon.” Her concern is not that AI
is so smart it could go rogue, but that the large language models behind
chatbots like Claude make too many mistakes, known as hallucinations or
confabulations, and are “inherently unreliable and not appropriate in
environments that could result in the loss of life.”
“You’re going to kill noncombatants,” Cummings said in an interview
Tuesday with The Associated Press. “You’re going to kill your own
troops. I’m not clear whether the military truly understands the
limitations.”
Amodei sought to emphasize those limitations in defending Anthropic's
ethical stance last week, arguing that “frontier AI systems are simply
not reliable enough to power fully autonomous weapons. We will not
knowingly provide a product that puts America’s warfighters and
civilians at risk.”
Until recently, Anthropic was the only one among its peers with approval
for use in classified military systems, where it has partnered with data
analysis company Palantir and other defense contractors.
President Donald Trump said Friday, around the same time he was
approving Saturday's military strikes on Iran, that the Pentagon would
have six months to phase out Anthropic's military applications.
Cummings, a former Palantir adviser, said it's possible that Claude has
already been used in military strike planning.

Pages from the Anthropic website and the company's logo are
displayed on a computer screen in New York on Thursday, Feb. 26,
2026. (AP Photo/Patrick Sison)
 “I just fundamentally hope that
there were humans in the loop,” she said. “A human has to babysit
these technologies very closely. You can use them to do these
things, but you need to verify, verify, verify.”
She said that contrasts with the messaging from AI companies, which have
suggested their technology is evolving to the point where it is “almost
sentient.”
“If there’s culpability here, I’d say half is Anthropic's for
driving the hype and half is the Department of War’s fault for
firing all the people that would have otherwise advised them against
stupid uses of technology,” Cummings said.
One social media commentator this week described Anthropic's
government problems as a “Hype Tax” — a message that was reposted by
Trump's top AI adviser, David Sacks, a frequent
critic of the company.
And while the standoff has caused legal hassles that could jeopardize
Anthropic's business partnerships with other military contractors, it
has also bolstered the company's reputation as a safety-minded AI
developer.
“It’s applaudable that a company stood up to the government in order
to maintain what it felt were its ethics and were its business
choices, even in the face of these potentially crippling policy
responses,” said Jennifer Huddleston, a senior fellow at the
libertarian-leaning Cato Institute.
Consumers have already spoken, driving a surge of Claude downloads that
made it the most popular iPhone app starting Saturday and the top app
across all phone platforms in the U.S. by Monday, according to Sensor
Tower. That has come at the expense of OpenAI's ChatGPT, whose consumer
reputation was damaged after OpenAI announced a deal Friday with the
Pentagon to effectively replace Claude with ChatGPT in classified
environments.
In Apple's App Store, the number of 1-star reviews — the worst rating
— of ChatGPT grew by 775% on Saturday and continued to grow early
this week, reflecting a backlash that forced OpenAI to do damage
control.
“We shouldn’t have rushed to get this out on Friday,” OpenAI CEO Sam
Altman said in a social media post Monday. “The issues are super
complex, and demand clear communication. We were genuinely trying to
de-escalate things and avoid a much worse outcome, but I think it
just looked opportunistic and sloppy.”

Altman gathered employees for an “all-hands” meeting on Tuesday to
discuss next steps.
“There are many things the technology just isn’t ready for, and many
areas we don’t yet understand the tradeoffs required for safety,”
Altman said on X. “We will work through these, slowly, with the
(Pentagon), with technical safeguards and other methods.”
All contents © copyright 2026 Associated Press. All rights reserved.