Trump orders US agencies to stop using Anthropic technology in clash
over AI safety
February 28, 2026
By MATT O'BRIEN and KONSTANTIN TOROPIN
WASHINGTON (AP) — The Trump administration on Friday ordered all U.S.
agencies to stop using Anthropic’s artificial intelligence technology
and imposed other major penalties, escalating an unusually public clash
between the government and the company over AI safety.
President Donald Trump, Defense Secretary Pete Hegseth and other
officials took to social media to chastise Anthropic for failing to
allow the military unrestricted use of its AI technology by a Friday
deadline, accusing it of endangering national security after CEO Dario
Amodei refused to back down over concerns the company's products could
be used in ways that would violate its safeguards.
“We don’t need it, we don’t want it, and will not do business with them
again!” Trump said on social media.
Hegseth also deemed the company a “supply chain risk,” a designation
typically stamped on foreign adversaries that could derail the company’s
critical partnerships with other businesses.
In a statement issued Friday evening, Anthropic said it would challenge
what it called an unprecedented and legally unsound action “never before
publicly applied to an American company.”
Anthropic had said it sought narrow assurances from the Pentagon that
its AI chatbot Claude would not be used for mass surveillance of
Americans or in fully autonomous weapons. The Pentagon said it was not
interested in such uses and would only deploy the technology in legal
ways, but it also insisted on access without any limitations.
“No amount of intimidation or punishment from the Department of War will
change our position on mass domestic surveillance or fully autonomous
weapons,” the company said. “We will challenge any supply chain risk
designation in court.”

The government’s effort to assert dominance over the internal
decision-making of the company comes amid a wider clash over AI’s role
in national security and concerns about how increasingly capable
machines could be used in high-stakes situations involving lethal force,
sensitive information or government surveillance.
OpenAI strikes deal with Pentagon hours after Anthropic was punished
Hours after its competitor was punished, OpenAI CEO Sam Altman announced
on Friday night that his company struck a deal with the Pentagon to
supply its AI to classified military networks, potentially filling a gap
created by Anthropic’s ouster.
But Altman said that the same red lines that were the sticking point in
Anthropic’s dispute with the Pentagon are now enshrined in OpenAI’s new
partnership.
“Two of our most important safety principles are prohibitions on
domestic mass surveillance and human responsibility for the use of
force, including for autonomous weapon systems,” Altman wrote, adding
that the Defense Department “agrees with these principles, reflects them
in law and policy, and we put them into our agreement.”
Altman also said he hopes the Pentagon will “offer these same terms to
all AI companies” as a way to “de-escalate away from legal and
governmental actions and toward reasonable agreements.”
Trump and others lash out at Anthropic
Trump said Anthropic made a mistake trying to strong-arm the Pentagon.
He wrote on Truth Social that most agencies must immediately stop using
Anthropic's AI but gave the Pentagon a six-month period to phase out the
technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke
company to dictate how our great military fights and wins wars!” he
wrote in all caps.
Months of private talks exploded into public debate this week and hit a
stalemate when Amodei said his company “cannot in good conscience
accede” to the demands.

Pages from the Anthropic website and the company's logos are
displayed on a computer screen in New York on Thursday, Feb. 26,
2026. (AP Photo/Patrick Sison)

Anthropic can afford to lose the contract. But the government's
actions posed broader risks at the peak of the company’s meteoric
rise from a little-known computer science research lab in San
Francisco to one of the world’s most valuable startups.
For hours before the president’s decision, top Trump appointees
from the Pentagon and the State Department criticized Anthropic on
social media, but their complaints contradicted one another.
Top Pentagon spokesman Sean Parnell said Anthropic’s unwillingness
to go along with the military’s demands was “jeopardizing critical
military operations and potentially putting our warfighters at
risk.” Hegseth said the Pentagon “must have full, unrestricted
access to Anthropic’s models for every LAWFUL purpose in defense of
the Republic.”
Trump's social media post said the company “better get their act
together, and be helpful” during the phase-out period or there would
be “major civil and criminal consequences to follow.”
However, Hegseth’s designation of Anthropic as a supply chain risk
relies on an administrative tool designed to prevent companies
owned by U.S. adversaries from selling products that are harmful
to American interests.
Virginia Sen. Mark Warner, the top Democrat on the Senate
Intelligence Committee, noted that this dynamic, “combined with
inflammatory rhetoric attacking that company, raises serious
concerns about whether national security decisions are being driven
by careful analysis or political considerations.”
Dispute shakes up Silicon Valley
The dispute stunned AI developers in Silicon Valley, where venture
capitalists, prominent AI scientists and a large number of workers
from Anthropic’s top rivals — OpenAI and Google — voiced support for
Amodei’s stand in open letters and other forums.
The moves could benefit OpenAI's ChatGPT as well as Elon Musk’s
competing chatbot, Grok, which the Pentagon also plans to deploy
on classified military networks. They could also serve as a warning
to Google, which has a still-evolving contract to supply its AI
tools to the military.
Musk sided with Trump’s administration, saying on his social media
platform X that “Anthropic hates Western Civilization.” Altman took
a different approach, expressing solidarity with Anthropic's
safeguards and opposing the government's “threatening” approach
while also working to secure OpenAI's deal with the Pentagon. It
marked the latest twist in OpenAI's longtime and sometimes
acrimonious rivalry with Anthropic, which was founded by a group of
ex-OpenAI leaders in 2021.
Retired Air Force Gen. Jack Shanahan, a former leader of the
Pentagon’s AI initiatives, wrote on social media this week that the
government “painting a bullseye on Anthropic garners spicy
headlines, but everyone loses in the end.”
Shanahan said Claude is already being widely used across the
government, including in classified settings, and Anthropic’s red
lines were “reasonable.” He said the AI large language models that
power chatbots like Claude, Grok and ChatGPT are also “not ready for
prime time in national security settings,” particularly not for
fully autonomous weapons.
Anthropic is “not trying to play cute here,” he wrote on LinkedIn.
“You won’t find a system with wider & deeper reach across the
military.”
___
O'Brien reported from Providence, Rhode Island.
All contents © copyright 2026 Associated Press. All rights reserved.