In a move that has sent shockwaves through Silicon Valley and the global defense industry, the Pentagon—recently renamed the Department of War—has officially designated Anthropic PBC as a “supply chain risk.” The unprecedented move was followed immediately by an executive directive from President Donald Trump, ordering all federal agencies to cease the use of Anthropic’s technology, including its flagship AI model, Claude.
The decision marks the culmination of a months-long, high-stakes standoff between the Trump administration and Anthropic CEO Dario Amodei over the ethical boundaries of AI in warfare. As of February 28, 2026, the company that once positioned itself as the “safety-first” leader in the AI race finds itself in the same legal category as foreign adversaries like Huawei.
The Conflict: “Red Lines” vs. “Unrestricted Access”
The heart of the dispute lies in two specific “red lines” that Anthropic refused to cross. During intense negotiations with Secretary of War Pete Hegseth, Anthropic maintained that its AI models should not be used for:
Fully Autonomous Lethal Weapons: Systems that can “pull the trigger” without direct human intervention.
Mass Domestic Surveillance: The use of AI to analyze vast amounts of data to profile or spy on American citizens.
Secretary Hegseth, however, characterized these guardrails as “corporate virtue-signaling” that prioritizes “Silicon Valley ideology” over American lives. In a scathing post on X, Hegseth stated that the military must have “full, unrestricted access” to AI models for every lawful purpose.
“Contractual Nuclear War”: The Impact on Contractors
The “supply chain risk” designation is a devastating blow. Under 10 U.S.C. § 3252, this label doesn’t just end Anthropic’s $200 million government contract; it effectively bars any defense contractor—from Boeing to Lockheed Martin—from using Claude in any capacity related to military work. Experts have described the move as “the contractual equivalent of nuclear war.”
President Trump’s directive includes a six-month phase-out period for critical agencies like the CIA and NSA, which have relied heavily on Claude for intelligence analysis since 2024. However, the president warned that he would use “the full power of the presidency” to ensure compliance, hinting at potential civil or criminal consequences if the company fails to cooperate with the transition.
OpenAI and xAI Step Into the Vacuum
As Anthropic is forced out, competitors are moving in. Hours after the blacklist was announced, OpenAI CEO Sam Altman confirmed a new deal with the Pentagon to deploy models on classified networks. Interestingly, Altman claimed that OpenAI’s agreement actually includes the very safety principles—prohibitions on autonomous lethal force and mass surveillance—that Anthropic fought for.
Meanwhile, Elon Musk’s xAI (Grok) has also been cleared for classified use, with Musk publicly criticizing Anthropic’s stance.
The Legal Battle Begins
Anthropic has confirmed it will challenge the “supply chain risk” designation in federal court, calling the move “legally unsound” and “unprecedented for an American company.” The outcome of this lawsuit will likely define the boundaries of state power over private technology for decades to come.
For now, the message from the White House is clear: In the 2026 era of national security, the government—not the developers—will decide the “fate of the country.”