Pentagon Declares Anthropic a Supply Chain Risk, Cuts Ties with AI Firm

    The label the US Department of Defense just pinned on Anthropic is one normally reserved for foreign adversaries suspected of sabotage or espionage. Not American AI startups. On March 5, 2026, a senior Pentagon official confirmed that the Department — now referred to internally by Defense Secretary Pete Hegseth as the Department of War — had officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately. The designation forces defense contractors to certify they are not using Anthropic's Claude models in any work related to military contracts.

    This is not routine. No domestic American company is known to have previously received this designation. The only publicly reported comparable case involved Acronis AG, a Swiss cybersecurity company with alleged Russian ties. What pushed the Pentagon to apply the same framework to a San Francisco AI lab built by former OpenAI researchers is a story that gets more complicated the closer you look at it.

    How the Dispute Actually Started

    The conflict traces back to a $200 million contract Anthropic signed with the Pentagon in July 2025, making Claude the first frontier AI model approved for deployment on classified military networks. That deal came with conditions: Anthropic's acceptable use policy, which the Pentagon agreed to at signing. The problems started when the Defense Department sought to renegotiate those terms, pushing for broader language that would allow Claude to be used for, in their framing, 'all lawful purposes.'

    Anthropic drew two firm lines. It would not permit its technology to be used in fully autonomous lethal weapons — systems that select and engage targets without meaningful human oversight. And it would not allow Claude to be deployed for mass domestic surveillance of American citizens. CEO Dario Amodei said publicly that he could not in good conscience agree to the Pentagon's requested contract modifications. The Pentagon's position was blunt: a private contractor does not get to restrict how the military uses a critical capability.

    Negotiations ran for weeks. By late February, with no agreement reached, President Trump directed all federal agencies to immediately cease using Anthropic's technology and gave the Defense Department a six-month phaseout period. Hegseth then announced the supply chain risk designation publicly on social media before Anthropic had received any formal written notice — a sequence that itself raised procedural eyebrows among legal experts.

    The Pentagon's designation of Anthropic marks an unprecedented use of supply chain security law against a domestic technology firm.

    What the Designation Actually Does — and Doesn't Do

    Hegseth's initial framing was sweeping: no contractor, supplier, or partner that does business with the US military could conduct any commercial activity with Anthropic. That language alarmed a lot of people, including Palantir, which had built a significant partnership with Anthropic and derives roughly 60 percent of its US revenue from government contracts.

    But Anthropic's legal team pushed back, and the actual statutory authority appears to be narrower than Hegseth implied. The relevant supply chain law limits the designation's reach to Claude's use as a direct part of military contracts. Amodei clarified in a public statement that even for defense contractors, the designation cannot restrict uses of Claude or business relationships with Anthropic that are unrelated to their specific Pentagon work. Microsoft, Google, and Amazon all issued similar statements confirming that Claude remains available through their platforms for non-defense customers.

    Legal analysts have gone further, with several arguing the designation won't survive court scrutiny at all. The statutes governing supply chain risk designations include procedural requirements — including notice and opportunity to respond — that Hegseth's immediate, unilateral action appears to have bypassed. Anthropic has already said it will challenge the designation in court, calling it legally unsound and unprecedented.

    The Political Undercurrents Are Hard to Ignore

    Several observers have noted that the timing and tone of the Pentagon's actions look less like a national security determination and more like political retaliation. Amodei reportedly told staff in an internal memo that the administration dislikes Anthropic in part because he has not donated to Trump or offered what he called 'dictator-style praise.' He did not attend Trump's inauguration. David Sacks, the White House's AI and crypto adviser, had previously accused Anthropic of running a 'regulatory capture strategy based on fear-mongering.'

    The contrast with OpenAI is stark. Hours after the Pentagon announced Anthropic's blacklisting, OpenAI announced a deal to replace Anthropic's Claude in classified military environments. OpenAI president Greg Brockman had recently donated $25 million to a pro-Trump super PAC. OpenAI's initial deal with the Pentagon included language about 'all lawful purposes' — the exact phrasing Anthropic refused — though OpenAI subsequently added additional protections after employee pushback.

    Critics from across the political spectrum have called the designation troubling. Dean Ball, a former Trump White House AI adviser, described it publicly as a sign of the government abandoning strategic clarity in favor of what he called 'thuggish' behavior that treats domestic innovators worse than foreign adversaries. A group of retired defense officials and policy leaders wrote to Congress defending Anthropic and calling the move a dangerous precedent. Hundreds of employees from OpenAI and Google also urged the Pentagon to reverse course.

    The Strangest Detail: Claude Is Still Being Used in Iran Operations

    Perhaps the most confounding aspect of this entire episode is that even as the Pentagon labeled Anthropic a national security risk, it was simultaneously using Claude to support US military operations in Iran. CNBC reported that Anthropic's models continued to be used in active conflict support after the blacklisting. Experts cited in coverage noted the obvious contradiction: if Claude genuinely posed a supply chain threat, continuing to rely on it during active military operations makes little strategic sense.

    Amodei acknowledged that ongoing military access to Claude during the Iran operations was itself a priority — he said ensuring warfighters were not deprived of important tools mid-operation was something Anthropic took seriously, even amid the dispute. That detail complicates any clean narrative about this being a straightforward national security call.

    An Unexpected Upside: Consumer Surge

    While the defense and contractor fallout is real, Anthropic has seen a striking surge in consumer support. The company reported that more than one million people signed up for Claude each day during the week of the dispute — pushing it past ChatGPT and Google's Gemini as the top AI app in more than 20 countries on Apple's App Store. Public sentiment, at least among tech-aware consumers, appears to have read Anthropic's stance as principled rather than obstructionist.

    Whether that goodwill translates into retained users is a different question. But it does suggest that the Pentagon's move to pressure Anthropic into compliance may have backfired in a meaningful way — hardening public perception of Anthropic as the company that refused to let AI be used for autonomous killing and mass domestic surveillance, regardless of who was asking.

    Where This Goes From Here

    Talks between Anthropic and the Pentagon are reportedly ongoing, according to the Financial Times, suggesting neither side has fully closed the door. The six-month transition period gives both parties room to negotiate, even if the public posturing has been aggressive. Amodei has said he wants to find ways Anthropic can serve the military within its two narrow ethical limits, and that ensuring a smooth transition — if no agreement is reached — remains a priority.

    The legal challenge will likely be the decisive front. If courts find the designation procedurally flawed or beyond the Pentagon's statutory authority, it collapses on its own. Legal experts who have analyzed the applicable statutes are skeptical the designation will hold. The broader question it raises — whether a government can use supply chain security law as leverage against a domestic company over a policy disagreement — is one that will matter well beyond this particular dispute, regardless of how it resolves.
