Direct threat: The Pentagon gave Anthropic an ultimatum: remove the safeguards on mass surveillance and fully autonomous weapons, or lose contracts worth $200 million. Dario Amodei refuses. Deadline: Friday, February 27, 2026, 5:01 PM ET.
A conflict that has been building for months
There are moments when a technology company must decide which principles it actually means. For Anthropic, that moment has arrived.
On February 26, 2026, CEO Dario Amodei published a statement that leaves little room for ambiguity. The Pentagon demands that Anthropic remove two specific safety limits from its contracts. Amodei refuses. And he does so not quietly or diplomatically, but with a public declaration that every reader, partner, competitor, and regulator in the world can see.
That is remarkable. Not only for what it says, but for the fact that things got this far at all.
Anthropic is not a small company pushing back against a large government. It is the company that was first to deploy its models on US government classified networks, first to deliver custom models for national security customers, and first to operate at the national laboratories. Claude is deployed for intelligence analysis, operational planning, and cyber operations. The partnership with the Pentagon exists not despite Anthropic's safety mission, but, in Amodei's framing, as part of it.
And yet the company now stands at a crossroads that nobody could have easily predicted.
What the Pentagon actually wants
The core of the conflict is precise and narrow. It is not about whether Anthropic can work with the military. It already does, extensively and at the most sensitive levels. The dispute concerns two specific use cases that Anthropic says it will never include in its contracts.
Mass domestic surveillance. Anthropic supports the use of AI for lawful foreign intelligence and counterintelligence missions. But systematically surveilling Americans based on movement data, browsing history, and social associations, without a warrant and at industrial scale, is something the company considers a fundamental threat to democratic values. Amodei notes that current law already allows the government to purchase such data from commercial providers without judicial oversight, something the Intelligence Community itself has acknowledged raises privacy concerns. AI makes it possible to assemble those scattered, individually innocuous data points into a comprehensive portrait of any person's life, automatically and at massive scale.
Fully autonomous weapons. Anthropic draws a distinction that many policymakers overlook. Partially autonomous weapons systems, such as drones deployed in Ukraine, are legitimate and sometimes necessary. But systems that select and engage targets without any human involvement represent a different category. Amodei's point is not ideological but technical: the current generation of AI models is simply not reliable enough to automate life-and-death decisions. He offered to work jointly with the Pentagon on R&D to improve the reliability of autonomous systems. That offer was not accepted.
The Pentagon's threats
Defense Secretary Pete Hegseth's response was considerably less subtle. According to NPR, Hegseth threatened three escalation levels in a meeting with Amodei.
First: cancellation of the $200 million contract. For a company with $14 billion in revenue, the financial hit is manageable, but the symbolic weight is significant.
Second: the "supply chain risk" designation. That label has until now been reserved for foreign adversaries, such as the Chinese company Huawei. It would mean that other Pentagon contractors could be prohibited from using Anthropic's tools, and in the worst case, any cooperation with the US government would become impossible.
Third: invocation of the Defense Production Act. That law gives the president broad authority to direct companies to prioritize production for national defense. The interpretation here would be that the Act can be used to compel Anthropic to remove its safety limits.
Pentagon spokesman Sean Parnell set the deadline on X: "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DoD."
The logical contradiction at the center
Politico described the combination of threats as "inherently contradictory." Geopolitical analyst Geoffrey Gertz of the Center for a New American Security put it sharply in a conversation with NPR: "It's this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the system no matter what."
Amodei himself pointed to the contradiction. One threat labels Anthropic a security risk. The other treats Claude as essential to national security. Both cannot be true simultaneously.
The central paradox: The Pentagon simultaneously claims that Anthropic is a threat to national security (supply chain risk) and that Anthropic's AI is indispensable for that same national security (Defense Production Act). These two positions are logically incompatible.
First in, now alone
Reporting from TechCrunch and The Guardian confirms that until this week, Anthropic was the only frontier AI lab cleared for use in classified military systems. Elon Musk's xAI reached a comparable agreement earlier this week, but without the safety limits Anthropic is defending.
That context makes the pressure understandable. The Pentagon does not want to depend on a supplier that imposes restrictions. With xAI available as an alternative, Anthropic's negotiating position is weaker. But Amodei chooses principle over contract.
"Our strong preference is to continue to serve the Department and our warfighters, with our two requested safeguards in place," he writes. "Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider."
The EU AI Act already decided this
This is where the story becomes particularly interesting for European readers.
The two limits Anthropic is defending are not company policy in Europe. They are legal prohibitions.
Mass surveillance via biometric identification in public spaces falls under Article 5 of the EU AI Act, which contains an explicit list of prohibited AI practices. Real-time biometric identification in public spaces for law enforcement purposes is banned, with narrow exceptions for serious crimes that require judicial oversight. AI systems that automatically map individual behavior to build predictive profiles also fall under this prohibition.
AI systems without meaningful human control are consistently categorized in the EU AI Act as high-risk systems subject to strict requirements. For military applications, the logic is identical: systems that make life-and-death decisions without human involvement are categorically problematic under the European framework.
Europe has, in other words, legally enshrined the limits that Anthropic is now voluntarily defending under direct government pressure. That is a fundamentally different approach.
European vs. American framework: In the EU, mass surveillance via biometrics and AI systems without human control are prohibited by law (EU AI Act, Article 5). In the US, they are the subject of contract negotiations between an AI company and the Department of Defense.
What Article 5 of the EU AI Act actually prohibits
Article 5 of the EU AI Act prohibits a specific set of AI practices deemed unacceptable regardless of application or purpose.
The prohibitions include systems that manipulate behavior below the threshold of conscious awareness, systems that exploit vulnerable groups, biometric classification based on protected characteristics, emotion recognition in workplaces and educational institutions, and, most directly parallel to the Pentagon conflict: real-time biometric identification in public spaces for law enforcement.
There is also a prohibition on AI systems for evaluating or classifying individuals based on social behavior or personality characteristics over extended periods, precisely the kind of consolidated profiling that Amodei warns about in his statement.
The law acknowledges that AI now makes this kind of profiling possible at a scale and speed that did not previously exist. That is the legislative logic behind the prohibition.
What this means for the rest of the AI sector
The outcome of this conflict has consequences that extend well beyond Anthropic.
If the Pentagon follows through on its threats, it sends a clear signal to every other AI company: safety limits are negotiable under sufficient pressure. OpenAI, Google, and xAI also supply the Pentagon. None of them have publicly stated what they permit and what they do not.
If Anthropic holds firm, it sets a precedent for whether private AI companies can legitimately place limits on government use of their technology, not as political obstacles, but as technical and ethical minimum standards.
Dario Amodei has positioned that argument carefully. He does not contest the Pentagon's right to make military decisions. He argues that his company cannot responsibly supply a product for this use: current models are simply not reliable enough for fully autonomous lethal decision-making.

That is a subtle but important distinction. It is not a political refusal. It is a technical claim: we cannot supply what you are asking for in a way that is responsible.
The Defense Production Act as an option
The possible invocation of the Defense Production Act deserves separate attention. That law, originally designed for wartime production of physical goods, gives the president broad authority to direct industrial production for national defense.
The interpretation that it could apply to software companies, and specifically to AI safety limits, would be without precedent. Legal scholars will disagree. But the signal is clear: if existing law is insufficient, the Pentagon is searching for other instruments.
Geoffrey Gertz of the Center for a New American Security noted that both threat instruments together are logically untenable. But in politics, that has not proven to be a barrier.
A mirror for Europe
There is a reason why this story matters for European AI governance, beyond the legal parallels.
It demonstrates that AI safety limits are not self-evident, not settled once established, and not uncontested. A government that exerts enough pressure, whether through a $200 million contract or a threatened designation as a national security risk, can place a company in a position where it must choose between principles and survival.
The EU AI Act does not fully resolve this problem. Article 5 prohibits specific applications, but enforcement is complex and the law is still in implementation. What the European framework does achieve is shifting the discussion from contract negotiations to legal obligations. The limits exist not because a CEO defends them, but because the legislature established them.
That is a fundamentally different governance model. And the conflict between Anthropic and the Pentagon illustrates precisely why the choice of that model has consequences.
Dario Amodei may change his mind in ten years. A CEO statement is not durable as a legal source. A law adopted by the European Parliament and the governments of 27 democratic member states has a different status.
Whether that law will hold under future political pressure is another question. But it is at least a question that must be asked and answered democratically, not in a meeting room between a technology company and a minister of defense.