Who Governs the Machines That Now Shape War?
The Pentagon-Anthropic dispute isn’t about a contract. It’s about whether democratic governance can keep pace with the technologies now shaping how America fights.
The call didn’t come from a congressional committee or a federal courtroom. It came through a back channel — one tech executive to another — and it set off a chain of events that now threatens to reshape how the United States governs artificial intelligence in war.
According to The Washington Post, after the January 3rd raid that captured Venezuelan President Nicolás Maduro — an operation in which scores of Venezuelan security personnel were killed — an executive from Anthropic, the maker of the Claude AI model, contacted an executive at the defense firm Palantir to ask whether Claude had been used in the operation. Palantir relayed the inquiry to the Pentagon, where officials interpreted it as an expression of disapproval. That moment, quiet as it was, became the fracture line in what is now one of the most consequential technology disputes in American defense policy.
This isn’t a story about a contract negotiation gone sideways. It’s about a structural question the republic has been slow to answer:
When advanced AI shifts from helping write job descriptions to supporting operations in which people die, who holds the authority to set the rules — and who holds them accountable when things go wrong?
What $200 Million Bought — and Didn’t
Until recently, Anthropic occupied an enviable position within the defense establishment. The company held a contract worth up to $200 million, and its Claude model was, by multiple accounts, the only frontier AI system authorized for use on the Pentagon’s classified networks — deployed through a partnership with Palantir’s Maven Smart System, as Brandi Vincent reported for DefenseScoop. Claude was also integrated into the broader GenAI.mil platform the Department of Defense launched in December to more than 3 million military members, civilian employees, and contractors.
But the relationship began to fracture after the Maduro raid. Pentagon leaders questioned whether a company that appeared to second-guess the operational use of its technology could be fully relied upon. One administration official told The Washington Post that Anthropic “expressed concern over the Maduro raid, which is a huge problem for the department.” Anthropic has pushed back on that characterization, telling The Post it had not discussed specific operations with the Pentagon or expressed concerns to industry partners beyond routine technical matters. Whatever the precise exchange, the damage was done. Trust — the invisible currency of any defense partnership — was suddenly in question.
Who Sets the Rules?
The dispute runs deeper than any single operation. At its core are two competing conceptions of who should define the boundaries of AI in military settings.
The Pentagon’s position is straightforward: if the government purchases an AI tool with taxpayer dollars, the military should be free to use it for any lawful purpose across the full spectrum of its missions.
Defense Secretary Pete Hegseth has made speed the watchword of his AI agenda. In a January 2026 directive, he ordered the department to move from “campaign planning to kill chain execution” and wrote that the military “must approach risk tradeoffs, ‘equities,’ and other subjective questions as if we were at war,” according to The Washington Post. Emil Michael, the Pentagon’s Undersecretary of Defense for Research and Engineering and chief technology officer, put the matter bluntly to DefenseScoop: “You can’t have an AI company sell AI to the Department of War and then not let it do Department of War things.”
Undersecretary Michael went further, urging Anthropic to make what he called an irreversible choice. “I believe and hope that they will ‘cross the Rubicon,’” he told DefenseScoop — language that carries its own weight. When a senior defense official invokes Caesar’s march on Rome to describe what he expects from a technology partner, the subtext is not subtle: capitulate or be left behind.
Anthropic’s position rests on two explicit red lines embedded in its usage policy: no mass surveillance of Americans, and no fully autonomous weapons systems without meaningful human control.
These aren’t afterthoughts. CEO Dario Amodei laid them out publicly in a January essay — published just two weeks after Hegseth’s directive — in which he warned about the dangers of AI-enabled drone swarms and, as The Washington Post reported, the risk of “democratic governments turning them against their own people to seize power.” Amodei has argued that democracies should use AI for national defense, with one critical caveat: not in ways that would make them indistinguishable from the autocracies they oppose.
The Washington Post reported that other leading AI firms — OpenAI, Google, and Elon Musk’s xAI — have agreed to let the Pentagon use their models for “all lawful purposes” on unclassified networks and are working on agreements for classified systems. That context matters. Anthropic is not being asked to do something unprecedented; it’s being asked to do what its competitors have already accepted. But its refusal — or at least its hesitation — has made it the test case for whether principled guardrails carry a price in the defense marketplace. And the price being discussed is severe.
Punishing the Company That Showed Up
Senior Pentagon officials have signaled they are considering designating Anthropic a “supply chain risk” — a classification normally reserved for foreign adversaries like Huawei and Kaspersky — as Axios first reported. Such a designation wouldn’t just end Anthropic’s Pentagon contract. It would require all defense contractors to certify they don’t use any Anthropic model, effectively blacklisting the company from the entire defense technology base.
As Alan Z. Rozenshtein, a law professor at the University of Minnesota and senior editor at Lawfare, noted in a recent analysis, it’s far from clear that such a designation would be legally sound. The relevant statutes — designed to address foreign sabotage and subversion — were never intended for a domestic company that openly restricts certain uses through a license agreement. The only time a similar order has been issued was against a Swiss cybersecurity firm with reported ties to Russia. Anthropic, whatever one thinks of its policies, is not that.
The move would also be strategically counterproductive. Anthropic was the first frontier AI lab to deploy on classified networks. It showed up when others hadn’t. Punishing the company that leaned in — while rewarding those that simply agreed to fewer constraints — sends a signal that will not be lost on the next generation of technology firms considering defense work.
A Gap Where Law Should Be
But here’s what troubles me most about this dispute, and what I think should trouble you:
The rules governing how the most powerful technology of this century gets used in war are being set through ad hoc negotiation between an executive branch official and a startup CEO, with no durable statutory framework and no meaningful democratic input.
Professor Rozenshtein put the deeper problem precisely: the issue isn’t who wins this particular negotiation — it’s that the negotiation is happening at all in place of legislation. If Anthropic holds firm, the Pentagon simply gets unconstrained AI from someone else. Only congressional action creates constraints that survive a change of vendor or a change of administration.
Congress already regulates military acquisition extensively. It imposes conditions on weapons systems, intelligence collection, and contractor behavior through standing procurement law and annual defense authorization. It has the tools to specify which AI applications the military can and cannot pursue, what companies must build into — or be forbidden from building into — systems sold to the government, and what transparency and reporting requirements give the public visibility into how these tools are actually used. What it hasn’t done is use them.
Full disclosure: I served in the Biden Administration, so I have a direct stake in what comes next. The 2023 policy requiring human decision-making authority over AI-enabled use of force remains in effect — but The Washington Post reports it “will be reviewed as needed.” That quiet caveat deserves attention. Existing guardrails are under institutional pressure even though they have not been formally repealed.
Who Pays the Cost?
In the policy debate, it’s easy to lose sight of the people at its center. Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, offered a reminder in her comments to DefenseScoop. She characterized the supply chain risk threat as counterproductive and the dispute as fundamentally a “tussle over control and power,” not a genuine security concern. But her sharpest observation was about who pays the cost of unresolved governance: “Ultimately, the person I worry about is the operators who are being asked to do incredibly dangerous, incredibly complex operations in a world that is adopting AI. We need to figure this out for them.”
Frank Kendall, who served as Air Force Secretary under the Biden administration and oversaw the development of autonomous warplanes, was characteristically direct in his comments to The Washington Post.
“The military’s function is the application of violence,” he said, “and if you’re going to give anything to the Defense Department, it’s likely going to be used to help kill people.”
That candor is clarifying. It strips away the euphemism and forces the question: if that’s the function, shouldn’t the democratic process — not backroom deals — determine the rules that govern how AI serves it?
Building Without Blueprints
This dispute will shape defense procurement for a generation. It will determine whether ethical commitments in military AI are treated as principled positions or competitive liabilities — and whether Congress can muster the urgency to write durable rules before the boundaries of AI in warfare are entirely set by executive edicts and corporate capitulation.
We’re building the infrastructure of future conflict right now, in real time, without blueprints.
The question isn’t which technologies we choose. It’s whether democratic authority governs them — not as an afterthought, but as the architecture.
If the infrastructure of war increasingly resides in private hands, how should a democratic society assert its authority over the technologies that may decide life and death in its name?
That’s a republic-level question. And it’s ours to answer.
Be Intrepid — Tony Johnson
Reconnecting the Republic
February 2026