Who's Setting the Rules for Military AI? Not Congress
The Pentagon's showdown with Anthropic reveals a constitutional gap that only Congress can close.
The Governance Gap Is Operational
Editor’s Note: An earlier version of my previous article, “Who Governs the Machines That Now Shape War?”, indicated that Claude was available on the Department of Defense’s GenAI.mil platform. According to DefenseScoop’s reporting on the December 9, 2025, launch, Google Cloud’s Gemini for Government was the first — and at launch, only — AI product deployed on GenAI.mil. The earlier piece has been corrected. Accuracy matters, especially when the subject is institutional accountability.
This week, the argument I’ve been making in this series stopped being an analysis and became breaking news.
On Tuesday, Secretary of Defense Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline: remove the guardrails on the Claude AI system — specifically, the company’s restrictions on mass domestic surveillance and fully autonomous weapons — or lose a $200 million Pentagon contract. Hegseth also threatened to invoke the Defense Production Act and designate Anthropic a supply chain risk, a designation that could effectively blacklist the company from work across the federal government. As of this writing, Anthropic has not budged.
If you want to understand why this matters beyond the contract dispute itself, stay with me. Because what’s happening isn’t really about Anthropic versus the Pentagon. It’s about who governs the machines that now shape war — and what happens when the institution constitutionally responsible for answering that question has gone largely silent.
Lawful Use Is the Right Starting Point — But It’s Not the Finish Line
Let’s give credit where it’s due. The Pentagon’s core principle — that the military, having purchased a capability with taxpayer dollars, should be free to deploy it for any lawful purpose across its authorized missions — is defensible. It reflects the basic structure of civil-military authority in a constitutional democracy.
When DoD’s own AI Ethics Principles, adopted in 2020, call for AI that is responsible, equitable, traceable, reliable, and governable, the department is right to insist that those principles, not a vendor’s business preferences, govern how its tools are used.
Pentagon CTO Emil Michael said it plainly in a February 19 meeting with reporters: “You can’t have an AI company sell AI to the Department of War and don’t let it do Department of War things.” That’s a reasonable operational frustration. I’ve sat in rooms where that kind of friction is maddening.
But ‘lawful use’ is only as strong as the laws defining it. And here is where my argument must go well beyond where most commentary has been willing to.
Congress Hasn't Shown Up
The hard truth is that Congress has not written the rules. Not comprehensively, not durably, and not in time. The Biden administration’s 2024 National Security Memorandum on AI (previously available online, now removed) and updated DoD directives on autonomous weapons exist, but they are executive instruments — reversible, narrow in scope, and no substitute for statute.
There is no comprehensive legislative framework governing which AI applications the military can and cannot pursue, what companies must build into — or be forbidden from building into — systems sold to the government, or what transparency and reporting requirements give the public genuine visibility into how these tools are actually used.
That is a legislative gap. And it has consequences that are now playing out in public.
Lawfare published an analysis today making precisely this point: Congress — not the Pentagon, not Anthropic — should set the rules for military AI. As the piece notes, the Pentagon originally agreed to Anthropic’s contractual guardrails on autonomous weapons and mass surveillance. It is now threatening to use a Korean War-era statute — the Defense Production Act — to compel compliance. That is what happens when the institution responsible for writing durable rules declines to do so: the executive branch improvises, and private companies become the last line of accountability.
That is not a system. It is a series of workarounds waiting to fail.
The Single-Vendor Problem
There is a detail in this dispute that hasn’t gotten the attention it deserves. According to multiple reports, Claude is currently the only frontier AI model authorized for use on DoD’s classified networks. The department has no classified-ready backup. This matters enormously for the governance argument: the Pentagon’s aggressive posture — ultimatums, DPA threats, supply chain risk designations — is at least partly a function of its own failure to avoid single-vendor dependency, a condition the Biden administration’s late-term AI directives had already warned against.
A department that built its classified AI infrastructure around a single provider, accepted that provider’s usage restrictions as contractual terms, and now wants to retroactively rewrite those terms under threat of law is not demonstrating institutional strength. It is demonstrating institutional improvisation. And improvisation under pressure is not governance.
Convenient Vacuums
There is something else worth naming — even if it’s uncomfortable. Governance vacuums are sometimes convenient for those operating within them. Oversight creates friction. Reporting requirements generate work. Legislative mandates constrain operational flexibility. When Congress is not asking hard questions, the day-to-day work of deploying new capabilities can move faster and with fewer interruptions.
I understand that logic. I’ve lived inside institutions where it operates. And I am not suggesting that the career professionals working on military AI are acting in bad faith. Most are not. They are doing serious work under serious pressure, with inadequate guidance from the branch of government constitutionally responsible for providing it.
But ‘Congress isn’t watching’ is not a governance strategy. It is a condition that eventually produces exactly the kind of public rupture we are witnessing — where the absence of clear rules means every friction point becomes a crisis, and every crisis becomes a political confrontation, rather than a policy problem with a policy solution.
The pattern is not unique to AI. Defense contractors working in surveillance and data analytics have faced similar tensions — congressional scrutiny of lawful-use boundaries, civil liberties objections, and pushback from oversight committees. The Anthropic dispute is not a novelty; it is the latest chapter in a recurring story about the civil-military-corporate relationship. What’s new is the speed, the stakes, and the absence of any statutory framework designed to handle it.
The Fox Guarding the Henhouse
My argument in this series has been consistent: the question of who governs military AI cannot be answered by the Secretary of Defense alone. Not because the department lacks competence or good faith — but because the fox-and-henhouse problem is structural, not personal.
Institutions asked to govern themselves without external accountability tend, over time, to govern in their own interests. That is not a criticism. It is institutional physics.
Congress must act. Not to micromanage operations, but to establish the durable legal architecture that makes legitimate oversight possible: statutory definitions of prohibited AI applications in warfare, transparency and reporting requirements with meaningful enforcement mechanisms, and independent review processes with actual authority. The National Defense Authorization Act is one vehicle. There are others. The point is that the authority must be legislative — not merely executive — and it must be exercised, not merely delegated.
Until that happens, we will keep having the wrong argument — vendors versus the Pentagon, ethics versus operations, Friday deadlines and DPA threats — when the right argument is between the democratic institutions responsible for governing both.
The Republic's Next Move
Strategic literacy means being able to see the architecture beneath the headlines.
This week’s dispute is not, at its core, about Anthropic’s values or the Pentagon’s frustration. It is about whether the republic’s governing institutions are fulfilling their constitutional responsibilities in a domain that is moving faster than their oversight functions and habits.
The answer, for now, is no. But the answer can change — if citizens understand what’s at stake well enough to demand it.
That is why this work matters.
Be Intrepid — Tony
February 2026
A Republic aware of its fractures | An American committed to its repair.
Sources & Further Reading
DefenseScoop, “DOD initiates large-scale rollout of commercial AI models and emerging agentic tools,” Dec. 9, 2025: https://defensescoop.com/2025/12/09/genai-mil-platform-dod-commercial-ai-models-agentic-tools-google-gemini/
DefenseScoop, “Pentagon CTO urges Anthropic to ‘cross the Rubicon’ on military AI use cases amid ethics dispute,” Feb. 19, 2026: https://defensescoop.com/2026/02/19/pentagon-anthropic-dispute-military-ai-hegseth-emil-michael/
CNN, “Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails,” Feb. 24, 2026: https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei
TechCrunch, “Anthropic won’t budge as Pentagon escalates AI dispute,” Feb. 24, 2026: https://techcrunch.com/2026/02/24/anthropic-wont-budge-as-pentagon-escalates-ai-dispute/
Lawfare, “What the Defense Production Act Can and Can’t Do to Anthropic,” Feb. 26, 2026: https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can't-do-to-anthropic
NBC News, “Tensions between the Pentagon and AI giant Anthropic reach a boiling point,” Feb. 20, 2026: https://www.nbcnews.com/tech/security/anthropic-ai-defense-war-venezuela-maduro-rcna259603
Department of Defense, AI Ethics Principles, Feb. 24, 2020: https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
The “disappeared” memorandum:
White House, National Security Memorandum on AI, Oct. 24, 2024: https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence/
DoD Directive 3000.09, Autonomy in Weapon Systems (updated 2023): https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
Congressional Research Service, AI and National Security: https://crsreports.congress.gov/product/pdf/R/R45178
H.R. 2670, National Defense Authorization Act for FY2024, 118th Congress: https://www.congress.gov/bill/118th-congress/house-bill/2670