A War Without Rules, and a Bill with Three
Senator Slotkin’s AI Guardrails Act is the right move—and a first step. The governance gap it reveals runs deeper than three prohibitions can close.
On Tuesday, Senator Elissa Slotkin introduced the AI Guardrails Act—a five-page bill establishing three clear prohibitions on how the Department of War may use artificial intelligence: no AI involvement in nuclear launch decisions, no AI-enabled mass surveillance of Americans, and no autonomous lethal systems operating without a human in the decision chain.
She introduced it the same week a federal judge in California is preparing to hear arguments in Anthropic v. Department of Defense.
That timing is not a coincidence. It’s a diagnosis.
From Ultimatum to Federal Court
When I wrote about this dispute in February, the Pentagon had issued an ultimatum. Since then, the dispute has escalated on every front.
Let’s get into it.
On March 4, the Department of Defense formally designated Anthropic a supply chain risk—a classification previously reserved for foreign adversaries like Huawei—effectively blacklisting the company across the federal government. Anthropic sued. President Trump ordered all federal agencies off Claude. Microsoft, a coalition of retired military chiefs including former CIA Director Michael Hayden, AI researchers at Google and OpenAI, and civil liberties organizations from the Cato Institute to the Electronic Frontier Foundation have all filed supporting briefs in Anthropic’s defense.
The first court hearing is set for March 24th.
And here is the detail that strips away every abstraction: according to NBC News, the U.S. military is currently using Palantir’s AI systems—which rely in part on Anthropic’s Claude—to help identify potential targets in ongoing airstrikes in Iran.
The “War Department” that designated a company a national security threat is simultaneously dependent on that company’s technology in active combat operations.
That’s not a procurement dispute. It’s a governance failure with real-time operational consequences.
Discipline, Not Hesitation
Senator Slotkin’s bill is deliberately restrained.
It does not attempt to comprehensively regulate military AI.
It does not slow the Department’s adoption of the technology or second-guess commanders' operational judgments.
It picks three lines and draws them clearly—the same three lines that Anthropic tried to hold through contractual terms and that the Pentagon is now fighting in court to erase.
That framing matters. The bill’s premise is not rooted in hesitation. It’s focused and disciplined. It recognizes that some limits are not constraints on military advantage—they are the conditions under which democratic militaries remain distinguishable from the adversaries they oppose.
Slotkin put the argument plainly in a phone call with NBC News:
“The Pentagon was able to target Anthropic in this case and is going to spend the next year and God knows how many millions of dollars ripping out Anthropic from all the classified systems—something that’s going to cost the taxpayer an enormous amount of money over a dispute that could have been handled if we just had law.”
She’s right. And she’s describing, precisely, the argument I have been making in my writing on the governance of AI in warfare.
The Loophole the Bill Doesn’t Close
There’s something the bill’s three prohibitions illuminate that deserves more attention than it has received.
At the center of the surveillance dispute is what privacy experts call a “data broker loophole.” According to reporting by Axios and Bloomberg, the Pentagon sought the ability to use Claude not merely to analyze classified intelligence—Anthropic had agreed to that—but to process unclassified commercial bulk data on Americans: geolocation records, web browsing histories, credit card transactions. Data that is legally purchasable. Data that would require a warrant to collect directly. The only thing that has changed is the machine doing the analysis.
That distinction—legal to buy, unconstitutional to collect—sits at the heart of a governance architecture that predates this dispute and will outlast it. The Intelligence Community has its own framework for handling commercially available information, developed over years of difficult internal deliberation. I know that framework well; I worked to align DoD policy with it during my time as the Intelligence Advisor to the Deputy Secretary of Defense. That architecture has real teeth—but it wasn’t designed for AI operating at scale across aggregated datasets. How AI transforms the legal purchase of commercial data into something that, in practice, functions like warrantless surveillance is a structural question this bill does not yet answer. I’ll return to that point later, in a dedicated piece, because it deserves the full treatment.
What I can say here is this:
Senator Slotkin’s bill closes part of the gap, and that is a positive and necessary step. But it doesn’t close the whole gap. The danger is that a beginning, without a commitment to the rest of the structure, can become an excuse to stop building.
The Cost of Waiting
The operators in the field—the analysts sorting intelligence in real time, the commanders integrating AI outputs into targeting decisions—aren’t waiting for Congress to finish its deliberations. They’re working now, inside a legal and institutional framework that’s being contested simultaneously in a California federal court, a Senate committee room, and the Secretary of War’s office.
Senator Slotkin identified the problem directly in a recent Armed Services Committee hearing:
“It’s really up to the humans, and in this case the Secretary of Defense, to ensure that there’s human redundancy for the foreseeable future—and that is what we just don’t have confidence in.”
That observation deserves to be sat with, and its implications taken seriously.
Human redundancy is not a technical specification. It is a constitutional commitment.
And right now, it rests on the judgment of a single cabinet official operating without statutory guidance.
From First Step to Framework
Congress stepping in changes the dynamic—but only if it goes further. Drawing red lines is necessary. Building the durable oversight architecture that makes those lines enforceable is the work that determines whether governance keeps pace with the technology itself.
Our republic has two converging opportunities this week: a bill that begins to set limits, and a court case that forces the question of institutional authority into public view. Used together, they could generate the kind of statutory momentum that neither litigation nor executive policy alone can sustain.
Senator Slotkin has drawn three lines. Congress now has to decide whether those lines mark a beginning — or the extent of its ambition.
The Republic Question: If the rules governing AI in warfare are being contested simultaneously in a federal courtroom and a Senate committee room—while those same tools are used in active combat—what does democratic governance of military technology actually require? And who is responsible for building it?
Be Intrepid — Tony Johnson
A Republic aware of its fractures | An American committed to its repair.
#StrategicLiteracy #ReconnectingTheRepublic