The Pentagon-Anthropic dispute isn’t about a contract. It’s about whether democratic governance can keep pace with the technologies now shaping how America fights.
Tony, you are highlighting some of the broader, fundamental issues for our democracy that stem from this particular case - thank you for illuminating them.
Question: what should we be seeing from Congress on this topic, how is it sufficient (or not), and who specifically within Congress needs to step up and take more courageous action to exercise their oversight, legal, and appropriations powers?
Erik — thank you for reading my article, and for this comment! You've put your finger on exactly the right pressure point, and it's a question I wrestled with as I worked through this piece and the broader argument I'm making.
Here's my honest answer: AI is a paradigm-shifting technology. It's not tanks, bombers, artillery, or even cruise missiles. It's a technology that can be weaponized to completely change the character and destructive power of warfare. In that way, AI more closely resembles chemical, biological, and nuclear weapons: technologies with the power to end civilization as we know it. We have law, policy, and strict procedures that govern and guide the use of nuclear weapons. We should treat the potential of AI-enabled warfighting capabilities with the same seriousness. We cannot leave decisions about how it is used to any one person, or to profit-driven CEOs.
Americans deserve an open, democratic debate on how we use this powerful new tool in defense of the United States. We must not wait until we experience, or unleash, a Hiroshima-like event to have that debate. But we can't have companies telling the government how to use the technology either. Our adversaries will not be so constrained; neither can we be.
Congress must enact clear and durable statutory authorities to govern the use of AI in military operations — the kind of framework we built, however imperfectly, around nuclear weapons. What we're getting instead is silence, punctuated by occasional hearing-room theater.
As I argued in my piece, the tools are already there. Congress regulates military acquisition, imposes conditions on weapons systems, and shapes contractor behavior through defense authorization and appropriations every year. It can specify which AI applications the military can and cannot pursue, mandate transparency requirements, and define the human decision-making thresholds that must remain intact. It just hasn't used those tools here — not with any seriousness or urgency.
As for who needs to step up: the Senate Armed Services Committee and its House counterpart are the logical leads, particularly members with jurisdiction over defense procurement and emerging technologies. The Intelligence Committees have a role too, given how quickly these systems can migrate from battlefield applications to uses bordering on domestic surveillance.
But leadership matters more than committees. What's missing isn't authority — it's will.
The deeper problem, as Alan Rozenshtein framed it in his Lawfare analysis, is that the governance of the most consequential technology of this century is currently being negotiated in ad hoc conversations between executive branch officials and startup CEOs (https://www.lawfaremedia.org/article/congress-not-the-pentagon-or-anthropic-should-set-military-ai-rules).
Congressional action is the only way to create constraints that survive a change in vendor or administration. That's the ask. And so far, very few members are treating it with the seriousness it demands.
Also recommending this very helpful Rozenshtein Lawfare article, “What the Defense Production Act Can and Can’t Do to Anthropic” (https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can't-do-to-anthropic).