The information space around military AI is being weaponized against us

Weaponized · Caroline Orr Bueno, PhD

The current controversy between Anthropic and the Pentagon has become a flashpoint in the national debate over military artificial intelligence, but the understandable focus on fully autonomous weapons systems is distorting the information space in this area and artificially narrowing the scope of discussion. The resulting impacts on our information environment—agenda narrowing, issue substitution, and complexity reduction—are usually associated with outcomes of narrative warfare, not healthy, organic debate, so it’s worth taking a closer look at what’s actually going on here.

According to recent reports, the Pentagon is pressuring AI company Anthropic to loosen the safety guardrails on its flagship AI system, Claude, warning that failure to comply could jeopardize the company’s $200 million Defense Department contract and result in its classification as a “supply chain risk.” Thus far, public discussion has centered almost entirely on one dominant question: should AI be deployed in military settings without a human “in the loop”?

But that framing is narrowing the debate at a critical moment. The focus on human involvement in autonomous systems—while absolutely a critical issue that needs to be discussed—has drowned out broader questions about whether advanced AI should be embedded into military decision-making at all, as well as who should control its deployment, how oversight should be structured, and what constitutional processes are being bypassed as the Pentagon pushes forward with AI integration.

The controversy is reshaping the information space, concentrating attention on one company and one question while minimizing the governance structures around them and all but shutting out the rest of the country—including voters and lawmakers. This is something we should confront head-on, not relegate to a peripheral issue.

Let’s look at how this information distortion is actually taking place so you can spot it as it’s happening.

Issue Substitution

The phrase “human in the loop” has become the shorthand proxy for safety in many operational settings. Department of Defense Directive 3000.09 requires “appropriate levels of human judgment” over autonomous weapon systems, while international humanitarian law scholars often frame the issue around “meaningful human control.”

This framing has political appeal because it offers a familiar safeguard and implies some degree of continuity with existing military doctrine. It also conveniently avoids confronting more disruptive questions.

Yet decades of research complicate the reassurance that phrase supposedly provides. Studies of automation bias show that humans supervising automated systems frequently defer to machine outputs, even when those outputs are wrong. One landmark study found that operators working with seemingly reliable automation detected only about 30 percent of system failures. In another series of studies, roughly 65 percent of participants followed incorrect automated directives. A similar study found that 39 of 40 participants followed faulty automated recommendations despite having the ability to verify them independently.

The human-in-the-loop debate, therefore, fails to resolve deeper questions about delegation of authority, acceleration of decision cycles, and institutional accountability. By centering the discussion on whether a human remains present, the current controversy sidelines the question of whether advanced AI systems should structure military decision pipelines in the first place.

The integration of AI into military systems also alters the speed of decision-making. Research in strategic stability and crisis management has long emphasized the stabilizing role of deliberation and careful, slow review. Automation often encourages the opposite.

A war-game simulation described this week by New Scientist found that large language models chose nuclear options (literally, nuclear strikes) in approximately 95 percent of test runs when objectives were loosely constrained and the model was trying to choose a decisive action that would lead to victory on the battlefield. The finding illustrates how AI models operating with ambiguous goals can produce extreme outputs while still following directions.

Currently, the media and most of the public are focused on the question of whether humans will approve those outputs. Less attention is being paid to how AI integration changes the timing and framing of the options themselves. If AI systems generate rapid threat assessments and routinely recommend escalation, the menu of choices presented to decision-makers will be artificially narrowed before human review even begins. And even with a full range of options, the research shows that human oversight doesn’t necessarily correct machine errors.

Again, by focusing so much of our attention on questions about fully autonomous AI systems in military settings, we are implicitly and uncritically accepting the legitimacy of human-in-the-loop systems and overlooking extremely important questions about what role—if any—AI should play in lethal decision-making at all. This dynamic mirrors a common feature of narrative warfare called issue substitution, which describes the process of substituting foundational issues or questions with narrower, more manageable proxies.

Agenda Narrowing

The Constitution assigns Congress the power to declare war and regulate the armed forces. Yet the integration of frontier AI systems into military infrastructure is proceeding primarily through executive branch contracting and internal policy guidance.

The Anthropic standoff has focused attention on the Secretary of Defense and a single company’s leadership, crowding out substantive public debate about congressional authorization, statutory guardrails, and procurement oversight specific to autonomous AI systems.

Congress has held hearings on AI and national security, but no comprehensive statutory framework governing autonomous lethal systems has been enacted. The National Defense Authorization Acts of recent years include AI funding and research directives, yet they do not establish detailed deployment constraints comparable to those governing nuclear command authority.

These gaps have resulted in an alarming amount of power being handed over to the executive branch by default, effectively allowing it to unilaterally make sweeping decisions about the current and future trajectory of the US military and foreign policy with no oversight. While we’re focused on negotiations between the executive branch and a private company, issues such as congressional authorization, legislative design, and oversight—as well as the will of the public—are being pushed out of the frame entirely.

Another dimension receiving limited scrutiny concerns concentration of influence. A small number of AI companies—Anthropic, OpenAI, Google DeepMind, and others—are shaping the technical trajectory of systems that may be integrated into defense operations that will influence the direction of our military for years to come. The debate has thus far focused on whether Anthropic should comply with Pentagon requests. Left largely untouched is the question of whether any single private firm should have such leverage over the backbone of military AI capability, and whether a single unelected CEO or handful of CEOs should be given the power to shape current and future military strategy for an entire nation.

In narrative warfare, the dynamics described above are categorized as agenda narrowing, which occurs when, out of many relevant questions or issues, only a small subset receives sustained attention or is addressed at all.

AI Surveillance and Civil Liberties

While the Anthropic controversy has largely centered on battlefield and targeting scenarios, AI integration into defense systems also vastly expands surveillance capabilities, and the Pentagon’s recent demands suggest that it has plans to use AI for both foreign and domestic surveillance.

AI-driven pattern recognition, anomaly detection, and large-scale data analysis can dramatically increase the scope of surveillance. The Foreign Intelligence Surveillance Act (FISA) provides statutory guardrails for certain intelligence activities, yet AI-enhanced surveillance raises new questions about scale and inference capabilities.

Scholars of surveillance technology have noted that advanced analytics can infer sensitive attributes from non-sensitive data, effectively expanding surveillance beyond explicit collection boundaries. This means that even if the raw data collected for surveillance purposes remains the same, the meaning extracted from it could expand dramatically with the use of AI. This matters because current laws primarily govern the collection of data, not inference capabilities or analytic techniques such as network restructuring, predictive modeling, or behavioral clustering—all of which could expand exponentially as AI is integrated into surveillance systems.

Furthermore, since even the people developing AI models often don’t fully understand how these models work, surveillance activities could produce inferences that are unexplainable even when correct. In other words, an AI system could deliver accurate inferences without being able to show how it reached them—making it effectively impossible to audit these systems for bias or privacy violations.

Most importantly, given that AI is expected to completely transform surveillance capabilities beyond anything we can predict currently, there is an urgent need to formulate new laws and policies in order to ensure that our civil liberties are not sacrificed along the way. These safeguards will need to be developed and implemented on a continuing basis as the technology develops further and we learn more about its capabilities, but the process and framework for doing so should obviously be put in place before the AI is. I’m not hearing much, if any, discussion of that right now.

Information Space Distortion

There’s a reason politicians often like controversies: they simplify things. Information ecosystems also tend to prioritize identifiable, explainable conflict (e.g., Secretary of Defense vs. CEO) over messier, more complicated dilemmas. Media coverage reinforces that oversimplification because two-sided disputes are easier to narrate than open-ended governance questions.

As a result of this complexity reduction, discourse tends to narrow around technical details, personality clashes, and narrative clichés (e.g., David and Goliath; good vs. evil), while broader questions—congressional authorization, procurement transparency, war powers, surveillance expansion—receive little or no sustained public scrutiny.

Ultimately, this shapes public understanding in dramatic, yet often unseen ways. By artificially narrowing the boundaries of discussion, our discourse has gotten stuck on the question of whether humans should remain in the loop, rather than asking whether the loop itself is expanding beyond constitutional limits.

We have seen these dynamics emerge in our information environment again and again during critical periods of modern history, particularly in areas involving technology, the military, and abuses of power. For example, after the Edward Snowden revelations, public debate rapidly shifted from sweeping constitutional questions to the technical details of specific programs like Section 215 metadata collection, producing a discourse focused on teasing apart nuances like metadata vs. content instead of questioning executive surveillance authority or the existence of secret courts. (Those bigger-picture discussions did take place to varying extents, but in spite of, not because of, mainstream media coverage and public statements by politicians.) Similarly, after 9/11, debates over the Patriot Act frequently centered on specific surveillance tools, while broader issues—like the permanent expansion of executive emergency authority—received far less sustained public attention.

More recently, during discussions about whether TikTok should be banned in the U.S. due to it being a national security risk, media coverage primarily emphasized concerns surrounding the company’s data storage location, the lack of algorithmic transparency, and the corporate ownership structure, rather than the broader precedent for executive authority to ban communication platforms, First Amendment implications, and the consequences of allowing that type of national security determination to be made without judicial review.

All of these discussions are important and all of these issues should be laid out on the table, but right now that’s not happening with the current debate over the use of AI in military decision-making. Instead, we are seeing the terms of discussion set for us by the media, politicians, powerful corporations, and interest groups that stand to benefit immensely from you being unaware that there is even a debate to be had about not-fully-autonomous AI weapons systems.

While the Anthropic controversy has illuminated a fracture line, it has also narrowed our field of vision. The question of whether a human should stay in the loop is clearly a very important one, and nothing in this article is meant to suggest otherwise. However, the broader debate over military AI will shape the next generation of state power, and we cannot let this be collapsed into a single question, even if that single question is extremely critical. Expanding the conversation beyond a single company’s guardrails is necessary if democratic governance is to keep pace with technological capability. Yet that is the exact thing that the current discourse is discouraging us from doing, and it’s worth asking why—and who benefits.
