AI Warfare Is Outpacing Our Ability to Control It
Public unease over the United States’ and Israel’s use of artificial intelligence in the war in Iran is growing. People are asking the questions that governments should have answered long before deploying these systems in combat: What role does algorithmic targeting play? Who bears responsibility when something goes catastrophically wrong and innocent civilians are killed?
A troubling pattern is emerging. Governments are racing to integrate AI into warfare without fully understanding its accuracy, limitations, or consequences. The school bombing in Minab that killed nearly 200 children and teachers has been blamed primarily on human error, but it reinforces the danger of acting quickly on faulty intelligence. Meanwhile, disputes between AI companies and governments over the military use of advanced systems, and the public backlash against those who support such use, reveal widespread concern. The US government’s attempted retaliation against Anthropic for trying to put guardrails around how its systems are used in warfare further underscores how governments, keen to find advantage on the battlefield, are emphasizing the wrong issues.
This concern is entirely justified. We have already seen how, when targeting data is outdated, misinterpreted, or unverified, the consequences are unconscionable. American officials have confirmed the use of AI technology to assist in airstrikes in Iran. Despite claims of increased precision, unarmed civilians have been killed during these attacks. When the US was pressed for answers on reported AI-enabled strikes that led to civilian deaths in Iraq in 2024, the Department of War said it was not possible to determine whether AI had been used.