AI and the Dangerous Fiction of ‘All Lawful Use’

Tech Policy Press · Dunstan Allison-Hope, Iain Levine

Much has transpired since Anthropic and the United States Department of Defense went public in February with their dispute over whether the US government should be able to require AI companies to permit “all lawful use” of their technologies by the government. While the “all lawful use” framing may seem reasonable at first glance, its adoption as a universal principle for government use of AI risks widespread violations of international human rights law (IHRL) and international humanitarian law (IHL).

Anthropic and the DOD failed to reach agreement on two questions in particular: mass surveillance of US citizens, and the development of lethal autonomous weapons operating without human control. The government designated Anthropic a “supply chain risk,” a designation the company is challenging in court.

In recent weeks, as the international community and human rights activists have reacted to the US and Israeli offensive against Iran and Lebanon, even more serious concerns have been raised about AI-supported human decision-making and its role in failures to protect civilians under the laws of war.