How AI firm Anthropic wound up in the Pentagon’s crosshairs
Standoff with DoD over Claude chatbot reignites debate over how AI will be used in war – and who will be held accountable
Until recently, Anthropic was one of the quieter names in the artificial intelligence boom. Despite being valued at about $350bn, it rarely generated the flashy headlines or public backlash associated with Sam Altman’s OpenAI or Elon Musk’s xAI. Its CEO and co-founder Dario Amodei was an industry fixture but hardly a household name outside of Silicon Valley, and its chatbot Claude lagged in popularity behind ChatGPT.
That perception has shifted as Anthropic has become the central actor in a high-profile fight with the Department of Defense over the company’s refusal to allow Claude to be used for domestic mass surveillance or for autonomous weapons systems that can kill without human input. Amid tense negotiations, the AI firm rejected a Pentagon deadline for a deal last week. The move led Pete Hegseth, the defense secretary, to accuse Anthropic of “arrogance and betrayal” of its home country and to demand that any companies working with the US government cease all business with the AI firm.