There are no heroes in the world of GenAI, but it seems Anthropic CEO Dario Amodei has at least some boundaries he won’t cross when it comes to military use of his company’s AI. Anthropic’s Claude is already widely used by the Department of War for intelligence analysis, cyber operations and the like, but according to Reuters (thanks, PC Gamer), the U.S. government has spent months pressuring Anthropic to also allow its use in mass domestic surveillance and fully autonomous weapons. Which is to say, to spy on U.S. citizens, and to “decide” to kill people without human involvement. Now Amodei has put out a statement saying his company will not back down.
Amodei’s statement makes clear that Anthropic is not against the use of its AI by the U.S. military, explaining that “Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.” He also boasts that Anthropic has turned down hundreds of millions of dollars in contracts from the Chinese Communist Party, on the grounds that its AI might be used militarily against the U.S. But there is a line: while Amodei says they “have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” what he won’t allow is the use of Anthropic’s product to spy on American citizens, nor for it to be placed entirely in charge of weaponry.
As farcical as it might sound for a CEO to say he has no problem with his hallucinating generative AI being used to spy on foreign citizens, or being part of partially autonomous weapons, yet to take moral objection to tweaks on this, it remains a defiant stance against pressure from the U.S. government, coming from both the Department of War and the Pentagon. That pressure is to remove safeguards from the AI: officials have said Claude will be pulled from military operations if those safeguards are maintained, and thus that contracts will be lost. Amodei says Anthropic has been told it will be designated “a supply chain risk” if it doesn’t back down, a term he claims is “reserved for U.S. adversaries, never before applied to an American company,” and that the government may invoke the Defense Production Act to force the safeguards’ removal.
“Regardless,” says Amodei, “these threats do not change our position: we cannot in good conscience accede to their request.”
It’s perhaps somewhat telling that Amodei is well aware a delusional AI can never be put in autonomous control of weaponry, as frightening as it is to realize just how entwined this collection of LLMs already is with the military.
In reaction to this, employees at both Google and OpenAI—two of Anthropic’s main rivals—have signed an open letter supporting the company in defiance of the Department of War’s threats, saying they “stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”