Inside Anthropic’s Killer-Robot Dispute With the Pentagon
Right up until the moment that Pete Hegseth moved to terminate the government’s relationship with the AI company Anthropic, its leaders believed that they were still on track for a deal. The Pentagon had unilaterally insisted on renegotiating its contract with Anthropic, the company whose AI model is the only one currently allowed into the federal government’s classified systems, in order to remove ethical restrictions that the company had placed on the model’s use.
According to a source familiar with the negotiations, on Friday morning, Anthropic received word that Hegseth’s team would make a major concession. The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic. It would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loopholey phrases such as “as appropriate,” suggesting that the terms were subject to change based on the administration’s interpretation of a given situation.
[Read: What happens to Anthropic now?]
Anthropic’s team was relieved to hear that the government would be willing to remove those words, but one big problem remained: On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart. Soon after, Hegseth directed the U.S. military’s contractors, suppliers, and partners to stop doing business with Anthropic. The list of companies that contract with the military is extensive, and includes Amazon, the company that supplies much of Anthropic’s computing infrastructure. The Department of Defense did not respond to a request for comment. A spokesperson for Anthropic referred me to the company’s statement addressing Hegseth’s remarks.
My source, whom I am granting anonymity because they are not authorized to talk about the negotiations, also shed light on the disagreement between Anthropic and the Pentagon over autonomous weapons, machines that can select and engage targets without a human making the final call. The U.S. military has been developing these systems for years and has budgeted $13.4 billion for them in fiscal year 2026 alone. They run the gamut from individual drones to whole swarms that can operate in the air and at sea.
Anthropic had not argued that such weapons should not exist. On the contrary, the company had offered to work directly with the Pentagon to improve their reliability. Just as self-driving cars are now in some cases safer than those driven by humans, killer drones may someday be more accurate than a human operator, and less likely to kill bystanders during an attack. But for now, Anthropic’s leaders believe that their AI hasn’t reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves.
According to my source, at one point during the negotiation, it was suggested that this impasse over autonomous weapons could be resolved if the Pentagon would simply promise to keep the company’s AI in the cloud, and out of the weapons themselves. The argument was that the models could be kept outside so-called edge systems, be they drones or other kinds of autonomous weapons. They might synthesize intelligence before an operation, but they wouldn’t actually be making kill decisions. The AI’s hands would be clean of any deadly errors that the drones made.
But Anthropic wasn’t satisfied with this solution. The company reasoned that in modern military AI architectures, the distinction between the cloud and the edge is no longer all that well defined. It’s less a wall and more of a gradient. Drones on the battlefield can now be orchestrated through mesh networks that include cloud data centers. And although the drones are designed to survive on their own, the military’s impulse will always be to maintain as much connectivity as possible between them and the most powerful models in the cloud; the better the connection, the more intelligent the machine.
[Read: Anthropic is at war with itself]
Indeed, the Pentagon has been working hard to keep the cloud as involved as possible. Part of the goal of its Joint Warfighting Cloud Capability is to push computing resources closer to the fight. The AI may be sitting on an Amazon Web Services server in Virginia rather than in a war zone overseas, but if it’s making battlefield decisions, then from an ethical standpoint, that’s a distinction without much difference. Anthropic ended up discarding the idea that the cloud provision could resolve the problem. It didn’t take much analysis, according to the source close to the talks.
Anthropic’s leaders might have hoped that other AI companies would hold a similar line. Earlier in the week, they had reason to believe that OpenAI might. CEO Sam Altman had said that, like Anthropic, OpenAI would refuse to allow its models to be used in autonomous weapons systems. But as he made those statements, Altman was in the midst of negotiating a new deal with the Pentagon, which was announced just hours after Anthropic’s deal fell apart. (Altman did not respond to a text message requesting comment.) Yesterday, OpenAI (which has a corporate partnership with The Atlantic) released a statement that describes the broad contours of the agreement and touts the fact that the company’s AI will be deployed only in the cloud.
OpenAI’s employees may be curious to know what, if anything, has changed since Altman originally expressed his solidarity with Anthropic. As of this afternoon, nearly 100 of them had signed an open letter indicating that they supported the same red lines as Anthropic on mass domestic surveillance and autonomous weapons. If, on Monday, Altman finds himself face-to-face with them in the office, he may have to explain why the cloud-only arrangement that Anthropic dismissed out of hand proved so compelling to him.
