Anthropic CEO Dario Amodei is talking to the U.S. Department of Defense again, just days after negotiations collapsed and President Donald Trump banned federal agencies from using the company’s AI tools. Amodei is currently meeting with defense official Emil Michael to try to hammer out a new agreement for the military to use Anthropic’s Claude AI models.
The original talks fell apart last Friday. Following the breakdown, Defense Secretary Pete Hegseth threatened to label Anthropic a national security risk. Michael even took to social media to call Amodei a “liar” with a “God complex.”
The two sides are fighting over how the military can use the AI. Anthropic previously won a $200 million contract to put Claude on classified government networks, and the military reportedly used the technology during conflicts with Iran. However, the company demanded strict rules as a condition of continuing the partnership: it wants to bar the government from using its tools for domestic surveillance or for building autonomous weapons. The Pentagon pushed back, demanding the freedom to use the technology for any lawful purpose.
In a memo to his staff, Amodei explained why he walked away. He said the Defense Department offered to accept his rules only if he deleted a specific phrase banning the analysis of bulk-acquired data. Amodei refused, noting that mass data analysis was his biggest fear.
Right after the talks broke down, rival company OpenAI swooped in and signed its own deal with the Pentagon. The move sparked a public backlash: people flocked to download Anthropic’s app while deleting OpenAI’s ChatGPT. Facing the criticism, OpenAI CEO Sam Altman admitted his company had rushed the deal and promised to add his own safety limits.
Altman later went online to defend Anthropic, stating the government should not label his rival a security risk. Other tech giants agree. A major industry group representing companies like Google and Nvidia sent a letter to Hegseth to protest the idea of blacklisting an American tech company.
Former OpenAI employees founded Anthropic in 2021 after a dispute over that company’s direction, building it specifically to serve as a safety-first alternative in the industry. Government officials, however, have repeatedly criticized the startup for caring too much about AI safety limits.