
Judge Blocks Trump Administration’s Ban on AI Startup Anthropic


A federal judge in San Francisco just handed a major win to the artificial intelligence startup Anthropic. Judge Rita Lin granted a preliminary injunction that stops the Trump administration from blacklisting the company or banning its Claude AI models from federal agencies. This ruling follows a high-stakes court hearing where the startup challenged the government’s recent crackdown.


Judge Lin did not mince words in her written order. She described the administration’s actions as “classic illegal First Amendment retaliation.” She specifically criticized the government for trying to brand an American company as a national security threat simply because it disagreed with federal policies. She called the government’s logic “Orwellian” and argued that companies should not be punished for speaking out.

The legal battle began after President Trump and Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk.” This label is usually reserved for foreign adversaries, making Anthropic the first American company to face such a designation. Following the label, Trump ordered agencies to phase out the company’s technology, claiming the startup was run by “radical” individuals who didn’t understand the real world.

The tension stems from a failed $200 million contract negotiation. Anthropic wanted guarantees that the Pentagon would not use its AI for domestic mass surveillance or fully autonomous weapons. When the two sides could not reach an agreement, the government moved to blacklist the company. Judge Lin noted that while the military is free to choose other vendors, it cannot break the law to punish a company it dislikes.

This injunction provides immediate relief for Anthropic. Before this ruling, major partners like Amazon and Microsoft were being forced to certify that they weren’t using Claude in their military projects. The judge’s order now prevents the administration from enforcing those rules while the broader lawsuit continues to move through the court system.


Anthropic leaders said they are grateful for the court's quick decision. They emphasized that while the lawsuit was necessary to protect their reputation and partners, they still want to work productively with the government, and that their primary goal remains building safe and reliable AI for all Americans.
