Anthropic unveiled a sneak peek of its newest artificial intelligence model on Tuesday. The AI startup calls the new program “Mythos” and says it is one of the most powerful software tools the company has ever built. Everyday users cannot access the system yet; instead, Anthropic gave early access to a small, exclusive group of partner organizations. These major tech companies will use the new AI specifically to hunt down hidden security flaws and protect critical computer networks from hackers.
The limited release of Mythos kicks off a new security program that Anthropic calls “Project Glasswing.” Under the initiative, twelve major partner organizations will deploy the model for heavy defensive security work: scanning massive libraries of open-source code and searching deep inside complex software systems for hidden vulnerabilities. Notably, Anthropic said engineers did not train Mythos specifically for cybersecurity; the model’s general capabilities nonetheless make it adept at finding hidden coding mistakes that human engineers missed.
The early testing results are striking. Anthropic claims that over the past few weeks alone, Mythos discovered thousands of “zero-day vulnerabilities.” In cybersecurity, a zero-day vulnerability is a critical software flaw that hackers can exploit immediately because the software’s creator does not know it exists. Anthropic said many of the critical flaws Mythos found had been hidden in popular software for 10 or even 20 years.
Mythos is a major upgrade for the broader Claude AI family. The company describes it as a general-purpose “frontier model,” meaning the software can handle highly complex tasks, like writing original computer code or reasoning through difficult, multi-step logic problems. The companies currently testing the software read like a “who’s who” of the technology industry.
The Project Glasswing partnership includes major corporations such as Amazon, Apple, Broadcom, Cisco, CrowdStrike, Microsoft, and Palo Alto Networks. The partners have promised to share what they learn from using the AI so the rest of the tech industry can better protect its own software. While the general public will not get access anytime soon, Anthropic plans to let roughly 40 other organizations test the preview version.
Anthropic also said it is in discussions with federal government officials about how they might use Mythos. Those conversations come at a tense moment: Anthropic and the Trump administration are currently fighting a major legal battle. The Pentagon recently labeled the AI startup a severe supply-chain risk after Anthropic refused to let the military use its software for autonomous targeting or for surveillance of United States citizens.
The public actually learned about Mythos long before Tuesday’s official announcement. Last month, a data leak spoiled the company’s launch plans: security researchers stumbled upon a publicly accessible database containing a draft blog post describing the new AI, then known internally by the code name “Capybara.” The leaked document boasted that the new software tier was significantly larger and vastly more intelligent than any model the company had ever built. Anthropic later admitted a simple human error caused the leak.
The leaked document also contained a serious warning. Anthropic acknowledged that because the new AI is so good at finding software bugs, it could pose a major security threat if it fell into the wrong hands. While the partner companies will use the software to find and fix bugs, a malicious hacker could weaponize the same technology to find hidden flaws and exploit them before anyone else notices. That potential for misuse is exactly why Anthropic is keeping the software locked down tightly with its trusted corporate partners.
This is not the first time Anthropic has struggled to keep its own data secure. Just last month, the company accidentally leaked nearly 2,000 of its own source code files during a botched software update, and when engineers rushed to clean up the mess, they inadvertently knocked thousands of unrelated GitHub code repositories offline. Despite these embarrassing internal mistakes, the company hopes Mythos will eventually make the entire internet a much safer place.










