AI Security · Anthropic · LLM · Dual Use · Vulnerability Research

Anthropic's Most Powerful Model Is Being Released for Security Work. That's the Point, and Also the Problem.

4 min read

Anthropic previewed Mythos through Project Glasswing yesterday: controlled access for 12 partner organizations to use its most capable model for defensive security work. The partners read like a who's who of enterprise security: Amazon, Apple, Cisco, CrowdStrike, Microsoft, Palo Alto Networks, the Linux Foundation. Forty additional organizations get access beyond the core partnership, with up to $100 million in usage credits on the table.

The stated goal is vulnerability scanning: give these organizations the ability to systematically review first-party and open-source code at a scale and depth that weren't previously possible. Anthropic says Mythos "far exceeds" its current public models on coding, academic reasoning, and cybersecurity tasks.

Anthropic also said that the same capabilities would make Mythos a serious offensive tool in the wrong hands.

The Dual-Use Tension, Stated Plainly

Most AI safety disclosures are carefully hedged. Anthropic's framing on this one is unusually direct. A model this capable at understanding code, identifying vulnerability patterns, and reasoning through exploitation chains is not neutral infrastructure. It's a force multiplier for whoever uses it.

The logic behind Project Glasswing is that getting the capability into defenders' hands first matters. If Mythos can scan a codebase for vulnerabilities faster and more thoroughly than existing tools, then an organization that runs it against its own code before attackers obtain equivalent capability is in a better position than one that waits.
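
To make that concrete, here is a minimal sketch of what defender-side scanning could look like: walk a repository, send each source file to a model, and collect whatever gets flagged. Anthropic hasn't published any Mythos API details, so the model ID below is a hypothetical placeholder, the prompt is my own, and the loop simply reuses the shape of the existing Anthropic Messages API as a stand-in.

```python
# Minimal sketch of LLM-driven vulnerability scanning, assuming access to a
# Mythos-class model. The "mythos-preview" model ID is hypothetical; the
# client and messages.create() call follow the real anthropic Python SDK.
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "Review the following source file for security vulnerabilities. "
    "For each finding, give the line, the weakness class (e.g. a CWE), "
    "and a one-sentence rationale. Reply NONE if the file is clean.\n\n"
)

def scan_repo(root: str, pattern: str = "**/*.py") -> dict[str, str]:
    """Send each matching source file to the model; collect raw findings."""
    findings: dict[str, str] = {}
    for path in pathlib.Path(root).glob(pattern):
        source = path.read_text(errors="ignore")
        response = client.messages.create(
            model="mythos-preview",  # hypothetical ID, illustration only
            max_tokens=1024,
            messages=[{"role": "user", "content": PROMPT + source}],
        )
        report = response.content[0].text
        if report.strip() != "NONE":
            findings[str(path)] = report
    return findings

if __name__ == "__main__":
    for file, report in scan_repo(".").items():
        print(f"--- {file}\n{report}\n")
```

The hard parts at scale are chunking large files, deduplicating findings, and triage, not the loop itself. The loop is trivial; the capability lives entirely in the model, which is why access control carries all the weight here.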

That logic holds, as long as the access stays controlled. Twelve partners today. Forty more organizations. Usage credits that will be spent, results that will be logged, model behaviors that will be documented. The window between "limited preview" and "available on the gray market" is narrower than most companies plan for.

I've watched this pattern with other dual-use tools. The access policies that look tight on day one tend to develop gaps by month six. Someone's contractor has access. Someone shares credentials. A company in the partner list gets acquired. The model capability doesn't stay inside a clean perimeter forever.

What This Means for Defenders

If you're doing security architecture or running a SOC at a company that isn't in the Glasswing partner list, this announcement matters for a few reasons.

First, the bar for automated vulnerability discovery just moved. Mythos-level capabilities will eventually be available more broadly, whether through Anthropic's own product expansion or through competitors catching up. The window where skilled human analysts are the primary mechanism for code review is getting shorter.

Second, the offensive side is watching. Any public disclosure about a model's vulnerability-discovery capabilities is also a roadmap for how attackers should think about their own AI toolchain. In practice, the gap between offensive and defensive AI capability is mostly a matter of access and financing, not fundamental technical know-how.

Third, and most practically: if your organization's security posture assumes that sophisticated automated vulnerability discovery is not in your threat model, update that assumption now.

On Anthropic Specifically

Project Glasswing is also a play by Anthropic for relevance in enterprise security. The Pentagon situation earlier this year made clear that Anthropic's government relationships are complicated. A controlled security partnership with civilian enterprise players is a different kind of foothold, and $100 million in usage credits is a real investment in building it.

None of that makes the initiative wrong. The capability exists, it's going to be used somewhere, and a deliberate deployment to established security organizations with clear use cases is probably better than the alternative. But understanding why a company makes a decision and whether the decision is good are different questions.

Project Glasswing is good for the partners. It's interesting for the industry. The question is whether the controlled preview holds long enough to actually matter.

Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch at /contact if you want the uncomfortable conversation before the incident report.