
Published: Wed - Apr 22, 2026

Anthropic’s Project Glasswing: What It Is, Why It Matters, and What It Means If You're Building with AI


On April 7, 2026, Anthropic did something unusual: it announced a powerful new AI model and immediately said it was too dangerous to release to the public.

The model is called Claude Mythos Preview. The initiative built around it is called Project Glasswing. And the reason Anthropic is withholding it - while simultaneously giving access to AWS, Apple, Google, Microsoft, Nvidia, and around 40 other organisations - tells you something important about where AI capabilities have arrived in 2026.

If you're a founder, operator, or business leader commissioning AI systems - or simply trying to understand what's happening at the frontier - here's what you need to know.

What Is Project Glasswing?

Project Glasswing is Anthropic's coordinated effort to use its most capable AI model - Claude Mythos Preview - to find and fix critical security vulnerabilities in the world's most widely used software, before attackers can exploit them.

The project is named after the glasswing butterfly (Greta oto), known for its transparent wings - a nod to transparency in how AI capabilities are being disclosed and deployed.

What makes this significant is what Claude Mythos Preview can actually do. In Anthropic's own testing over the past month, the model:

- Identified thousands of zero-day vulnerabilities - previously unknown security flaws - across every major operating system and every major web browser
- Found flaws that had survived decades of expert human review and millions of automated security tests
- Demonstrated the ability to autonomously chain multiple vulnerabilities together to achieve full system compromise - without human guidance

That last capability is the one that triggered Anthropic's decision not to release Mythos publicly. A model that can independently plan and execute multi-step cyberattacks - even if built for defensive purposes - would represent an unprecedented risk in the wrong hands.

How Project Glasswing Actually Works

Rather than a full public release, Anthropic structured Project Glasswing as a controlled, collaborative initiative with three tiers of access:

1. Launch partners (AWS, Apple, Google, Microsoft, Nvidia, Cisco, JPMorganChase, CrowdStrike, Broadcom, Palo Alto Networks, the Linux Foundation) - using Mythos Preview to scan and harden their own critical systems.
2. ~40 additional organisations - businesses and open-source maintainers responsible for critical software infrastructure, given access to scan first-party and open-source codebases.
3. Open-source community - Anthropic is committing $4 million in direct donations to open-source security organisations to fund vulnerability patching.

Anthropic has committed $100 million in Mythos Preview usage credits across the initiative. All participants are required to share their findings with the broader industry - this is explicitly designed as a collective defence effort, not a competitive advantage for any single company.

Why Anthropic Chose Not to Release Mythos Publicly

This is the first time in nearly seven years that a leading AI lab has publicly withheld a frontier model due to safety concerns. The last comparable decision was OpenAI in 2019, when it delayed releasing GPT-2.

Anthropic's reasoning is direct: Mythos Preview is so capable at finding and exploiting vulnerabilities that making it publicly available - even via API - would give attackers capabilities that currently require elite, nation-state-level human expertise. The company has reportedly briefed senior government officials warning that Mythos-class models make large-scale cyberattacks significantly more likely in 2026.

Their stated plan: use Project Glasswing to get defenders ahead of the curve, develop new safeguards, and then eventually deploy Mythos-class models safely at scale. Newton Cheng, Anthropic's Frontier Red Team Cyber Lead, was clear: "The rate of AI progress means it will not be long before such capabilities proliferate, potentially beyond actors committed to deploying them safely."

What This Means for Businesses Building with AI in 2026

Project Glasswing is not just a cybersecurity story - it's a signal about where AI capabilities are and what responsible deployment looks like at the frontier. For founders and operators building AI systems, there are three practical implications:

1. Security is now a scope decision, not an afterthought

As AI systems handle more sensitive customer data - conversations, purchase history, credentials, health records - the attack surface grows with them. The vulnerabilities Mythos found didn't appear overnight; many had existed undetected for years. If you're commissioning an AI chatbot, automation pipeline, or data-processing system, security architecture needs to be part of scoping, not a post-launch patch.

2. Open-source dependencies need scrutiny

Project Glasswing places particular emphasis on open-source software - for good reason. Most modern AI systems are built on open-source foundations (Python libraries, frameworks, infrastructure tools). Glasswing's $4 million in donations to open-source security organisations reflects how much of the world's attack surface runs on code maintained by volunteers with no dedicated security team. If your AI system runs on these foundations - and it almost certainly does - the vulnerabilities being patched through Glasswing are directly relevant to you.
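One practical starting point - a minimal sketch, not a full security audit - is flagging unpinned entries in a requirements file, since a dependency without an exact version can silently pull in new, unreviewed code on the next install. The `audit_requirements` function and the sample entries below are illustrative assumptions, not part of Project Glasswing or any specific tool:

```python
import re

def audit_requirements(lines):
    """Flag dependencies that are not pinned to an exact version.

    Unpinned packages can resolve to a different (and unreviewed)
    release every time the system is rebuilt."""
    findings = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Package name is everything before the first version operator
        name = re.split(r"[<>=!~\[]", line)[0].strip()
        if "==" not in line:
            findings.append((name, "unpinned"))
    return findings

reqs = [
    "requests==2.31.0",  # pinned: reproducible builds
    "flask>=2.0",        # range: may drift between installs
    "numpy",             # unpinned: any version may be pulled in
]
print(audit_requirements(reqs))  # → [('flask', 'unpinned'), ('numpy', 'unpinned')]
```

A check like this catches only one class of risk; dedicated scanners that match dependencies against known-vulnerability databases go much further, but pinning versions is the cheapest first step.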

3. India's digital infrastructure is directly in scope

Indian startups and enterprises are among the fastest-growing adopters of cloud infrastructure and AI tooling - which means they're building on the exact systems Glasswing is scanning: AWS, Google Cloud, Microsoft Azure, Linux-based infrastructure. The zero-day vulnerabilities being discovered and patched through this initiative are in software your business is almost certainly running today. The DPDP Act (2023) already mandates responsible data handling; the Glasswing vulnerabilities make compliance not just a legal requirement but an operational one.

How BeGig AI Studio Thinks About Security in Every Build

At BeGig AI Studio, security and data handling aren't features we add at the end - they're scope decisions we make at the beginning.

Every project we scope includes explicit decisions about:

- What data the AI system touches, stores, and transmits - and what it doesn't need to
- Which open-source dependencies the system relies on, and whether they're actively maintained
- How the system handles failure states - what happens when an API goes down, a model returns unexpected output, or a user attempts injection
- Whether the build is compliant with India's DPDP Act requirements for data consent, storage, and transfer
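The failure-state decisions above can be sketched in code. The snippet below is a minimal illustration under assumed interfaces - `client`, `validate`, and `fallback` are hypothetical stand-ins for a model API, an output check, and a safe default, not any actual implementation:

```python
def call_model(client, prompt, validate, fallback, retries=2):
    """Call a model with retries, output validation, and a safe fallback.

    - Transient failures (timeouts, outages) trigger a retry.
    - Responses that fail validation are discarded, not passed through.
    - If nothing valid is produced, degrade gracefully to `fallback`."""
    for _ in range(retries + 1):
        try:
            reply = client(prompt)   # may raise on timeout or outage
        except Exception:
            continue                 # transient failure: try again
        if validate(reply):          # reject unexpected model output
            return reply
    return fallback                  # no valid reply: degrade gracefully

# Usage with toy stand-ins: first call fails, second succeeds.
responses = iter([Exception("service down"), "OK: order shipped"])

def client(prompt):
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

result = call_model(
    client,
    "order status?",
    validate=lambda r: r.startswith("OK"),
    fallback="Please try again later.",
)
print(result)  # → OK: order shipped
```

The point is not the specific wrapper but that each branch - retry, reject, fall back - is an explicit design decision made at scoping time rather than discovered in production.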

Project Glasswing is a reminder that the infrastructure underneath AI systems is not static. If you're building an AI product - or planning to - the question isn't whether your system will face security pressure. The question is whether it was designed to withstand it.

If you're scoping an AI project and want to understand how security fits into your build, book a 30-minute scoping call with BeGig AI Studio.

Frequently Asked Questions

What is Claude Mythos Preview and why wasn't it publicly released?

Claude Mythos Preview is Anthropic's most capable AI model to date, with exceptional ability in coding and cybersecurity tasks. Anthropic chose not to release it publicly because it can autonomously identify and exploit software vulnerabilities - including chaining multiple exploits together to compromise complex systems - at a level that could enable unprecedented cyberattacks if accessed by malicious actors. Instead, Anthropic restricted access to vetted organisations through Project Glasswing.

What is a zero-day vulnerability and why does it matter for my business?

A zero-day vulnerability is a security flaw in software that the developers don't yet know about - the name refers to the fact that they have had zero days to fix it before it can be exploited. These flaws are particularly dangerous because no patch exists when an attacker finds and uses one. Claude Mythos Preview found thousands of them across major operating systems and browsers, many of which had existed undetected for years. These vulnerabilities exist in software that most businesses run today, including cloud infrastructure, web browsers, and open-source tools.

Which companies are part of Project Glasswing?

Launch partners include Amazon Web Services (AWS), Apple, Google, Microsoft, Nvidia, Cisco, JPMorganChase, CrowdStrike, Broadcom, Palo Alto Networks, and the Linux Foundation. Approximately 40 additional organisations building or maintaining critical software infrastructure also have access. Anthropic has committed $100 million in Mythos Preview usage credits across the initiative.

Does Project Glasswing affect Indian businesses?

Yes, directly. Indian startups and enterprises building on AWS, Google Cloud, Microsoft Azure, or using open-source Linux-based infrastructure are running software that Project Glasswing is actively scanning for vulnerabilities. The patches produced through the initiative will improve the security posture of the cloud infrastructure Indian businesses depend on. Additionally, the DPDP Act 2023 places compliance obligations on Indian organisations handling personal data - and AI-related security vulnerabilities directly intersect with those obligations.

What should founders ask their AI development partner about security?

Before commissioning any AI system, ask: What data does this system access, store, and transmit? Which open-source dependencies does it use, and are they actively maintained? How does it handle failure states and adversarial inputs? Is the architecture compliant with the DPDP Act for Indian data handling requirements? What happens if the underlying model or API is updated or deprecated? A reliable AI delivery partner should have clear answers to all of these before a single line of code is written.
