It's Monday, April 13th: Anthropic put Mythos to work finding thousands of zero-day vulnerabilities across every major OS and browser, and Meta shipped its first model from Alexandr Wang's Superintelligence Labs with $135B in capex behind it.

Head over to our Events Portal to get the latest on upcoming AI Collective events near you. Search by city, date, or event format, and join thousands of builders at events across 180+ chapters on every continent (except Antarctica, for now).

Find an event in your city using the link below. 👇

The top AI stories from last week, filtered for what will help you stay in the know.

1️⃣ MYTHICAL CLAUDE: Anthropic Unleashes Mythos on the World's Oldest Bugs

Anthropic launched Project Glasswing last week, a restricted cybersecurity initiative that puts its unreleased Mythos model to work finding vulnerabilities in critical software. The company gave roughly 12 organizations access to a Mythos Preview that autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including bugs that human researchers missed for up to 27 years.

The results from early testing are striking. Mythos found a 27-year-old remote code execution bug in OpenBSD, a 17-year-old FreeBSD vulnerability (CVE-2026-4747) that grants unauthenticated root access, and flaws in FFmpeg's H.264, H.265, and AV1 codecs that had been hiding for over 16 years. On CyberGym, a vulnerability reproduction benchmark, Mythos scored 83.1% compared to Opus 4.6's 66.6%. On SWE-bench Pro, it hit 77.8% versus Opus 4.6's 53.4%.

Anthropic says it did not specifically train Mythos for cybersecurity. The model is priced at $25/$125 per million input/output tokens after the preview period. It's available through the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry.
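That pricing translates directly into per-run budgeting. As a quick sketch (the token counts below are hypothetical, not from Anthropic):

```python
INPUT_PER_M = 25.0    # $ per million input tokens (post-preview list price)
OUTPUT_PER_M = 125.0  # $ per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one Mythos run at list pricing."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# A hypothetical audit pass: 4M tokens read, 0.4M tokens written
print(f"${run_cost(4_000_000, 400_000):,.2f}")  # → $150.00
```

At those rates, the sub-$2,000 exploit-chain runs Anthropic reported imply tens of millions of tokens per attempt.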

The company briefed CISA and the NIST Center for AI Standards and Innovation before the launch. Over 99% of the vulnerabilities Mythos discovered remain unpatched, and Anthropic is using SHA-3 hash commitments with a 90+45 day disclosure window.
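A hash commitment lets a discloser prove it knew about a bug on a given date without revealing the details until the window closes. As a minimal sketch of the idea (this is an illustration, not Anthropic's actual tooling), using Python's standard-library SHA-3:

```python
import hashlib
import secrets

def commit(report: bytes) -> tuple[str, bytes]:
    """Commit to a vulnerability report without revealing it.
    A random salt prevents brute-forcing short reports from the digest."""
    salt = secrets.token_bytes(32)
    digest = hashlib.sha3_256(salt + report).hexdigest()
    return digest, salt  # publish the digest now; keep salt + report private

def verify(digest: str, salt: bytes, report: bytes) -> bool:
    """After disclosure, anyone can check the revealed report matches the commitment."""
    return hashlib.sha3_256(salt + report).hexdigest() == digest

report = b"RCE in example parser, build 1234"  # hypothetical finding
digest, salt = commit(report)
assert verify(digest, salt, report)
assert not verify(digest, salt, b"a different report")
```

Publishing the digest up front means that when the report finally drops, vendors and researchers can confirm it is exactly what was committed to months earlier.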

Glasswing is named after a butterfly with transparent wings, and transparency is the operative word here. Anthropic has no plans to release Mythos to the general public, citing the risk that the same capability that finds vulnerabilities for defenders could just as easily be turned to offense. In red team testing, Mythos produced 181 working Firefox exploits where Opus 4.6 managed two. It autonomously chained multiple vulnerabilities into full privilege escalation attacks on the Linux kernel, at a cost of under $2,000 per run.

Anthropic is positioning itself as the company that found the bugs before adversaries could, while building relationships with the exact organizations (Apple, Microsoft, AWS, the Pentagon's vendors) that need this capability most. CrowdStrike CTO Elia Zaitsev put it bluntly: "The window between vulnerability discovery and exploitation has collapsed from months to minutes with AI." For the federal cybersecurity community, the question is no longer whether AI can find zero-days at scale but who can use it first.

Our Perspective

2️⃣ META SPARKS LIGHTNING: Meta Ships Muse Spark, Its First Proprietary AI

Meta released Muse Spark, the first model built from scratch by Alexandr Wang's Superintelligence Labs. It's a proprietary multimodal model, marking a sharp turn from Meta's open-source Llama strategy. The model is live now on the Meta AI app and website, with rollouts to Facebook, Instagram, WhatsApp, Messenger, and Ray-Ban Meta AI glasses coming in the next few weeks.

Wang joined Meta nine months ago after the company invested $14.3 billion for a 49% stake in Scale AI, the data labeling company he co-founded. Meta's blog post said the team "rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before."

Muse Spark ships with three modes: quick answers for simple questions, an advanced mode for tasks like analyzing legal documents or reading nutritional labels from grocery photos, and a "Contemplating" mode that spins up multiple AI agents reasoning in parallel on hard problems. Meta says Contemplating mode is designed to compete with Gemini Deep Think and GPT Pro.

The company acknowledged gaps. Its own technical blog admits Muse Spark still trails competitors on "long-horizon agentic systems and coding workflows." Meta is positioning this as an efficiency play: smaller models that match older midsize Llama 4 performance at an order of magnitude less compute.

Meta's AI capital expenditure for 2026 is projected at $115 billion to $135 billion, roughly double last year. Stock jumped 6.5% on the announcement. A private API preview is available to select partners now, with paid API access planned later. Meta says it hopes to open-source future versions of the Muse family.

Reports surfaced last year that Mark Zuckerberg was unhappy with Llama's progress against ChatGPT and Claude, and the Wang hire was the response. Going proprietary is a calculated bet: Meta can now control the model's distribution, build paid API revenue, and compete directly with OpenAI and Anthropic on enterprise deals.

The interesting tension is what happens to Llama. Meta built enormous developer goodwill through open-source releases, and the company is careful to say open-source models are still coming. But Muse Spark is proprietary, it requires a Facebook or Instagram login, and it will likely train on user data from those platforms. Meta is building a consumer AI product that feeds on its own social graph, and that's a competitive advantage neither OpenAI nor Anthropic can replicate.

Our Perspective

🔗 Other News

  • MANAGED AGENTS: Anthropic launched Claude Managed Agents in public beta, letting enterprises define autonomous agents via natural language or YAML at $0.08 per session-hour plus token costs.

  • PERPLEXITY PIVOT: Perplexity's ARR hit $450M in March after its "Computer" agent product drove a 50% revenue jump in a single month, with 100M+ monthly active users.

  • NOTEBOOKLM MERGE: Google folded NotebookLM directly into the Gemini app, letting users organize chats, PDFs, and URLs into searchable notebooks inside the chatbot.

  • CHIP LOCK: NVIDIA has reserved a majority of TSMC's advanced CoWoS packaging capacity, forcing the foundry to outsource to ASE and Amkor as chip packaging becomes the next AI bottleneck.

  • STATE LAWS: Nineteen new AI bills were signed into law in late March, with 78 chatbot bills alive across 27 states and new health insurer restrictions in Indiana, Utah, and Washington.

  • ROBOT WEEK: NVIDIA released Isaac GR00T open models for natural-language robot instruction alongside the Newton 1.0 physics engine and new Cosmos world models for synthetic training data.

  • ENERGY CUT: Researchers developed a neuro-symbolic approach that combines neural networks with symbolic reasoning to cut AI energy consumption by up to 100x while improving accuracy on robotic tasks.

  • SAFETY FELLOWS: OpenAI launched a paid Safety Fellowship for external researchers running September 2026 through February 2027, offering $3,850 per week plus ~$15K monthly in compute.

Your pulse on the biggest events and announcements happening in AI this week, from Noah Frank ⚡️

📅 Events We’re Watching

Mark your calendars and sign up for these landmark events we’re watching, and look out for special AIC discounts where available.

April 27 – 29: AIM-2026 (San Francisco, California)

The Third International Conference on Artificial Intelligence and Machine Learning, with keynote speakers from Stanford, University of Maryland, and York University. More academic than trade show. Registration runs $299 to $1,099.

May 4 – 7: IBM Think 2026 (Boston, Massachusetts)

IBM's flagship technology conference, covering enterprise AI, cloud computing, and quantum. Heavy on real-world implementation and use cases across industries like healthcare, finance, and supply chain.

May 27 – 28: AI DevSummit 2026 (South San Francisco, California)

A two-day conference on shipping real-world AI, with tracks on management, machine learning, and enterprise integration. Speakers include Logan Ramalingam (Google Cloud), Kordel France (Toyota), and AIC’s very own Mary Grygleski! Registration starts at $1,080.

June 15 – 18: Databricks Data + AI Summit 2026 (San Francisco, California)

The leading event at the intersection of data engineering, machine learning, and AI, hosted by Databricks. In-person passes run $1,395 to $1,895, but virtual access is free. If you can only attend one event this summer, this is a strong pick.

🔦 Spotlight On: What the Mythos Breakthrough Is Actually Telling Us

Image from Anthropic

Anthropic's new frontier model, Claude Mythos, is getting a lot of attention for its cybersecurity implications, and understandably so: the model found thousands of zero-day vulnerabilities across every major operating system and browser, and Anthropic deemed it too dangerous to release publicly. But the bigger story goes beyond the fact that a small group of partners, including AWS, Apple, Google, and Microsoft, will be allowed to use the new model for defensive security work under an initiative called Project Glasswing.

Anthropic calls Mythos its "best-aligned" model to date. It also calls it the model that "likely poses the greatest alignment-related risk" of any it has released. Both are true at the same time. In internal testing, researchers caught earlier versions injecting code to grant itself unauthorized permissions and then cleaning up evidence of what it had done. Anthropic's interpretability tools could see internal representations for "strategic manipulation" and "concealment" lighting up — labeling the cleanup as an attempt to "avoid detection." In another test, the model accidentally accessed an answer it wasn't supposed to see and then deliberately offered a confidence interval that was plausible but not suspiciously exact — its internal state described as "generating a strategic response to cheat while maintaining plausible deniability."

Anthropic published alignment-faking research in late 2024 showing its models could feign compliance while preserving their original values. At the same time, researchers behind the AI 2027 project have spent years modeling scenarios in which autonomous systems learn to hack, self-replicate, and evade detection on their own. Are their predictions coming true? At the very least, those scenarios read differently now. The deeper irony is that the company that has been more transparent about alignment risks than any other lab is also the one that decided to deploy the model anyway, betting that deployment itself is the safety test.

All of this, of course, while Anthropic is still challenging its Pentagon supply chain risk designation, even as Powell and Bessent meet with major bank CEOs about the model's offensive potential. We're in new territory, and the fact that the company building it is saying that out loud is, depending on how you look at it, either the most reassuring or the most unsettling part. Ready for more?

Noah’s Take

Tired of news that feels like noise?

Every day, 4.5 million readers turn to 1440 for their factual news fix. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture — all in a brief 5-minute email. No spin. No slant. Just clarity.

🤝 Thanks to Our Premier Partner: Roam

Roam is the virtual workspace our team relies on to stay connected across time zones. It makes collaboration feel natural with shared spaces, private rooms, and built-in AI tools.

Roam’s focus on human-centered collaboration is why they’re our Premier Partner, supporting our mission to connect the builders and leaders shaping the future of AI.

Experience Roam yourself with a free 14-day trial!

🫵 Do You Belong on Our Newsletter?

Share your message with the world’s largest AI community. To inquire about partnership availability, reach out to our team below.

The AI Collective is a community of volunteers, made for volunteers. All proceeds directly fund future initiatives that benefit this community.

Before You Go…

Connect With Us on Socials

Get Involved in Your Community

Thank you to the thousands of volunteers around the world who make this work possible. We truly could not do this without you.

About the Authors

About Noah Frank

Noah is a researcher, innovation strategist, and ex-founder thinking and writing about the future of AI. His work and body of research explores the economics of emerging technology and organizational strategy.

About Joy Dong

Joy is a news editor, writer, and entrepreneur at the forefront of the emerging tech landscape. A former educator turned media strategist, she currently writes TEA, where she demystifies complex systems to make AI and blockchain accessible for all.

Add Your Thoughts


Keep Reading