Daily Roundup

Anthropic AI Updates: March 28, 2026

1. Data Leak Reveals Claude Mythos, a Model Anthropic Calls a “Step Change” in Capabilities

Anthropic accidentally exposed internal documents through a misconfigured content management system, revealing details of an unreleased model called Claude Mythos (internally codenamed “Capybara”). Leaked draft blog posts describe the model as the company’s most capable to date, with significant advances in coding, academic reasoning, and cybersecurity. Anthropic attributed the breach to “human error” and confirmed it is testing a general-purpose model with “meaningful advances.” The company warned that the model is “currently far ahead of any other AI model in cyber capabilities” and could accelerate the pace at which vulnerabilities are discovered and exploited. Source

2. Federal Judge Blocks Pentagon’s Anthropic Blacklisting

U.S. District Judge Rita Lin temporarily blocked the Pentagon’s designation of Anthropic as a national security supply-chain risk, a move that could have cost the company billions of dollars in lost contracts. Defense Secretary Pete Hegseth had labeled Anthropic a threat after the company refused to allow the military to use Claude for surveillance or autonomous weapons. In a 43-page decision, Judge Lin found the government’s actions appeared retaliatory, writing that “punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.” The ruling takes effect after seven days, allowing time for appeal. Source