The Spokesman-Review Newspaper
Spokane, Washington  Est. May 19, 1883

Anthropic accidentally exposes system behind Claude Code

The Anthropic website on a laptop arranged in New York, US, on Tuesday, Aug. 15, 2023. (Gabby Jones/Bloomberg)
By Shirin Ghaffary and Mark Anderson, Bloomberg

Anthropic PBC inadvertently released internal source code behind its popular artificial intelligence-powered Claude coding assistant, raising questions about the security of an AI model developer that has built its brand on prioritizing safety.

“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” Anthropic said in an emailed statement late Tuesday. “This was a release packaging issue caused by human error, not a security breach.”

The accidental release marked Anthropic’s second security slip-up in a matter of days, compromising approximately 1,900 files and 512,000 lines of code related to Claude Code – an agentic coding tool that runs directly inside developer environments. Last week, Fortune separately reported that Anthropic had been storing thousands of internal files on a publicly accessible system, including a draft blog post that detailed an upcoming model known internally as both “Mythos” and “Capybara.”

The exposures could not come at a worse time for Anthropic, which was declared a supply chain risk by the U.S. government earlier this year and is fighting the designation in court. The company has warned that the label could cost it billions in lost revenue.

The latest accidental release involving Claude Code first came to light in a post on the social media platform X that purported to share a link to the code and garnered more than 30 million views. The leak has touched off thousands of posts online by people saying they have scoured the code. Some have claimed they’ve unearthed yet-to-be-released features, as well as quirks in the existing Claude Code system.

Several experts raised concerns about potential security vulnerabilities. “Attackers can now study and fuzz exactly how data flows through Claude Code’s four-stage context management pipeline and craft payloads designed to survive compaction, effectively persisting a backdoor across an arbitrarily long session,” AI cybersecurity firm Straiker said in a blog post.

For its part, Anthropic said it is “rolling out measures to prevent this from happening again.”

In February, Anthropic raised $30 billion in a massive funding round that valued the company at $380 billion, including the money raised, roughly doubling its prior valuation. The company landed itself in the spotlight that same month for releasing a series of products that sent the shares of everything from legal services companies to software and cybersecurity firms plunging on fears of widespread disruption.