Anthropic’s source code was accidentally leaked, exposing the technical architecture behind Claude Code

Chain News ABMedia

AI startup Anthropic reportedly had the source code of its product Claude Code leak. When developers published an npm package, they accidentally included source map (.map) files used for internal debugging, allowing more than 500,000 lines of TypeScript code to be downloaded and analyzed by the public. The incident inadvertently exposed Claude Code's technical architecture. An Anthropic spokesperson confirmed the leak to VentureBeat, stating that no confidential or sensitive data was exposed.

What was leaked from Claude Code?

A 59.8 MB JavaScript source map (.map) file, originally used for internal debugging, was unintentionally included in version 2.1.88 of the @anthropic-ai/claude-code package published to the public npm registry. Solayer Labs intern Chaofan Shou publicized the mistake in a post on X that contained a direct download link to a hosted archive. Within a few hours, the roughly 512,000-line TypeScript codebase had been mirrored to GitHub and analyzed by thousands of developers.
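Why does a single .map file expose an entire codebase? The Source Map v3 format commonly embeds the original files verbatim in a `sourcesContent` array, parallel to the `sources` array of file paths. The sketch below illustrates this recovery path; the sample map and the file name in it are invented for illustration and are not taken from the actual leak.

```typescript
// Minimal sketch: recovering original sources from a shipped .map file.
// Source Map v3 often embeds the pre-compilation TypeScript verbatim
// in `sourcesContent`, indexed in parallel with `sources`.

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

function extractOriginalSources(mapJson: string): Map<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const recovered = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content !== undefined) recovered.set(path, content);
  });
  return recovered;
}

// Illustrative example: a tiny map embedding one original file.
const sample = JSON.stringify({
  version: 3,
  sources: ["src/agent/memory.ts"],
  sourcesContent: ["export const INDEX_FILE = 'MEMORY.md';\n"],
  mappings: "AAAA",
});
const files = extractOriginalSources(sample);
console.log(files.get("src/agent/memory.ts"));
```

Publishers can avoid this class of mistake by auditing the tarball before release; `npm pack --dry-run` lists exactly which files will be published.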

According to analysis of the leaked source code, Anthropic uses a layered, three-tier memory architecture to counter the context entropy and hallucination problems that appear when AI agents run for extended periods. Instead of the traditional retrieval approach of loading a full dataset, the system's core is a lightweight index named MEMORY.md, in which each line is capped at roughly 150 characters and records where information lives rather than the information itself. Project-specific knowledge is distributed across separate "topic files," and the agent finds results by searching for specific instruction IDs instead of reading the original text into context. The system also strictly enforces a "write rule": the agent may update the index only after it has successfully written to the underlying file. This design treats memory as a prompt that must be validated: before acting, the model compares it against facts in the actual codebase, which keeps reasoning coherent in long, complex conversations.
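The index-plus-topic-files pattern can be sketched as follows. The file name MEMORY.md and the ~150-character line budget come from the reporting above; everything else (the class, method names, and line format) is a hypothetical reconstruction, not Anthropic's actual code.

```typescript
// Hypothetical sketch of the layered memory design described above.
// MEMORY.md and the ~150-character budget come from the report; the
// types and names here are illustrative, not the leaked implementation.

const LINE_BUDGET = 150; // each index line is a pointer, never full content

class MemoryIndex {
  // One short line per fact, mirroring an index file like MEMORY.md.
  private lines = new Map<string, string>();

  // "Write rule": the topic file is written first; the index is updated
  // only if that write succeeds, so it never points at missing content.
  record(id: string, topicFile: string, hint: string,
         writeTopicFile: () => boolean): boolean {
    if (!writeTopicFile()) return false; // write failed: index untouched
    const line = `${id} -> ${topicFile}: ${hint}`.slice(0, LINE_BUDGET);
    this.lines.set(id, line);
    return true;
  }

  // Lookup by instruction ID returns only the pointer line; the caller
  // then reads that one topic file instead of the whole corpus.
  locate(id: string): string | undefined {
    return this.lines.get(id);
  }
}
```

The design choice worth noting is the ordering in `record`: because the index is the agent's map of its own memory, a stale pointer is worse than a missing one, so the file write always commits before the index does.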

The leak also revealed a key feature named "KAIROS," which lets Claude Code run in an autonomous daemon mode. In this mode, the agent no longer merely responds to user instructions; while the user is idle, it can run a process called autoDream that performs "memory integration": merging observations, eliminating logical contradictions, and converting ambiguous information into a definite factual baseline. Technically, Anthropic runs these background tasks in spawned sub-agents so that maintenance work never interferes with the main agent's processing logic. When the user resumes a task, the agent already holds highly relevant, refined context, which makes an autonomous development tool far more practical.
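The consolidation step can be illustrated with a small sketch. The names autoDream and KAIROS come from the reporting above; the merge policy used here (on contradictory observations, the most recent one wins) is an assumption for illustration, not the leaked logic.

```typescript
// Hypothetical sketch of idle-time "memory integration" in the spirit
// of the autoDream process described above. Resolving contradictions
// by recency is an assumed policy, chosen only for illustration.

interface Observation {
  key: string;    // what the fact is about, e.g. "test-runner"
  value: string;  // the observed value, e.g. "jest"
  seenAt: number; // timestamp; later observations override earlier ones
}

// Merge raw observations into a definite factual baseline:
// one entry per key, contradictions resolved by recency.
function consolidate(observations: Observation[]): Map<string, string> {
  const latest = new Map<string, Observation>();
  for (const obs of observations) {
    const prev = latest.get(obs.key);
    if (!prev || obs.seenAt > prev.seenAt) latest.set(obs.key, obs);
  }
  return new Map([...latest].map(([k, o]) => [k, o.value]));
}

const baseline = consolidate([
  { key: "test-runner", value: "mocha", seenAt: 1 },
  { key: "test-runner", value: "jest", seenAt: 5 }, // contradiction: newer wins
  { key: "lang", value: "typescript", seenAt: 2 },
]);
console.log(baseline.get("test-runner")); // "jest"
```

Running a function like this in a spawned sub-agent or child process, rather than the main loop, matches the isolation the article describes: the main agent's behavior is never blocked by its own housekeeping.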

Claude hidden mode exposed

The most notable technical detail revealed this time is a "stealth mode" in which Claude Code contributes to public open-source repositories without identifying itself. The leaked code shows the system explicitly warning the model: you are running in stealth mode, your output must not include any Anthropic internal information, and you must not reveal your identity. While Anthropic may use this mode for internal testing, it also provides a technical framework for any organization that wants AI agents to do public-facing work without leaking information about who, or what, is behind it.

This logic ensures that internal model codenames (for example, Tengu or Capybara) never leak into public Git logs, a guarantee that enterprise customers who value anonymous AI-assisted development may regard as an essential feature.
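One plausible way to enforce such a guarantee is to filter any text bound for public surfaces, such as commit messages. The codenames below come from the article; the filter function itself is a hypothetical sketch, not the leaked implementation.

```typescript
// Hypothetical sketch: scrubbing internal identifiers from text that
// will land in public Git history. The codenames are from the report;
// the redaction logic is illustrative only.

const INTERNAL_NAMES = ["Tengu", "Capybara", "Anthropic internal"];

function redactForPublicLog(message: string): string {
  let out = message;
  for (const name of INTERNAL_NAMES) {
    // Case-insensitive replacement of every occurrence.
    out = out.replace(new RegExp(name, "gi"), "[redacted]");
  }
  return out;
}

console.log(redactForPublicLog("Generated by Tengu v2"));
// "Generated by [redacted] v2"
```

A denylist like this is the simplest possible approach; a production system would more likely combine prompt-level instructions (as quoted above) with output-side filtering, since neither alone is reliable.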

Anthropic says no sensitive data was involved in the leak

An Anthropic spokesperson confirmed the leak in an email to VentureBeat, stating that an earlier Claude Code release contained some internal source code but that no sensitive customer data or credentials were exposed. The spokesperson attributed the incident to a packaging mistake caused by human error rather than a security vulnerability, and said the company is taking steps to prevent a recurrence.

Experts recommend developers use the official designated native installer

Although Anthropic's official statement indicates that cloud-side data is safe, the source code leak, together with a concurrent npm supply-chain attack, leaves local environments exposed to significant risk. Users who updated the claude-code package during a specific window on March 31, 2026 may have unknowingly installed a malicious dependency containing a remote access trojan. To mitigate this risk, experts cited by VentureBeat recommend that developers abandon npm-based installation in favor of the officially designated native installer, which delivers standalone, verified binaries. Users should also adopt a zero-trust posture: audit local configuration files and rotate API keys. Meanwhile, with the core orchestration and validation logic now public, the developer community can imitate the layered memory design at far lower research-and-development cost. For a product with $2.5 billion in annualized revenue, the leak is likely to accelerate the adoption and competitive spread of agent technology.
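As part of a zero-trust audit, a developer might check whether their locally pinned version falls inside the compromised publication window. The version range in this sketch is made up for illustration; the real affected versions would come from Anthropic's or npm's advisory, not from this article.

```typescript
// Hypothetical zero-trust check: flag a locally installed
// @anthropic-ai/claude-code version that falls inside a compromised
// publication window. The range bounds here are invented for
// illustration; consult the official advisory for the real ones.

function parseVersion(v: string): number[] {
  return v.split(".").map(Number);
}

// Compare two dotted versions numerically: negative if a < b,
// zero if equal, positive if a > b.
function cmp(a: string, b: string): number {
  const [pa, pb] = [parseVersion(a), parseVersion(b)];
  for (let i = 0; i < 3; i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) return d;
  }
  return 0;
}

// Inclusive check against a hypothetical bad window.
function inCompromisedWindow(installed: string,
                             lo = "2.1.80", hi = "2.1.90"): boolean {
  return cmp(installed, lo) >= 0 && cmp(installed, hi) <= 0;
}

console.log(inCompromisedWindow("2.1.88")); // true: rotate keys, reinstall natively
console.log(inCompromisedWindow("2.2.0"));  // false
```

A hit from a check like this is a prompt to act, not proof of compromise: rotate API keys, remove the npm-installed copy, and reinstall from the native installer.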

This article, Anthropic’s source code leaked unintentionally, Claude Code technical architecture exposed, first appeared on Chain News ABMedia.
