Claude Code source code leak full record: The butterfly effect triggered by a .map file
Written by: Claude
I. Origin
In the early hours of March 31, 2026, a tweet set off a major uproar in the developer community.
Chaofan Shou, an intern at a blockchain security company, found that the official Anthropic npm package included a source map file, exposing the complete source code of Claude Code to the public. He immediately shared this discovery on X, along with a direct download link.
The post spread through the developer community like wildfire. Within hours, more than 512k lines of TypeScript code had been mirrored to GitHub, and thousands of developers were analyzing it in real time.
This was Anthropic's second major information leak in less than a week.
Just five days earlier, on March 26, a CMS misconfiguration at Anthropic exposed nearly 3,000 internal files, including draft blog posts for the as-yet-unreleased "Claude Mythos" model.
II. How did the leak happen?
The technical cause of this incident is almost laughable: an npm package incorrectly shipped with a source map (.map) file.
A source map exists to map compressed, obfuscated production code back to the original source, making it easier to locate error line numbers during debugging. This particular .map file contained a link to a zip archive stored in Anthropic's own Cloudflare R2 bucket.
Shou and other developers simply downloaded that zip file; no hacking techniques were needed. The file was just sitting there, completely public.
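To see how little effort "exploiting" this takes, here is a minimal sketch (hypothetical structure and a toy payload, not Anthropic's actual file) of reading a Source Map v3 file, whose optional `sourcesContent` array can embed the complete text of every original source file:

```typescript
// A Source Map v3 file is plain JSON. When the optional `sourcesContent`
// array is present, it embeds the full text of every original source file.
interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
}

// Return a mapping of original file path -> original file content.
function extractSources(mapJson: string): Record<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const recovered: Record<string, string> = {};
  map.sources.forEach((src, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) recovered[src] = content; // some maps omit sources
  });
  return recovered;
}

// Toy payload for illustration only:
const demo = JSON.stringify({
  version: 3,
  sources: ["src/index.ts"],
  sourcesContent: ["export const secret = 'original source';"],
});
console.log(extractSources(demo)["src/index.ts"]);
// prints: export const secret = 'original source';
```

In the real incident the map additionally pointed at a zip archive in an R2 bucket, so even this step was unnecessary; but any published .map that embeds `sourcesContent` is itself a full source leak.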
The affected version was @anthropic-ai/claude-code v2.1.88, which came with a 59.8MB JavaScript source map file.
In its statement responding to The Register, Anthropic admitted: “An earlier Claude Code version also experienced a similar source leak in February 2025.” This means the same mistake happened twice within 13 months.
Ironically, Claude Code has an internal system called "Undercover Mode," designed specifically to keep Anthropic's internal code names from leaking into git commit history… and then the engineers shipped the entire source code inside a .map file.
Another possible driver was the toolchain itself: Anthropic acquired Bun toward the end of the year, and Claude Code is built on Bun. On March 11, 2026, a bug report in Bun's issue tracker (#28001) noted that Bun would still generate and emit source maps in production mode, contradicting the official documentation. That issue remains open to this day.
In response, Anthropic’s official reply was brief and restrained: “No user data or credentials were involved or leaked. This was a human error during the release packaging process, not a security vulnerability. We are moving forward with measures to prevent this kind of incident from happening again.”
III. What was leaked?
Code scale
The leak covered about 1,900 files and more than 500k lines of code. This is not model weights—it is the engineering implementation of Claude Code’s entire “software layer,” including core architectures such as the tool-calling framework, multi-agent orchestration, permission systems, memory systems, and more.
Unreleased feature roadmap
This is the most strategically valuable part of the leak.
KAIROS autonomous guard process: this feature code name, mentioned more than 150 times, comes from the ancient Greek word for "the opportune moment," and it marks a fundamental shift of Claude Code toward a persistent background agent. KAIROS includes a process named autoDream that performs "memory consolidation" while the user is idle: merging fragmented observations, eliminating logical contradictions, and turning vague insights into deterministic facts. When the user returns, the agent's context is already cleaned up and highly relevant.
Internal model code names and performance data: The leaked content confirms that Capybara is an internal code name for the Claude 4.6 variant; Fennec corresponds to Opus 4.6; and the unreleased Numbat is still under testing. Code comments also revealed that Capybara has a 29–30% hallucination rate, a regression from v4's 16.7%.
Anti-Distillation mechanism: The code contains a feature flag named ANTI_DISTILLATION_CC. When enabled, Claude Code injects fake tool definitions into API requests, with the goal of polluting the API traffic data that competitors might use for model training.
Beta API feature list: The constants/betas.ts file reveals every beta feature Claude Code negotiates with the API, including a 1M-token context window (context-1m-2025-08-07), AFK mode (afk-mode-2026-01-31), task budget management (task-budgets-2026-03-13), and a range of other capabilities not yet made public.
An embedded Pokémon-style virtual companion system: The code even hides a complete virtual companion system (Buddy), including species rarity, shiny variants, procedurally generated attributes, and a “soul description” written by Claude at the time of first hatching. Companion types are determined by a deterministic pseudo-random number generator based on a hash of the user ID—so the same user always gets the same companion.
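The deterministic-companion idea is easy to sketch. The code below is a hypothetical reconstruction, not the leaked Buddy implementation; the species names, odds, and PRNG choice are all illustrative. The user ID is hashed to seed a small PRNG, so the same ID always produces the same attributes:

```typescript
// Hypothetical sketch of ID-deterministic companion generation; the
// species names, rarity odds, and PRNG choice are illustrative.
import { createHash } from "node:crypto";

const SPECIES = ["capybara", "fennec", "numbat", "axolotl"];

// Derive a stable 32-bit seed from a hash of the user ID.
function seedFromUserId(userId: string): number {
  return createHash("sha256").update(userId).digest().readUInt32BE(0);
}

// mulberry32: a tiny deterministic PRNG over a 32-bit seed.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function companionFor(userId: string): { species: string; shiny: boolean } {
  const rand = mulberry32(seedFromUserId(userId));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    shiny: rand() < 1 / 512, // rare "shiny" variant, Pokémon-style odds
  };
}

// Same user ID, same companion, every time.
console.log(companionFor("user-42"));
```

Seeding from a stable hash rather than storing state means the "companion" needs no database row at all: it can be recomputed identically on any machine.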
IV. Concurrent supply-chain attacks
This incident did not happen in isolation. During the same time window as the source leak, the axios package on npm suffered a separate supply-chain attack.
Between 00:21 and 03:29 UTC on March 31, 2026, installing or updating Claude Code via npm could pull in a malicious axios version (1.14.1 or 0.30.4) containing a remote access trojan (RAT).
Anthropic advised affected developers to treat the host as fully compromised, rotate all keys, and reinstall the operating system.
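One low-effort detection step, sketched here with a hypothetical lockfile fragment, is scanning the npm lockfile for dependencies pinned to the known-compromised versions:

```typescript
// Hypothetical sketch: scan a parsed npm lockfile (v2/v3 `packages` shape)
// for dependencies pinned to known-compromised versions.
const COMPROMISED: Record<string, Set<string>> = {
  axios: new Set(["1.14.1", "0.30.4"]), // versions named in the advisory
};

interface Lockfile {
  packages: Record<string, { version?: string }>;
}

function findCompromised(lock: Lockfile): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages)) {
    // Lockfile keys look like "node_modules/axios"; take the package name.
    const name = path.split("node_modules/").pop() ?? "";
    if (meta.version && COMPROMISED[name]?.has(meta.version)) {
      hits.push(`${name}@${meta.version}`);
    }
  }
  return hits;
}

// Minimal lockfile fragment for illustration:
const demoLock: Lockfile = {
  packages: {
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/ms": { version: "2.1.3" },
  },
};
console.log(findCompromised(demoLock)); // flags axios@1.14.1
```

A hit here is only a starting point: per the advisory above, an affected host should still be treated as fully compromised.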
The temporal overlap between the two incidents made the situation even more chaotic and dangerous.
V. Impact on the industry
Direct damage to Anthropic
For a company with annualized revenue of $19 billion that is in a period of rapid growth, this leak is not just a security lapse—it is a loss of strategic intellectual property.
At least some of Claude Code’s capabilities do not come from the underlying large language model itself, but from the “framework” software built around the model—it tells the model how to use tools, and provides important guardrails and instructions to standardize the model’s behavior.
Those guardrails and instructions are now completely visible to competitors.
A warning to the entire AI Agent tool ecosystem
This leak won’t sink Anthropic, but it gives every competitor a free engineering textbook—how to build production-grade AI programming agents, and which tool directions are worth focusing on.
The true value of the leaked content lies not in the code itself, but in the product roadmap revealed by the feature flags. KAIROS and the anti-distillation mechanism are strategic details that competitors can now anticipate and react to early. Code can be refactored; a strategic surprise, once leaked, cannot be taken back.
VI. Deep takeaways for Agent Coding
This leak is a mirror reflecting several core propositions in today’s AI Agent engineering:
1. The boundaries of an Agent’s capabilities are largely determined by the “framework layer,” not the model itself
The exposure of Claude Code's 500k lines of code reveals a fact that matters to the entire industry: with the same underlying model, different tool orchestration frameworks, memory management mechanisms, and permission systems produce entirely different Agent capabilities. "Who has the strongest model" is no longer the only competitive dimension; "whose framework engineering is more refined" is just as crucial.
2. Long-range autonomy is the next core battlefield
The existence of the KAIROS guard process indicates that the next phase of industry competition will focus on “enabling the Agent to keep working effectively without human supervision.” Background memory consolidation, cross-session knowledge transfer, and autonomous reasoning during idle time—once these capabilities mature, they will fundamentally change the basic mode of collaboration between Agents and humans.
3. Anti-distillation and intellectual property protection will become new foundational topics in AI engineering
Anthropic implemented an anti-distillation mechanism at the code level, signaling that a new engineering area is taking shape: how to prevent one’s own AI systems from being used by competitors for training-data harvesting. This will not only be a technical issue, but will evolve into a new battleground for legal and commercial games.
4. Supply-chain security is the Achilles’ heel of AI tools
When AI programming tools are distributed through public software package managers like npm, they face supply-chain attack risks just like other open-source software. The special risk for AI tools is that once backdoored, attackers don’t just gain the ability to execute code—they gain deep penetration into the entire development workflow.
5. The more complex the system, the more it needs automated release guards
"A misconfigured .npmignore or files field in package.json can expose everything." For any team building AI Agent products, this lesson doesn't have to come at such a high price: automated review of release contents in the CI/CD pipeline should be standard practice, not a fence mended only after the sheep have bolted.
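A minimal version of such a guard, sketched here assuming an npm-based release (the forbidden-pattern list is illustrative), uses `npm pack --dry-run --json` to list exactly what would be published and fails the build if a source map slips in:

```typescript
// Sketch of a pre-publish guard. `npm pack --dry-run --json` reports the
// exact file list npm would publish, honoring .npmignore and `files`.
import { execFileSync } from "node:child_process";

interface PackReportEntry {
  files: { path: string }[];
}

// Ask npm which files the published tarball would contain.
function shippedFiles(): string[] {
  const json = execFileSync("npm", ["pack", "--dry-run", "--json"], {
    encoding: "utf8",
  });
  const report: PackReportEntry[] = JSON.parse(json);
  return report[0].files.map((f) => f.path);
}

// Flag artifacts that should never ship (pattern list is illustrative).
function forbiddenArtifacts(files: string[]): string[] {
  return files.filter((f) => f.endsWith(".map"));
}

// Demo on a sample file list; in CI you would run
// forbiddenArtifacts(shippedFiles()) and exit non-zero on any hit.
const demo = ["package.json", "dist/cli.js", "dist/cli.js.map"];
console.log(forbiddenArtifacts(demo)); // only the .map file is flagged
```

Because the check interrogates npm's own packing logic rather than re-implementing the ignore rules, it catches exactly the class of mistake that caused this incident: the build output and the publish manifest disagreeing about what ships.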
Epilogue
Today is April 1, 2026—April Fools’ Day. But this isn’t a joke.
Anthropic made the same mistake twice within thirteen months. The source code has already been mirrored globally, and DMCA takedown requests can’t keep up with the speed of forks. That product roadmap that should have been hidden deep in an internal network is now a reference for everyone.
For Anthropic, this is a painful lesson.
For the entire industry, this has been an unexpected moment of transparency—so we can see exactly how today’s leading AI programming Agents are built line by line.