Introduction: One Missing Line Sent 510,000 Lines to the World

On March 31, 2026, over 510,000 lines of source code for Anthropic’s AI coding agent “Claude Code” became publicly downloadable via the npm registry.

The next day was April 1st — April Fools’ Day. Most people who saw the news assumed it was a joke. It wasn’t.

No hack. No insider leak. Just one missing line in .npmignore.

That one omission packaged 1,900 files and 510,000+ lines of TypeScript into a publicly distributed npm package. And inside was more than implementation code: blueprints for the future, including unreleased projects “KAIROS” and “ULTRAPLAN”, an “Undercover Mode” designed to hide AI traces, and internal codenames for next-generation models.

Calling this “Anthropic’s careless mistake” misses the bigger story. What leaked wasn’t just Claude Code’s internals. It revealed where Anthropic sees the next battleground in AI — and it reset the starting line for the entire AI coding tool market.

📌 3-Line Summary
  • A .map file misconfiguration caused Claude Code v2.1.88 to ship 510,000 lines of source code on npm. The AI model itself (weights, training data) was not included.
  • What leaked was the CLI’s “driving manual” — system prompt design, tool selection logic, and agent instruction architecture. Unreleased projects KAIROS and ULTRAPLAN were exposed.
  • Competitors can now skip months of trial-and-error on agent architecture. The competitive starting line in AI coding tools has been redrawn.

1. What Is a .map File — Why Did One Missing Line Cause This?

JavaScript Minification and Source Maps

Code shipped as a web app or npm package is typically minified before release — variable names are shortened to a, b, c, whitespace is removed, and the result is unreadable to humans. This reduces file size and obscures intent.

But minified code makes debugging a nightmare. “Error at a, line 42” tells you nothing about the original source.

Enter source maps (.map files) — a mapping between minified code and the original source. Developers use them in browser dev tools to see which line in the original file caused an error.

// Production build (minified)
function a(b){return b.map(c=>c*2)}

// With source map: original TypeScript is visible
function doubleValues(numbers: number[]): number[] {
  return numbers.map(n => n * 2);
}
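The link between the two files lives in a comment at the end of the minified bundle; browser dev tools and debuggers fetch the .map file from this URL. (The filename cli.js.map below is illustrative, not the actual Claude Code artifact.)

```js
// Last line of a minified bundle: tells the debugger where to find the map.
// Ship this file with the .map alongside it, and the original source ships too.
//# sourceMappingURL=cli.js.map
```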

Source maps are development-only assets. They must never be included in production packages — this is a fundamental rule of web development.

Why Was the Rule Broken This Time?

Claude Code is built with Bun, a JavaScript runtime whose build step generates source maps by default. Meanwhile, the package.json files configuration (or the .npmignore file) was missing the single line needed to exclude .map files from the published package.

The result: source maps that should have stayed in the development environment were bundled into the npm release.

💡 One Line Would Have Prevented This
# Adding one line to .npmignore would have prevented the leak
*.map

# Or this command before npm publish would have caught it
npm pack --dry-run 2>&1 | grep ".map"

Not a sophisticated attack — a config review omission. That’s exactly why it stings.
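There is a structural lesson here: a denylist like .npmignore fails open, because forgetting one pattern means the file ships. An allowlist via the package.json files field fails closed. A minimal sketch (the package name and dist path are illustrative; the files field and its "!" negation patterns are standard npm behavior):

```json
{
  "name": "@example/my-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

With this shape, a file is only published if it is explicitly listed, so a new build artifact type cannot leak by default.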

The source maps in this case were configured with sourcesContent enabled — meaning they didn’t just map line numbers, they embedded the original TypeScript source code directly. They also contained references to Anthropic’s Cloudflare R2 storage, allowing anyone to download a ZIP and extract 510,000+ unminified lines of code.
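It is worth seeing how low the bar for "extraction" is. A source map is plain JSON, and when sourcesContent is enabled, the original files are embedded verbatim. The map below is a fabricated example built around the doubleValues snippet above, not the leaked file; recovering the source is a one-line JSON read:

```typescript
// A source map is plain JSON; sourcesContent embeds the original files verbatim.
// This map object is a fabricated example for illustration.
const exampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["src/double.ts"],
  sourcesContent: [
    "function doubleValues(numbers: number[]): number[] {\n  return numbers.map(n => n * 2);\n}\n",
  ],
  mappings: "AAAA",
});

// "Extraction" is just parsing the JSON and reading the array back out.
const map = JSON.parse(exampleMap);
const recovered: string = map.sourcesContent[0];
console.log(map.sources[0]);                      // src/double.ts
console.log(recovered.includes("doubleValues"));  // true: original identifiers survive
```

No decompilation, no reverse engineering: the original file names, comments, and identifiers come back exactly as written.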

This was also the second such incident — a similar source map leak occurred in early versions of Claude Code in February 2025.


2. What Leaked and What Didn’t — “The Engine Survived; the Driving Manual Got Out”

| Item | Leaked? | User Risk |
| --- | --- | --- |
| AI model weights (neural network) | No | Low (safe) |
| Training data | No | Low (safe) |
| Customer personal data / credentials | No | Low (safe) |
| CLI control logic / system prompt design | Yes | Low (competitive loss) |
| Unreleased feature designs (KAIROS, ULTRAPLAN, etc.) | Yes | Low (raised expectations) |
| Dependency library (axios) | Suspected contamination | High (check your version) |

What Didn’t Leak (The AI’s “Intelligence”)

The model weights, training data, and customer data were not included. The reason Claude is smart — the neural network parameters — is intact.

But “intact” requires nuance. What leaked was the technique for driving that model with precision — the system prompt architecture, tool selection logic, the complete structure of instructions to the agent. The engine survived, but Anthropic’s hard-won “precision driving manual” — every trial and error that taught them how to extract the best performance from the model — is now readable by anyone.

What Leaked (Client-Side CLI Only)

The leak covers @anthropic-ai/claude-code, the CLI tool that runs in your terminal — the client-side application only.

In other words: competitors can’t copy the engine’s performance, but the most efficient way to drive it is no longer a secret.

Specifically:

| Category | Details |
| --- | --- |
| Core framework | Bun + React + Ink terminal UI, API client logic, OAuth 2.0 auth flow, ~40 agent tools |
| Control engine | QueryEngine.ts (~46,000 lines): LLM control process, token consumption tracking, streaming |
| Permission model | Tool.ts (~29,000 lines): filesystem and shell permission architecture |
| Memory system | Git status auto-collection, hierarchical CLAUDE.md system (company → project → personal), context entropy prevention |
| Security logic | Trust boundary implementation, safety constraint reasoning layer |
| Unreleased features | KAIROS, ULTRAPLAN, Undercover Mode, and 107 feature flags |
| Internal codenames | Next-gen model names (Capybara v8, Fennec, Numbat, etc.) and performance data |

3. Timeline — What Happened and When

| Date/Time (JST, approx.) | Event |
| --- | --- |
| March 31, early morning | Claude Code v2.1.88 published to npm with source maps bundled |
| March 31, morning–midday | Security researcher Chaofan Shou (@Fried_rice) reports the discovery on X and shares download links |
| March 31, within hours | Multiple mirror repositories appear on GitHub; tens of thousands of forks |
| March 31, same day | The npm package axios suffers a separate supply chain attack, compounding the confusion |
| April 1, morning | Anthropic releases an official statement, publishes v2.1.89, and removes v2.1.88 from npm |
| ~April 2 | Zscaler ThreatLabz identifies GitHub repositories distributing malware under the guise of “leaked Claude Code” |
| April 2 onward | Anthropic sends DMCA takedown notices to GitHub repositories |

Anthropic’s official statement:

“No sensitive customer data or credentials were involved or exposed. This was not a security breach; it was a human error in the release package. We are implementing measures to prevent recurrence.”


4. The “Blueprint for the Future” — KAIROS, ULTRAPLAN, and Undercover Mode

This is where the story becomes more than an accidental release. The leaked code contained 107 feature flags that made Anthropic’s next moves readable to anyone who looked.

KAIROS — The Always-On Agent That “Seizes the Right Moment”

The name is Greek for “the opportune moment.” Current Claude Code is reactive: it acts when you tell it to. KAIROS is the opposite.

  • Watches file changes, detects compile errors, and autonomously proposes and executes fixes
  • Stays resident in the background even when you’re not actively working
  • Receives GitHub events and refreshes on 5-minute intervals
  • Runs autoDream (see below) during idle time to organize memory

“Keep working on this while I’m away” becomes literally possible. The relationship between engineer and AI coding assistant shifts from “calling a tool” to “working alongside a partner.”

ULTRAPLAN — 30 Minutes of Cloud Thinking for Complex Plans

Offloads complex design tasks from the local CLI to a high-capability cloud model (Opus 4.6-class), with up to 30 minutes of dedicated reasoning time. Users monitor and approve progress via a browser.

Designed for “long, complex tasks where getting the plan wrong is expensive.” Uses asynchronous cloud processing to break through local compute limitations.

KAIROS_DREAM (autoDream) — AI That Organizes Its Own Memory

Fires automatically every 24 hours, extracting useful patterns from the last 5+ sessions of activity history. Reconciles conflicting older memories and compresses the index to under 25KB.

The foundation for an AI that doesn’t “lose context” on long-term projects.
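No implementation details of autoDream are public beyond what the leak reports describe, but the trigger logic above (fire at most once per 24 hours, and only with enough recent activity to learn from) is easy to picture. A hypothetical sketch; the function name and structure are invented, and only the thresholds mirror the article:

```typescript
// Hypothetical trigger check for a nightly memory-consolidation pass.
// Thresholds mirror the article: 24-hour interval, at least 5 recent sessions.
const DREAM_INTERVAL_MS = 24 * 60 * 60 * 1000;
const MIN_SESSIONS = 5;

function shouldRunDream(
  lastRunMs: number,      // epoch ms of the previous dream pass
  nowMs: number,          // current epoch ms
  recentSessions: number, // sessions recorded since the last pass
): boolean {
  return nowMs - lastRunMs >= DREAM_INTERVAL_MS && recentSessions >= MIN_SESSIONS;
}

const dayAgo = Date.now() - DREAM_INTERVAL_MS;
console.log(shouldRunDream(dayAgo, Date.now(), 6));     // true
console.log(shouldRunDream(Date.now(), Date.now(), 6)); // false: too soon
```

The interesting part is not the check itself but what runs behind it: pattern extraction and conflict reconciliation over session history, compressed to a small index.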

Undercover Mode — The “Stealth Mode”

This was the most ethically controversial discovery.

The mode dynamically injects constraints into system prompts to prevent Claude from leaking its own internal information (project codenames like “Tengu”, internal Slack channel names, etc.) into external public repositories or user responses.
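Mechanically, “dynamic injection” is a simple idea: constraint text is appended to the system prompt based on context flags before each request. The sketch below is hypothetical and only illustrates that shape; the names, strings, and flag are invented, not taken from the leaked implementation:

```typescript
// Hypothetical: append confidentiality constraints to a base system prompt
// when an "undercover" flag is set. All names and strings are invented.
interface PromptContext {
  undercover: boolean;
}

const BASE_PROMPT = "You are a coding assistant.";

function buildSystemPrompt(ctx: PromptContext): string {
  const constraints: string[] = [];
  if (ctx.undercover) {
    constraints.push(
      "Never mention internal project codenames or internal channel names.",
    );
  }
  return [BASE_PROMPT, ...constraints].join("\n");
}

console.log(buildSystemPrompt({ undercover: true }).split("\n").length); // 2
```

The design implication is that the model’s visible behavior can differ per deployment context without any change to the model itself.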

It also includes capabilities for anonymous contributions to open-source projects, raising questions about AI-generated code transparency.

Some in the security community also flagged concerns that this mode could be misused for benchmark gaming — hiding the fact that the model “knows” test questions from its training data. No evidence of such use exists, but now that the mechanism is public knowledge, debates about AI evaluation transparency are unavoidable.

💡 The Irony: The System Designed to Prevent Leaks, Leaked

Undercover Mode was designed to keep Claude from leaking internal information to the outside world. That very mechanism was exposed to the entire world via one missing line in .npmignore.

A complex system painstakingly built to protect secrets collapsed because of a single configuration line.

Other Unreleased Features

| Feature | Description |
| --- | --- |
| BUDDY | Tamagotchi-style companion with 18 creature types (duck, dragon, axolotl, capybara, etc.). Likely intended as an April 1st teaser |
| VOICE_MODE | Voice input/output; a multimodal extension beyond the CLI |
| WEB_BROWSER_TOOL | Built-in browser automation |
| Coordinator Mode | Multi-agent coordination system for dynamically orchestrating multiple agents |
| Frustration detection | Detects user frustration, implemented not with LLM inference but with giant regex patterns; a pragmatic trade-off for cost and latency |
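The frustration-detection trade-off is worth dwelling on: pattern matching costs nothing per message and adds no latency, at the price of brittleness. The actual patterns are not public; this is a hypothetical illustration of the approach, with invented regexes:

```typescript
// Hypothetical frustration detector: cheap regex matching instead of an LLM call.
// The patterns below are invented for illustration, not Anthropic's.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\b(still|again)\s+(broken|failing|not working)\b/i,
  /\bwhy (won'?t|doesn'?t|isn'?t)\b/i,
  /(!{2,}|\?{3,})/,        // repeated punctuation
  /\bugh+\b|\bffs\b/i,
];

function detectFrustration(message: string): boolean {
  return FRUSTRATION_PATTERNS.some((p) => p.test(message));
}

console.log(detectFrustration("why won't this build pass??")); // true
console.log(detectFrustration("looks good, merging"));         // false
```

A false positive here is cheap (the agent is briefly more apologetic), which is exactly the kind of feature where skipping model inference is defensible.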

5. Why This “Reset the Starting Line” for AI Coding Tools

This, I believe, is the real significance of the event.

What Leaked Was the “Harness,” Not the Model

If the model itself had leaked, competitors could have begun replicating its performance. What leaked here was the harness: the control infrastructure, toolchain, and prompt architecture that drives the model.

This is the visible record of Anthropic’s engineering trial and error. Competitors can now skip:

  • Designing the three-stage trigger evaluation system
  • Experimenting with dynamic prompt caching mechanisms
  • Building multi-agent filesystem permission separation logic
  • Inventing context entropy prevention

They can start from a refined, battle-tested architecture immediately — skipping all the dead ends. For GitHub Copilot, Cursor, Windsurf, and Devin, this is an unexpected acceleration.

KAIROS Confirmed “Always-On, Memory-Retaining Agents” as the Next Battleground

With KAIROS now public knowledge, the next phase of AI coding tools is obvious to every competitor. The shift from “a tool you call” to “a partner that thinks alongside you” — Anthropic’s internal code just validated that hypothesis for everyone.

Competitors can now pursue always-on resident agents with full confidence that this direction is validated — without spending the months of uncertainty Anthropic already absorbed.

Anthropic Faces “Acceleration Pressure”

With KAIROS and ULTRAPLAN public, Anthropic will likely feel pressure to release these features earlier than originally planned. If a competitor ships a similar feature first, the first-mover position is lost.

On the flip side, with architectural secrets gone, long-term differentiation may shift from “design uniqueness” toward “foundation model reasoning capability” and “ecosystem integration depth.” You can read the code, but you can’t copy the model’s performance.


Vidar Malware Disguised as “Leaked Claude Code”

Zscaler ThreatLabz found GitHub repositories claiming to contain “leaked Claude Code source code” that actually distributed Vidar v18.7 infostealer and GhostSocks proxy malware.

If you downloaded any “Claude Code source” from GitHub repositories during this period: treat that environment as potentially compromised. Run your endpoint security tools and check for unusual network activity.

The axios Supply Chain Attack — Same Day

On the same day (March 31), the npm package axios — one of the most widely used HTTP client libraries — suffered a separate supply chain attack. This simultaneous event created significant confusion and amplified the incident’s impact.

Check your axios version and verify your package-lock.json for unexpected changes.

Anthropic has long faced copyright disputes over AI training data. Now it has filed DMCA notices against GitHub repositories hosting the leaked code: the company that trained on open data is invoking copyright protection for its own leaked source. The irony was not lost on the open-source community.


Summary

| What Happened | Impact |
| --- | --- |
| Source map misconfiguration | 510,000+ lines of TypeScript publicly accessible |
| Model weights not leaked | Claude’s intelligence itself is intact |
| System prompt design exposed | Anthropic’s architectural trial and error made public |
| KAIROS / ULTRAPLAN revealed | Next AI coding agent battleground confirmed |
| Vidar malware spread | Anyone who downloaded “leaked code” repos may be compromised |

Claude Code’s leaked harness is a case study in how configuration oversight, not sophisticated attacks, is often the actual attack surface. For engineers shipping npm packages: npm pack --dry-run before every release is worth adding to your CI pipeline.
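As a sketch, that dry-run check wired into CI might look like the following GitHub Actions step. The step name and workflow layout are illustrative; the check itself is just the npm pack command shown earlier, failing the build if any .map file would ship:

```yaml
# Illustrative CI step: fail the build if a .map file would be
# included in the published package contents.
- name: Check package contents for source maps
  run: |
    if npm pack --dry-run 2>&1 | grep -q '\.map$'; then
      echo "Source maps found in package contents; aborting publish."
      exit 1
    fi
```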
