If your question is whether Claude Code is safe, the practical answer on April 1, 2026 is: safer than the leaked build, but not something you should trust blindly. The public registry now points latest to 2.1.89, version 2.1.88 is no longer installable, and the replacement tarball does not include cli.js.map.
That reading is grounded in the live npm registry metadata, not just social posts. The point is not to assume the vendor is unsafe forever. The point is to verify that the currently distributed artifact really differs from the build that triggered the incident.
That does not mean users should shrug and move on. A packaging incident is still a supply chain warning. If you use Claude Code in daily workflows, the right response is to update, verify the current package contents, clear stale caches, and review what local trust you have granted the tool in your environment.
What changed after the leak
The public evidence points to a quick containment cycle rather than an open-ended exposure.
- 2.1.88 still has a publish timestamp in the registry metadata, which confirms the version was actually published.
- That same version is not currently installable, which indicates the affected build is no longer available in the registry.
- 2.1.89 is now latest, and the current tarball does not include cli.js.map.
- Public reporting described the event as a release packaging issue, not a customer data breach.
That is why the right framing is not panic. It is verification. When a developer tool has enough access to read code, write files, and run commands, even a non-credential incident should trigger a short post-incident trust review.
Is Claude Code safe right now?
For most users, the risk question is not whether the leak exposed personal prompts. The more relevant question is whether the currently installable build looks clean and whether your local setup limits blast radius if another packaging mistake happens later.
A sensible checklist looks like this:
- Update to the latest installable version instead of relying on whatever was already in a global cache.
- Inspect the tarball contents before rolling the tool out across teams or CI machines.
- Review local tokens and environment variables that the tool can access on your workstation.
- Revisit command permissions if you allow the tool to execute shell actions with broad access.
- Separate personal and production contexts so one agent session cannot see everything by default.
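One quick way to size that blast radius is to list which environment variable names in your shell look like credentials, since anything on that list is readable by a tool you allow to run commands. This is a hypothetical sketch, and the pattern is an assumption you should extend for your own naming conventions:

```shell
# List environment variable NAMES (not values) that look like credentials.
# The pattern is an assumption; extend it for your stack's conventions.
env | cut -d= -f1 | grep -E -i 'token|secret|key|credential|password' | sort
```

If the list is long on the same machine where an agent can execute shell actions, that is a signal to move secrets into a manager or a separate profile before worrying about any single package version.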
That workstation context matters more than many teams admit. If Claude Code runs on a laptop that also has production cloud credentials, GitHub write access, internal VPN routes, and long-lived shell history, the tool inherits a high-trust operating environment. Even when the package itself is clean, your setup can still make the risk too broad.
This is the same mindset behind a good AI app security audit. You are not only asking whether a vendor fixed one file. You are asking whether your own environment treats powerful tooling with the right level of restraint.
How to audit the npm package yourself
You do not need a big reverse engineering workflow to answer the basic trust questions. Untar the current package, look for obvious leftover artifacts, and confirm the entrypoint matches what the package declares.
```shell
# Download the current tarball and look for leftover source maps.
pkg="https://registry.npmjs.org/@anthropic-ai/claude-code/-/claude-code-2.1.89.tgz"
tmp="$(mktemp -d)"
curl -L -s "$pkg" -o "$tmp/pkg.tgz"
# List any .map files shipped in the archive (rg is ripgrep; grep -E works too).
tar -tzf "$tmp/pkg.tgz" | rg "\\.map$"
# Unpack, then confirm the declared entrypoint and check for the leaked file.
tar -xzf "$tmp/pkg.tgz" -C "$tmp"
cat "$tmp/package/package.json"
test -f "$tmp/package/cli.js.map" && echo "map found" || echo "clean"
```

That simple check will not prove a package is secure. It will prove whether the obvious artifact problem is still present. For a fast-moving incident, that is the first thing you want: evidence that the replacement build actually changed.
On shared teams, do one more thing after that check: write down the exact approved version and stop relying on whatever global install already happens to be on a machine. A floating developer-tool install is easy to forget and hard to audit later.
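That pin can be checked mechanically. A minimal sketch, assuming a POSIX shell; the function name and the APPROVED value are your own choices, not anything the vendor publishes:

```shell
# Compare an installed version against the team's approved pin.
APPROVED="2.1.89"   # assumption: your team's documented approved version

check_pin() {
  # $1 = installed version, $2 = approved version
  if [ "$1" = "$2" ]; then
    echo "ok: approved version $2 installed"
  else
    echo "mismatch: installed '$1', approved '$2'" >&2
    return 1
  fi
}

# In practice, feed it the real global install, e.g.:
# installed="$(npm ls -g @anthropic-ai/claude-code --depth=0 --json \
#   | grep -o '"version": *"[^"]*"' | head -n1 | cut -d'"' -f4)"
# check_pin "$installed" "$APPROVED"
```

Wiring this into a bootstrap script means a drifted machine fails loudly instead of quietly running an unreviewed build.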
What users should do next
If Claude Code lives only on a personal laptop, your next steps are short. Update the package, verify the contents, and keep your local auth material tidy. If Claude Code is used in shared engineering workflows, the checklist should be stricter.
- Reinstall from the current latest version rather than trusting an earlier global installation.
- Document the incident internally so teammates know why package verification was required.
- Limit secrets exposure on machines where AI coding tools run regularly.
- Add artifact inspection to release review for any internal CLI your team publishes.
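That last step can be automated as a release gate. A hedged sketch, assuming the function name and the leftover patterns are choices you tune for your own packages:

```shell
# Fail a release if the packed tarball contains debug or secret leftovers.
check_tarball() {
  # $1 = path to a .tgz produced by `npm pack`
  leftovers="$(tar -tzf "$1" | grep -E '\.map$|\.env$|\.pem$' || true)"
  if [ -n "$leftovers" ]; then
    printf 'leftover artifacts found:\n%s\n' "$leftovers" >&2
    return 1
  fi
  echo "tarball clean: $1"
}

# In CI, after `npm pack`:
# check_tarball ./your-package-1.0.0.tgz
```

Running this against the exact artifact you are about to publish, rather than the source tree, is the point: it checks what users will actually download.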
That final point is the lasting lesson. If your organization ships its own tooling, the Claude Code incident is a preview of how a small packaging miss can become a major credibility problem overnight. It belongs in the same conversation as Next.js security scanner checks, leaked bundle reviews, and Vercel API leak scanner workflows.
Lessons for teams shipping AI tools
The best answer to the question of whether Claude Code is safe is neither blind confidence nor performative fear. It is a repeatable operating model: inspect the package, minimize privilege, verify changes after an incident, and keep a clear boundary between developer convenience and production trust.
That is also why a pre-release security scan is worth doing before your own CLI or AI wrapper goes public. Users rarely forgive packaging mistakes, because they feel preventable. In most cases, they are.
FAQ
Should I uninstall Claude Code completely?
Not necessarily. If you depend on the tool, a better first step is to reinstall from the current latest version, inspect the tarball, and confirm the leaked build is not what your machine is using. Full removal only becomes the obvious move if your team cannot verify package provenance or local trust boundaries.
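The reinstall itself is a short sequence. This sketch prints the steps instead of running them, so they can be reviewed first; the cache-clean step is a precaution rather than a documented requirement:

```shell
# Print the clean-reinstall steps for review; run them manually once reviewed.
reinstall_steps() {
  cat <<'EOF'
npm uninstall -g @anthropic-ai/claude-code
npm cache clean --force
npm install -g @anthropic-ai/claude-code@latest
npm ls -g @anthropic-ai/claude-code --depth=0
EOF
}
reinstall_steps
```

The final `npm ls` line is there so you end with written evidence of exactly which version the machine is now running.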
Is version 2.1.89 enough to close the risk?
It appears to address the publicly visible source-map problem, because the current tarball does not include cli.js.map. But one clean artifact is not the same thing as total assurance. You still need sensible local permissions, clean secrets handling, and a habit of verifying high-trust tools after public incidents.
What is the main lesson for teams building their own CLI tools?
Treat the final published package as a security boundary. Test suites, type checks, and build success do not prove that the shipped artifact is clean. Teams should inspect the exact tarball, look for debug leftovers, and make artifact review part of their standard release checklist.