
Detect exposed API keys on a website: practical checks


Free security scan

Is your Next.js app secure? Find out in 30 seconds.

Paste your URL and get a full vulnerability report — exposed keys, missing headers, open databases. Free, non-invasive.

No code access required. Safe to run on production. Actionable report in 30 seconds.

To detect exposed API keys on a website, inspect the live HTML, bundled JavaScript, and browser network calls, then verify whether the value is a harmless public identifier or a real secret. Most leaks appear in inline config, frontend bundles, public JSON files, or request headers. If a value can be copied from the browser and used outside the app, treat it as a production incident and rotate it quickly.

Detect exposed API keys on a website

Start with the places where secrets usually leak during frontend builds or fast-moving releases:

  • Page source and inline scripts
  • Bundled JavaScript files loaded by the page
  • Network requests in browser DevTools
  • Public config files such as /config.json or /env.js

Open the page in an incognito window, then check View Source and the browser DevTools Network tab. Look for variable names such as apiKey, token, secret, auth, clientSecret, or Authorization. A common failure pattern is a build step that injects environment variables into a frontend bundle. Another is a framework that exposes runtime config through a global object like window.config.
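These manual checks can also be scripted. Below is a minimal Python sketch that fetches a page and flags key-like inline assignments. The variable names and the minimum value length are illustrative assumptions, not a complete ruleset:

```python
# Minimal sketch: fetch a page and flag key-like variable assignments in
# the rendered HTML and inline scripts. The name list and the 8-character
# minimum are assumptions; extend both for your own stack.
import re
import urllib.request

SUSPICIOUS = re.compile(
    r'\b(apiKey|api_key|token|secret|clientSecret|auth|Authorization)\b'
    r'\s*[:=]\s*["\']([^"\']{8,})["\']',
    re.IGNORECASE,
)

def find_suspicious_assignments(html: str) -> list[tuple[str, str]]:
    """Return (name, value) pairs that look like inline key assignments."""
    return [(m.group(1), m.group(2)) for m in SUSPICIOUS.finditer(html)]

def scan_url(url: str) -> list[tuple[str, str]]:
    # A plain urllib request carries no cookies, so this sees roughly what
    # an unauthenticated incognito visitor sees.
    with urllib.request.urlopen(url) as resp:
        return find_suspicious_assignments(resp.read().decode("utf-8", "replace"))
```

Run `scan_url` against each high-traffic route, not just the homepage, and review every hit by hand: a match is a lead, not a verdict.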

Do not stop at the homepage. Check the routes that load the most client-side code, especially login, signup, dashboard, billing, and admin pages. The risky value may sit in a lazy-loaded chunk that never appears on the landing page. If you want a broader pass, start with these exposed secrets checks.

Also inspect request payloads. Teams sometimes remove a key from HTML but still send it in a header or query string from the browser. That is still a public exposure, because any visitor can replay what the browser sends.
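A practical way to inspect payloads at scale is to export the DevTools Network tab as a HAR file ("Save all as HAR") and scan it. A rough sketch, with the header and parameter names chosen as examples:

```python
# Sketch: scan a DevTools HAR export for key material sent from the
# browser in request headers or query strings. The name sets below are
# illustrative assumptions; extend them for your own APIs.
import json

SENSITIVE_HEADERS = {"authorization", "x-api-key"}
SENSITIVE_PARAMS = {"api_key", "apikey", "token", "key"}

def leaked_values_in_har(har_text: str) -> list[str]:
    findings = []
    for entry in json.loads(har_text)["log"]["entries"]:
        req = entry["request"]
        for h in req.get("headers", []):
            if h["name"].lower() in SENSITIVE_HEADERS:
                findings.append(f'{req["url"]} header {h["name"]}={h["value"]}')
        for p in req.get("queryString", []):
            if p["name"].lower() in SENSITIVE_PARAMS:
                findings.append(f'{req["url"]} param {p["name"]}={p["value"]}')
    return findings
```

Record a full user flow (login, dashboard, billing) before exporting, so lazy-loaded requests are captured too.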

Know which keys are actually dangerous

Not every key-like string is a breach. Some frontend integrations use public identifiers by design. Others are true secrets and should never reach the browser.

Usually intended to be public:

  • analytics or telemetry write identifiers
  • captcha site keys
  • map or embed public keys
  • client-side environment IDs for feature flags

Usually never safe to expose:

  • admin or private API tokens
  • database credentials
  • signing secrets
  • server webhook secrets
  • tokens with write, delete, or account-level access

The name alone is not enough. A value called publicKey may still have broad privileges, and a value called clientKey may still be abusable if it is missing domain or origin restrictions. The real test is what it can do if copied out of the browser.
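Well-known prefixes can help triage, as long as you treat them as a heuristic rather than a verdict. A sketch using a few documented prefix formats (Stripe, GitHub, AWS, Slack); the lists are intentionally incomplete:

```python
# Heuristic sketch: triage key-like strings by well-known prefixes.
# These prefix lists are examples only. The final test is always what
# the value can do outside the browser, not what it is named.
KNOWN_SECRET_PREFIXES = {
    "sk_live_": "Stripe secret key",
    "ghp_": "GitHub personal access token",
    "AKIA": "AWS access key ID",
    "xoxb-": "Slack bot token",
}
KNOWN_PUBLIC_PREFIXES = {
    "pk_live_": "Stripe publishable key",
}

def classify(value: str) -> str:
    for prefix, label in KNOWN_SECRET_PREFIXES.items():
        if value.startswith(prefix):
            return f"secret ({label}): rotate immediately"
    for prefix, label in KNOWN_PUBLIC_PREFIXES.items():
        if value.startswith(prefix):
            return f"public by design ({label}): verify restrictions"
    return "unknown: verify manually"
```

Anything classified as unknown still goes through the verification steps below; a prefix match just changes how urgently you start.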

The reliability angle matters here. Even when an exposed key does not reveal data, it can still trigger a customer-facing incident. Attackers and bots use leaked client secrets to burn through API quotas, create fake traffic, or send abusive write requests. That can slow your app, exhaust a third-party rate limit, or break a critical flow while your uptime checks still show 200 responses.

How to verify a real leak

Once you find a suspicious value, verify it safely. The goal is to confirm impact without causing damage.

  1. Reproduce from a clean browser. Open the page in incognito and confirm the value is visible to an unauthenticated visitor or normal user.
  2. Check where it came from. Note whether it appears in source, a JS bundle, a public config endpoint, or a request header.
  3. Test minimal access. Make a benign request to a non-destructive endpoint you control, or use a safe internal test environment, to see whether the value works outside the app.
  4. Review restrictions. Check domain, referrer, IP, or scope limits. Restricted keys are still poor hygiene if they were not meant to be public, but unrestricted keys are far higher risk.
  5. Check recent logs. Look for unusual volume, new IP ranges, quota spikes, or repeated failed auth attempts around the affected service.
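The decision logic behind step 3 can be summarized in a small function. The status-code interpretations below follow common HTTP conventions; individual APIs may behave differently, so treat this as a sketch:

```python
# Sketch of the decision logic behind step 3: after one benign, read-only
# request carrying the suspect value, interpret the response status.
# These mappings follow common HTTP conventions; some APIs differ.
def interpret_probe(status_code: int) -> str:
    if status_code in (401, 403):
        return "value rejected or restricted: lower risk, still rotate if private"
    if 200 <= status_code < 300:
        return "value works outside the app: treat as a live incident"
    if status_code == 429:
        return "rate limited: key may be valid, treat as exposed until proven otherwise"
    return "inconclusive: retest against a safe endpoint you control"
```

Keep the probe to a single read-only request, and never test against destructive endpoints or other people's infrastructure.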

A practical rule is simple: if the browser can read it and an external caller can use it, you have an exposed API key problem. If the key cannot be used outside tightly scoped client behavior, you may have a false positive, but you should still document why it is acceptable.

Teams often miss the second part. They confirm that a value is public, but they do not confirm whether it is actually usable. That distinction separates harmless frontend config from a real incident.

Fix and contain the issue

Once confirmed, move fast. The immediate fix is not just deleting the string from a file. You need to remove exposure, invalidate the old value, and check for downstream impact.

  1. Move the secret server-side. The browser should call your backend, and the backend should call the third-party API.
  2. Rotate or revoke the leaked value. Assume it was copied as soon as it was public.
  3. Purge caches and redeploy. Old JS chunks can stay available behind a CDN, in service workers, or in browser cache.
  4. Audit logs and quotas. Look for misuse, failed requests, unexpected writes, or spend spikes.
  5. Check all environments. Staging and preview deployments often leak the same pattern.
  6. Add a release gate. Scan rendered pages and built assets before shipping.
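Step 6, the release gate, can start as a simple script in CI. A sketch that scans a build output directory for secret-shaped strings; the patterns and the `dist` path are assumptions to adapt to your build:

```python
# Sketch of a CI release gate: scan built assets for secret-shaped
# strings before deploy. The patterns and the "dist" directory name are
# assumptions; adapt both to your stack.
import re
from pathlib import Path

SECRET_PATTERN = re.compile(
    r"(sk_live_[0-9A-Za-z]{10,}|AKIA[0-9A-Z]{16}|ghp_[0-9A-Za-z]{20,})"
)

def scan_build_dir(build_dir: str) -> list[str]:
    findings = []
    for path in Path(build_dir).rglob("*"):
        if path.is_file() and path.suffix in {".js", ".map", ".json", ".html"}:
            text = path.read_text(encoding="utf-8", errors="replace")
            for match in SECRET_PATTERN.finditer(text):
                # Truncate matches so the secret never lands in CI logs.
                findings.append(f"{path}: {match.group(0)[:12]}...")
    return findings

# In CI: fail the deploy if scan_build_dir("dist") is non-empty.
```

Note that `.map` files are included deliberately: source maps often reproduce the original environment values even when the minified bundle obscures them.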

A common post-fix mistake is rotating the key but leaving the root cause in place. The next deploy reintroduces the same leak with a fresh token. Use a repeatable prelaunch review, such as this security review guide, to catch build-time exposure before it reaches production.

Also check source maps, static asset directories, and framework-generated runtime config. Many leaks come from convenience shortcuts like exposing an environment variable to make a frontend call work quickly. That shortcut turns a private integration into a public one.

Add ongoing detection

This issue is not a one-time audit. Frontend bundles change constantly, and a harmless refactor can expose a secret during the next release. The safer approach is continuous detection on public pages and critical flows.

Monitor the pages that matter most, then search the rendered response and loaded assets for suspicious patterns. Useful checks include:

  • strings that look like private tokens
  • unexpected Authorization values in client requests
  • new public config endpoints
  • key material appearing on login, signup, or billing routes
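A lightweight version of these checks is to fingerprint critical pages and alert on change, then run a secret scan on whatever changed. A sketch, assuming server-rendered HTML with ordinary script tags:

```python
# Sketch: content-aware monitoring of a critical route. Fingerprint the
# rendered response plus the set of loaded script URLs, so both content
# edits and newly introduced bundles trigger an alert.
import hashlib
import re

SCRIPT_SRC = re.compile(r'<script[^>]+src="([^"]+)"')

def page_fingerprint(html: str) -> str:
    scripts = ",".join(sorted(SCRIPT_SRC.findall(html)))
    return hashlib.sha256((html + "|" + scripts).encode()).hexdigest()

def changed(previous_fp: str, html: str) -> bool:
    return page_fingerprint(html) != previous_fp
```

Store the fingerprint per route, compare on a schedule, and treat any change on login, signup, or billing pages as a trigger for a fresh secret scan rather than an incident by itself.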

This is where uptime and security overlap. A key leak may not take the site down immediately, but it can lead to quota exhaustion, fraud, or third-party API failures that later break user flows. Continuous external website monitoring helps you catch risky changes after deploy, not days later during an audit.

For SaaS teams, treat secret exposure like any other production regression. Monitor your public pages, watch critical routes for content changes, and alert when new assets or response patterns appear. A lightweight website monitoring setup gives you visibility into changes that normal availability checks miss.

Final check

If a value appears in the browser, ask two questions: was it meant to be public, and can it be abused outside the app? That gives you the fastest path to the right response. Remove real secrets from the client, rotate them, and keep scanning after every deploy.

FAQ

Are all API keys on a website a security issue?

No. Some frontend integrations use public identifiers that are designed to be exposed in the browser. The issue is not the presence of a key-shaped string by itself. The real question is whether the value grants sensitive access, can be reused outside the app, or lacks proper restrictions.

Where do exposed keys usually appear?

Most leaks show up in inline scripts, compiled JavaScript bundles, public config files, or request headers sent from the browser. They also appear in lazy-loaded chunks on login, signup, and billing pages. Checking only the homepage misses many real exposures.

Should I rotate the key even after removing it from the page?

Yes, if the value was not meant to be public. Once a secret is exposed in a live page or asset, assume it may have been copied by bots, crawlers, browser extensions, or anyone viewing source. Removing it from the page does not invalidate the old credential.

Can monitoring catch this after deployment?

Yes, if you monitor rendered pages and public assets for suspicious patterns, unexpected config output, or changed JavaScript bundles. Standard uptime checks will not catch most secret leaks. You need content-aware checks on public routes, plus alerts when high-risk pages change unexpectedly.

If you want faster visibility into risky frontend changes on production pages, AISHIPSAFE can help with monitoring and alerts that fit normal SaaS incident response.
