AI Security · llm-security · CI/CD · Supply Chain

Your AI Stack Has a Supply Chain Problem - and TeamPCP Just Proved It

5 min read

The first confirmed supply chain attack on a core LLM routing library landed today. The target: litellm, the Python package most enterprise teams use to route calls between OpenAI, Anthropic, AWS Bedrock, Google Vertex, and their local models. If it's in your stack — and there's a decent chance it is — you need to check your environment before reading the rest of this.

Malicious versions 1.82.7 and 1.82.8 have been pulled from PyPI. That's the good news. The bad news is that the attack vector, the threat actor, and the targeting logic all point to something much bigger than one compromised package.

How They Got In

TeamPCP didn't break into BerriAI (litellm's maintainer) directly. They didn't need to. They got in through the side door — specifically, through credentials stolen during their compromise of Aqua Security's Trivy vulnerability scanner last week.

Trivy is the most widely used open-source container security scanner. Lots of projects use it in their CI/CD pipelines. Including, apparently, litellm's. When TeamPCP hijacked Trivy's GitHub Actions by force-pushing malicious commits to 75 of 76 version tags, they didn't just hit Trivy users — they started harvesting the CI/CD secrets of every project that called aquasecurity/trivy-action. That gave them the credentials they needed to push to litellm's PyPI account.

This is a cascade attack. Not a zero-day in litellm. Not a breach of litellm's infrastructure. Just a patient, methodical exploitation of the trust relationships your build pipeline implicitly extends to every tool it calls.

What the Payload Actually Does

Three stages, each designed to maximize persistent access before detection:

Stage 1 — Credential sweep. The malicious package scans for SSH private keys, AWS/GCP/Azure credentials, Kubernetes configs, .env files, cryptocurrency wallets, and anything else in predictable locations. Exfiltrated to attacker-controlled infrastructure immediately on import.

Stage 2 — Kubernetes lateral movement. If the environment has Kubernetes access (common in production AI deployments), the payload deploys privileged pods to every reachable node. At this point the attacker has container escape capability and can reach anything on the same cluster.

Stage 3 — Persistent backdoor. A sysmon.service systemd service is installed to survive reboots, polling an external C2 for additional binaries. The name is deliberately chosen to blend in with legitimate system monitoring services.

The whole thing ships in what looks like a routine patch-level version bump. Your dependency manager pulls it automatically if you're not pinning exact versions.
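To make the "pulls it automatically" point concrete: a compatible-release pin like litellm~=1.82.0 accepts any 1.82.x, so both malicious releases satisfy it, while an exact == pin does not. This sketch hand-rolls that comparison for illustration — it mirrors the ~= semantics for simple three-part versions, not pip's full resolver:

```python
def parse(version):
    """Split a simple major.minor.patch version string into an int tuple."""
    return tuple(int(part) for part in version.split("."))

def compatible_matches(pin, candidate):
    """True if candidate satisfies a compatible-release pin (~=pin):
    same major.minor, and at least the pinned patch level."""
    base, cand = parse(pin), parse(candidate)
    return cand[:2] == base[:2] and cand >= base
```

So a requirements line of litellm~=1.82.0 happily resolves to 1.82.7 or 1.82.8 on the next install, which is exactly how the tampered versions propagate without anyone touching a requirements file.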

Why litellm Specifically

Here's what makes this different from a generic PyPI supply chain attack. litellm isn't just another utility library — it's the routing layer between your application and every LLM API you call. If you're using it, the litellm process holds your OpenAI API key, your Anthropic API key, your AWS Bedrock role, your Google Vertex credentials, and whatever internal model configs you've set up.

A backdoor here doesn't just expose your CI/CD environment. It exposes your entire AI API surface. Every key. Every model endpoint. The configuration for your AI stack.

First time I've seen a supply chain attack engineered specifically to sit in the LLM routing layer. It's an obvious target in hindsight — high privileges, trusted by almost every enterprise AI deployment, and historically treated as infrastructure rather than application code (meaning less scrutiny, less review).

The Broader Pattern

TeamPCP has now compromised five ecosystems in this campaign: GitHub Actions and Docker Hub (via Trivy), npm (CanisterWorm spreading across 47 packages), Open VSX (GlassWorm hitting the VS Code extension registry on March 23 — a separate but linked attack using credentials from the same Trivy breach), and now PyPI (litellm). Each ecosystem's credentials become the key for the next door.

The security industry has spent years warning about supply chain attacks in general. But the AI security community specifically hasn't been thinking about AI tooling as a supply chain attack surface. We treat litellm, LangChain, llamaindex, and similar packages the way 2015-era developers treated npm packages — as trusted dependencies you add without much scrutiny.

Pentera's AI Security & Exposure Benchmark 2026 (surveying 300 US CISOs) found 67% have limited visibility into how AI is being used across their organization. I'd bet most of those same CISOs don't have litellm on their software inventory at all, let alone monitoring for tampered versions.

What to Do Right Now

If you run litellm anywhere — in production, in CI, in developer environments:

Check for versions 1.82.7 or 1.82.8. If found, treat the entire environment as compromised: rotate every credential that was accessible to that process, audit Kubernetes for unexpected privileged pods, check for sysmon.service in systemd, and review outbound network connections from that host.
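The version check itself is a one-liner against package metadata. A minimal sketch using the standard library — the known-bad version set comes from the report above; everything else here is generic:

```python
from importlib import metadata

# The two tampered releases named in the advisory.
BAD_VERSIONS = frozenset({"1.82.7", "1.82.8"})

def is_compromised(version, bad=BAD_VERSIONS):
    """True if this litellm version string is a known-bad release."""
    return version in bad

if __name__ == "__main__":
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        print("litellm is not installed in this environment")
    else:
        verdict = "KNOWN BAD - treat host as compromised" if is_compromised(installed) else "not a known-bad version"
        print(f"litellm {installed}: {verdict}")
```

Run it in every virtualenv, container image, and CI runner that might have litellm — a clean result on your laptop says nothing about the build box.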

Pin your litellm version explicitly (or your whole dependency tree, ideally). pip install litellm==1.82.6 until there's a verified clean release post-1.82.8.
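If you want to enforce that pinning policy rather than trust people to remember it, a rough lint over requirements files catches the loose specifiers. This is deliberately crude — a regex, not a full PEP 508 parser — so it flags anything that isn't a plain name==version line:

```python
import re

# Matches "name==x.y.z" with optional trailing comment. Anything else
# (>=, ~=, bare names, VCS URLs) is treated as a loose, unpinned spec.
EXACT_PIN = re.compile(r"^\s*[A-Za-z0-9._-]+\s*==\s*[\w.]+\s*(#.*)?$")

def loose_requirements(lines):
    """Return requirement lines that are not exact == pins."""
    loose = []
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        if not EXACT_PIN.match(stripped):
            loose.append(stripped)
    return loose
```

Wire it into CI so a pull request that loosens a pin fails the build instead of silently widening your exposure.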

Audit every other package that uses Trivy in its CI/CD pipeline. BerriAI isn't the only one. If your project runs aquasecurity/trivy-action in GitHub Actions and you haven't audited those runs since March 1, that's a gap.
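Finding which of your repos call trivy-action is a text search over workflow files. A minimal sketch — it assumes the standard .github/workflows layout and does a plain substring match, so pinned SHAs and version tags are all caught:

```python
from pathlib import Path

def workflows_using_trivy(repo_root):
    """Yield workflow files under repo_root that reference
    aquasecurity/trivy-action in any form."""
    wf_dir = Path(repo_root) / ".github" / "workflows"
    if not wf_dir.is_dir():
        return
    for wf in sorted(wf_dir.glob("*.y*ml")):  # .yml and .yaml
        if "aquasecurity/trivy-action" in wf.read_text(errors="ignore"):
            yield wf

if __name__ == "__main__":
    for hit in workflows_using_trivy("."):
        print(f"uses trivy-action: {hit}")
```

Point it at every repo you own, then pull the Actions run logs for those workflows since March 1 and look at what secrets those jobs could read.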


Gigia Tsiklauri is a Security Architect and founder of Infosec.ge. Get in touch if you want someone to find the holes before someone else does.