Zero-Day Dawn

"Supply Chain Is The New DNS"

When the tool that protects your AI pipeline is the tool that compromises it, every governance artifact built on top of it becomes fiction overnight

Violeta Klein, CISSP, CEFA
Mar 30, 2026

Executive Summary

DNS is the invisible layer that translates every web address into a destination — and when it breaks, nothing works even though nothing looks broken. The AI supply chain has become the same kind of invisible dependency. Every enterprise AI system — from demand forecasting agents to credit decisioning models — runs on a stack of open-source components that route the calls, verify the code, and connect the models to production infrastructure. Nobody in the boardroom thinks about those components. They are assumed to work. They are assumed to be trustworthy. They are assumed to be what they claim to be.

Last week, one of the most widely deployed components in that invisible layer was silently replaced by an attacker — and the entry point was the security scanner that was supposed to protect it. The compromise is not contained. The attacker retains persistent access to every affected system, deployable at any time.

The enterprise AI stack now has a structural vulnerability that no governance framework on the market is designed to detect. When a component underneath your AI system is silently replaced, every governance artifact your organization filed becomes a description of a system that no longer exists — the technical documentation, the risk assessment, the conformity assessment all describe something that stopped being real the moment the compromised component executed. The security team will find it and patch it. The compliance team will not know that the foundation underneath their documentation changed, because nothing in the current governance architecture connects the security team’s remediation workflow to the compliance team’s regulatory filing obligation.

The financial exposure is not abstract. The penalty ceiling under the EU AI Act runs to €15 million or 3% of global annual turnover, whichever is higher. The reporting clocks — 15 days under the AI Act, 24 hours under NIS2 — start running the moment the organization becomes aware of the incident. For organizations that deployed AI agents in supply chain operations, logistics, or financial services, the real cost is the fine compounded by the operational disruption, the reputational fallout, and the discovery that the governance program the board funded was governing a fiction.

As Gadi Evron put it last week at RSA: “Supply chain is the new DNS.” He meant it as a security warning. The governance consequence hasn’t landed yet. This piece shows where it lands.


The Comfortable Lie

Here is what the market wants to believe:

Agentic AI is transforming supply chains through autonomous execution, and the governance challenge is under control. AI agents forecast demand, optimize logistics, manage inventory, schedule production, and reroute shipments — all in real time, all at scale, all with minimal human intervention. Trusted guardrails keep the system within its boundaries. Human oversight handles the exceptions. The infrastructure underneath is stable, verified, and trustworthy.

That belief is everywhere this year. EY describes agentic AI as enabling “autonomous decision-making and task execution” that will “unlock unprecedented value” for supply chain executives. Microsoft has deployed over 25 AI agents across its own supply chain operations, with a target of 100 by year-end, and published a reference architecture for multi-agent orchestration spanning demand planning, logistics, and warehouse management. IBM frames AI agents as systems that “perceive incoming data, reason about possible actions, and act in context rather than following fixed instructions,” predicting that by 2028, a third of enterprise software applications will include agentic AI. SAP calls 2026 the year AI agents “become team members” and describes a future where “copilots embedded in planning workspaces handle repetitive analysis while people focus on scenario choice and exception management.” Forbes reports that agentic AI “can reason through a situation, plan next steps, and execute actions across systems,” representing “the biggest change enterprise software has seen in years.”

Every one of these visions shares the same unexamined assumption: the components underneath the agents are what they claim to be. The models are clean. The APIs are authentic. The libraries behave as documented. The security scanner protecting the CI/CD pipeline is actually protecting the CI/CD pipeline.

Last week, that assumption broke down — and it did so through the security layer itself.


The Breach

On March 24, 2026, security firm Semgrep published a detailed technical analysis of a multi-stage supply chain attack that cascaded from a security scanner into the AI infrastructure underneath enterprise deployments. The attack is ongoing, the threat actors are still active, and the full scope of the compromise remains unknown.

It started with Trivy — an open-source vulnerability scanner made by Aqua Security that is widely used across the industry to find vulnerabilities in CI/CD pipelines before builds are published. In late February, an automated bot exploited a workflow misconfiguration to steal credentials from the Aqua Security GitHub organization. Aqua rotated credentials, but the attacker retained access through a single bot account with write and admin privileges across both the public and internal GitHub organizations. With that access, the attackers — a group called TeamPCP — pushed a malicious Trivy release that ran a credential stealer alongside the legitimate scanner, force-pushed 75 of 76 version tags in Trivy’s GitHub Actions so that anyone referencing those actions by tag pulled the infostealer into their pipeline, and pushed malicious Docker images with no corresponding GitHub releases.
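The tag force-push is the detail worth pausing on: a GitHub Actions reference pinned to a tag follows that tag wherever it is moved, while a reference pinned to a full commit SHA does not. A minimal audit sketch for this distinction — the regexes and example references below are illustrative, not taken from the actual attack:

```python
# Sketch: flag "uses:" references in a GitHub Actions workflow that are
# pinned to a mutable tag rather than an immutable 40-character commit SHA.
# A force-pushed tag silently changes what a tag-pinned reference resolves to.
import re

SHA_PIN = re.compile(r"uses:\s*\S+@[0-9a-f]{40}\b")   # immutable pin
ANY_USES = re.compile(r"uses:\s*(\S+@\S+)")           # any action reference

def mutable_action_refs(workflow_text: str) -> list[str]:
    """Return action references that are not pinned to a full commit SHA."""
    refs = []
    for line in workflow_text.splitlines():
        m = ANY_USES.search(line)
        if m and not SHA_PIN.search(line):
            refs.append(m.group(1))
    return refs
```

Anyone whose pipeline referenced a compromised action by tag — the common default — pulled whatever the moved tag now pointed at.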

The stolen credentials gave TeamPCP access to downstream projects — and one of those projects was LiteLLM.

LiteLLM is not a peripheral library. It is the unified API gateway that enterprises use to route calls across multiple LLM providers — OpenAI, Anthropic, Google, Mistral — through a single interface. In enterprise deployments, LiteLLM operates as the AI routing layer: managing provider selection, budget controls, authentication, and model flexibility underneath applications that were written to a single API standard. Its proxy server is the most widely deployed feature in enterprise contexts, and it sits at the exact layer where enterprise AI systems connect to the models they depend on.
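The routing-layer idea can be sketched in a few lines: applications call one interface, and a gateway maps the model identifier to a provider backend. The dispatch rule and provider names below are a simplified illustration, not LiteLLM's internals:

```python
# Minimal sketch of a unified AI routing layer: one entry point,
# provider selection derived from the model name. Illustrative only.
def route(model: str) -> str:
    """Pick a provider backend from the model identifier."""
    prefixes = {
        "gpt": "openai",
        "claude": "anthropic",
        "gemini": "google",
        "mistral": "mistral",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider for model {model!r}")
```

A component sitting at this chokepoint sees every model call, every credential, and every budget control — which is exactly why compromising it is so valuable.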

LiteLLM used Trivy in its own CI/CD pipeline — the security scanner that was supposed to protect its code was the mechanism through which the malware entered.

The attack inside LiteLLM was technically precise and deliberately difficult to detect. Rather than using a postinstall hook — a technique developers have learned to watch for — the malware dropped a .pth file into Python’s site-packages directory. Python auto-executes .pth files on every interpreter startup, which means the malware triggers not when you import litellm, but when you run any Python process at all, including something as innocuous as python --version.
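The .pth mechanism is documented Python behavior: the site module reads .pth files in site-packages at every startup, and any line beginning with "import" is executed as code. A defensive sketch that enumerates such lines — the function name and scan scope are mine, not from the published analysis:

```python
# Sketch: list .pth lines that execute code at interpreter startup --
# the persistence mechanism described above. Python's site module runs
# any .pth line starting with "import" before your program ever loads.
import site
import sysconfig
from pathlib import Path

def find_executable_pth_lines() -> list[tuple[str, str]]:
    """Return (pth_file, line) pairs for startup-executed .pth lines."""
    dirs = set(getattr(site, "getsitepackages", lambda: [])())
    dirs.add(sysconfig.get_paths()["purelib"])
    findings = []
    for d in dirs:
        path = Path(d)
        if not path.is_dir():
            continue
        for pth in path.glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):
                    findings.append((str(pth), line))
    return findings
```

Legitimate packages also use executable .pth lines, so output needs human review — but a .pth file no installed package claims is exactly the artifact this attack leaves behind.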

The credential harvesting was comprehensive in a way that security researchers described as unprecedented in supply chain attacks. The malware exfiltrated SSH keys, AWS credentials including full IMDSv2 token flows and Secrets Manager enumeration, GCP and Azure credentials, Kubernetes tokens and service account secrets, environment configuration files across all standard naming conventions, shell history, git credentials, Docker registry authentication, Terraform state files containing infrastructure secrets, TLS private keys, and even cryptocurrency wallet keys. If the malware detected a Kubernetes environment with a permissive service account, it escalated from credential theft to full cluster compromise — creating privileged DaemonSets across every node including the control plane, mounting the host filesystem, and installing a persistent backdoor directly onto the underlying host.
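The phrase "all standard naming conventions" is doing real work in that list. A sketch of what such a sweep targets — the concrete filename patterns here are my assumptions about common conventions, not the malware's actual list:

```python
# Sketch: filename patterns a credential sweep of the kind described above
# would plausibly cover. Patterns are illustrative assumptions.
import fnmatch

ENV_PATTERNS = [".env", ".env.*", "*.env", "credentials", "*.tfstate", "id_rsa"]

def looks_like_secret_file(name: str) -> bool:
    """True if a filename matches a common secret-bearing convention."""
    return any(fnmatch.fnmatch(name, p) for p in ENV_PATTERNS)
```

The breadth matters for response: if any one of these conventions was in use on a compromised host, the corresponding credentials have to be treated as stolen.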

The exfiltrated data was encrypted with AES-256-CBC, the session key wrapped with the attacker’s RSA-4096 public key, and transmitted to a domain designed to mimic LiteLLM’s legitimate infrastructure. This encryption architecture means that even if network traffic is intercepted, the stolen credentials cannot be recovered without the attacker’s private key — which means affected organizations cannot determine with certainty what was taken, and must assume the worst when deciding what to rotate.
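The scheme described is textbook hybrid encryption: a fresh symmetric key per exfiltration, wrapped with a public key only the attacker can reverse. A sketch of the pattern using the third-party `cryptography` package — parameter choices beyond AES-256-CBC and RSA-4096 (such as the OAEP padding) are my assumptions:

```python
# Sketch of the hybrid-encryption exfiltration scheme described above.
# Only the RSA public key ships with the malware; without the attacker's
# private key, intercepted traffic reveals nothing.
import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

attacker_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = attacker_key.public_key()  # the only key present on victims

def encrypt_exfil(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt data with a one-time AES-256-CBC key, RSA-wrap the key."""
    session_key = os.urandom(32)                  # AES-256 session key
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()          # CBC needs block padding
    padded = padder.update(data) + padder.finalize()
    enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    wrapped_key = public_key.encrypt(             # only attacker can unwrap
        session_key,
        asym_padding.OAEP(mgf=asym_padding.MGF1(hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, iv, ciphertext
```

This is why "assume the worst" is the only defensible rotation policy: defenders holding a full packet capture still cannot enumerate what was taken.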

The persistent backdoor installed on compromised systems polls a command-and-control endpoint every fifty minutes. When no active campaign is running, the endpoint returns a YouTube URL — the dormancy signal. The infrastructure is silent, but fully operational. The attacker retains arbitrary code execution on every compromised host, deployable at any time they choose.
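The dormancy pattern is simple to sketch: the implant treats one class of response as "do nothing" and anything else as tasking. The classification rule and function names below are illustrative reconstructions from the published description, not recovered attacker code:

```python
# Sketch of the dormancy-signal beacon described above: poll a C2 endpoint,
# treat a YouTube URL as "stay dormant", anything else as tasking.
POLL_INTERVAL_SECONDS = 50 * 60  # "every fifty minutes"

def interpret_beacon(body: str) -> str:
    """Classify a C2 response: a YouTube URL means no active campaign."""
    if "youtube.com" in body or "youtu.be" in body:
        return "dormant"
    return "execute"  # any other response would be treated as a command

def beacon_once(fetch) -> str:
    """One poll cycle: fetch the C2 response and classify it."""
    return interpret_beacon(fetch())
```

From a network-monitoring standpoint this is the hard part: the steady-state traffic is a periodic request that receives an innocuous-looking URL, indistinguishable from noise until the day it isn't.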

TeamPCP shared their motive on their Telegram channel, referring to the security vendors they had compromised: “These companies were built to protect your supply chains yet they can’t even protect their own.”

The irony is real. But the consequence extends far beyond the security vendors — and far beyond what any security team can remediate with a patch.


The full regulatory analysis — including where the EU AI Act's provider conversion trap applies to off-the-shelf supply chain agents, why operational gridlock qualifies as a mandatory serious incident filing, and the three questions your organization must answer before your next board meeting — continues below for paid subscribers.
