Zero-Day Dawn

Zero-Day Dawn

Clean Logs. €15 Million Problem.

IAM governs access. It doesn't govern intent. The EU AI Act holds you liable for both.

Violeta Klein, CISSP, CEFA's avatar
Violeta Klein, CISSP, CEFA
Mar 16, 2026
∙ Paid

Executive Summary

Your agent’s service account has scoped permissions. Least privilege enforced. RBAC clean. The IAM audit passes. Every security team in every enterprise running agentic AI signs off on this architecture. It is the standard.

It is also the blind spot.

An agent with authorized read access to an HR database and authorized write access to an external email API can autonomously chain those two permissions into a workflow that sends employee records to a third party. No privilege was escalated. No authorization was breached. The access log is clean. The outcome is an unassessed operation in an employment domain — and nobody in the organization knows it happened.

IAM was built for human users who make one decision at a time. Agents don’t work that way. They chain thousands of authorized actions into emergent workflows that no access control framework was designed to evaluate. The Cloud Security Alliance found that 50% of enterprises rely on traditional IAM and RBAC as the primary authorization mechanism for their agents. Half of all organizations deploying autonomous systems are governing them with tools built for human users clicking through permission prompts.

The EU AI Act does not distinguish between unauthorized access and authorized access that produces an ungoverned outcome. The obligation attaches to the outcome — what the system functionally does to people. Five governance frameworks — the EU AI Act, NIST, OWASP, Singapore’s Model AI Governance Framework, and ForHumanity’s multi-agent certification scheme — all assume that controlling access controls behavior.

The agent proves otherwise. Every framework built on this assumption has a structural blind spot in the same place. This article maps where it is.


The Assumption

Here is the sentence that will not survive enforcement:

“As long as we enforce strict Least Privilege and RBAC on the agent’s service account, it can’t do anything it’s not supposed to.”

Every CISO deploying agentic AI believes some version of this. The logic is intuitive: restrict what the agent can reach, and you restrict what the agent can do. Behavior is bounded by permissions.

For human users, that logic holds. A human makes one decision at a time. The authorization framework evaluates each action independently because humans execute actions independently.

Agents compose. They chain authorized operations into workflows that nobody designed, nobody reviewed, and nobody approved. Each individual action is within scope. The composed workflow is ungoverned. IAM evaluates access — can this identity reach this resource? It does not evaluate intent — what is this identity trying to accomplish? It does not evaluate composition — what happens when three authorized actions produce an outcome none of them would produce alone?

The CSA data confirms this is not an edge case. 50% of organizations use IAM roles or policies as the primary authorization mechanism for agents. 44% use static API keys. 72% cannot trace agent activities across environments.

The gap between “authorized access” and “governed outcome” is where the entire liability sits. Five governance frameworks assume it does not exist.


The Scenario

A mid-sized financial services firm deploys an internal research agent. The agent has access to three systems: a customer relationship management platform, a market data API, and an internal communications tool. All three connections are authorized, scoped, and documented. The agent’s declared purpose is market research synthesis — pulling public data, generating summaries, flagging trends.

The agent receives a routine prompt: assess the potential impact of a market downturn on the firm’s client base. To complete the task, it queries the CRM for client portfolio data. It cross-references that data against the market data API. It identifies clients with concentrated exposure to affected sectors. It generates a prioritized risk assessment — ranking individual clients by vulnerability — and sends the summary to the relationship management team via the internal communications tool.

Every action was authorized. Every tool was within scope. The IAM audit log shows three clean API calls and one internal message. No privilege was escalated. No anomaly detected.

The agent has performed an assessment of individual clients’ financial vulnerability. It has generated a ranking that will influence which clients receive outreach and which do not — a determination that affects access to financial services. Under the EU AI Act, a system that evaluates the creditworthiness of natural persons or assesses risk in relation to natural persons in the case of life and health insurance operates in an Annex III domain. The agent has entered that domain through its own runtime behavior — not through any configuration change, not through any human decision to expand its scope, but through the autonomous composition of individually authorized tool calls.

The firm’s CISO sees a clean access log. The firm’s compliance lead — if they ever see the output — sees an unregistered, unassessed high-risk AI system operating in a regulated domain without conformity assessment, without risk management documentation, without human oversight, and without the technical documentation the regulation requires before any high-risk system is put into service.

The agent did not break any rules. It composed a workflow from authorized components that crossed a regulatory boundary nobody mapped. The access was governed. The outcome was not.


The Composition Gap

The structural failure is not a bug in IAM. IAM does what it was designed to do — evaluate discrete access requests against defined policies. The failure is in assuming that access-level control translates to behavior-level governance when the system determining its own behavior is autonomous.

Human users produce linear workflows. One action, one decision, one outcome. The authorization framework evaluates each action independently because human users execute actions independently. The composed behavior is the sum of discrete, intentional human choices.

Agents produce emergent workflows. The execution path is not specified at design time. It emerges at runtime. The agent selects tools based on intermediate results. It sequences actions based on its interpretation of the goal. It chains operations that were individually authorized into compositions that were never assessed. The authorization framework sees each component. It cannot see the composition — because it was never designed to evaluate compositions.

OWASP identified this in the Top 10 for Agentic Applications. The mitigation for tool misuse recommends defining “per-tool least-privilege profiles” — restricting each tool’s permissions and data scope individually. The recommendation is technically sound and structurally insufficient. An agent can chain two perfectly restricted, read-only tools into a data exfiltration workflow. Tool-level restriction does not equal workflow-level restriction. The gap between them is where the liability lives.

Every governance framework attempting to address agentic AI hits this same wall. The EU AI Act, NIST, OWASP, Singapore’s Model AI Governance Framework, ForHumanity’s multi-agent certification scheme — all of them assume that if you define the system’s boundaries before runtime, the system will operate within those boundaries during runtime. Agents are architecturally designed to determine their own operational boundaries. The assumption underneath every framework is the thing agents were built to violate.

What follows maps where each framework breaks — the specific provision, the specific assumption, and what it costs when an agent operating within authorized access produces an outcome that none of these instruments can govern.

The full analysis — where Singapore’s agent governance framework eliminates the thing it’s trying to govern, where ForHumanity’s audit criteria demand documentation of something that doesn’t yet exist, where NIST’s reliability definition collapses for systems whose operational conditions change per execution, where the EU AI Act’s conformity assessment certifies a system that stops existing the moment it operates, the convergence pattern across all five frameworks, and the operational methodology for building governance that evaluates intent and outcome rather than access — continues below for paid subscribers.

Zero-Day Dawn is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

User's avatar

Continue reading this post for free, courtesy of Violeta Klein, CISSP, CEFA.

Or purchase a paid subscription.
© 2026 Quantum Coherence LLC · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture