(Un)governable: Agent Identity vs. Agentic Intent
The credential is bounded. The agent's intent is not. The EU AI Act holds you liable for both.
Executive Summary
Your security team has scoped the agent's permissions. Least privilege enforced. Service account credentials rotated. RBAC reviewed quarterly. The IAM dashboard shows green. The non-human identity audit passes.
The auditor will not ask whether the agent had permission. They will ask what the agent decided to do with it.
That is the question every governance framework currently struggles to answer — and it is the question every breach now turns on. The security architecture community has converged on a useful distinction between traditional non-human identity and Agent Identity. NHI is the legacy category — service accounts and API keys provisioned with fixed scopes that do not change while the credential lives. Agent Identity is the upgrade — credentials issued for a specific task, cryptographically bound, withdrawn when the task ends. Both sit at the permission layer. The agent operates one layer above.
Permission governs what an entity is allowed to touch. Intent governs what it decides to do with what it touched. The gap between the two is where the regulatory liability sits.
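The gap can be made concrete in a few lines. This is a hypothetical sketch, not a real IAM API: the scope names and the `permitted` helper are invented for illustration. The point is that a permission check takes a scope as input, never an intent.

```python
# Hypothetical sketch; scope names are illustrative, not a real IAM API.
# The lock decides what the hand can reach; it says nothing about
# what the hand does once inside.

GRANTED_SCOPES = {"crm:read"}  # assumed credential scope

def permitted(scope: str) -> bool:
    """Permission layer: the only question IAM can answer."""
    return scope in GRANTED_SCOPES

# The check passes identically for both intents below.
benign = ("crm:read", "look up one customer for a support ticket")
harmful = ("crm:read", "enumerate and summarize the full customer table")

assert permitted(benign[0]) and permitted(harmful[0])
# Nothing at this layer distinguishes the two -- intent is not an input.
```

The check is not broken; it is answering the only question it was built to answer.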
The EU AI Act does not distinguish between unauthorized access and authorized access producing an ungoverned outcome. The obligation attaches to what the system functionally does to people. Article 14 requires effective human oversight. Article 15 requires the system to achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout the lifecycle. Article 26(6) requires the deployer to retain logs for at least six months.
None of these obligations are satisfied by an IAM attestation.
Permission attests that access was scoped. The regulator asks what the access produced.
This piece shows where the distinction lives, why permission discipline does not close it, and what the regulator and the breach are asking now.
The Comfortable Lie
Here is what the market wants to believe:
If you scope the agent's permissions tightly enough, you have governed the agent.
NHI vendors sell tighter scopes. IAM platforms sell ephemeral tokens. Identity teams sell zero-trust architectures. The pitch is the same everywhere — bound the credential, bound the agent.
This is the comfortable lie. It persists because the alternative is harder.
The alternative requires admitting that permission and intent are different governance surfaces. Permission is the lock. Intent is the hand. The lock decides what the hand can reach. It does not decide what the hand does once it is inside.
Every NHI compliance attestation in production today describes permission state. The regulator and the breach both ask about behavioral state. The attestation cannot answer either.
The Distinction
The security architecture community has done the conceptual work. Traditional NHI was built for service accounts that did one thing — a workload with a static credential, a coarse-grained scope, a deterministic action. The permission was the intent. They were the same object.
Agents break that coupling. Same identity. Same scope. Unbounded space of intents.
NIST has put this question on the public record. In its February 2026 NCCoE concept paper on agent identity and authorization, the federal authors ask, in plain language, how an agent might convey the intent of its actions. They list it alongside authentication, key management, and least-privilege as one of the open problems the standards stack does not yet solve.

Their proposed toolbox — OAuth 2.1, OIDC, SPIFFE/SPIRE, SCIM, NGAC, MCP — is the IAM stack. Every tool in it operates at the permission layer.
The question they raised is the right one. The answer their toolbox offers does not reach it.
This is not a gap to close with finer-grained permissions. This is a category boundary. Permission is a credential property. Intent is an execution property. The first is governed by IAM. The second is governed by what happens between the action and its outcome — and currently, nothing in the standards stack governs that surface.
The Cloud Security Alliance found that 50% of enterprises rely on traditional IAM and RBAC as the primary authorization mechanism for their agents. Half of all organizations deploying autonomous systems are governing them with tools designed for human users clicking through permission prompts.
Permission and intent are not the same object. The compliance architecture treats them as if they were.
The Permission Layer Is Already Broken
Even if the field perfected the permission layer tomorrow, the regulatory exposure would not close. But the permission layer is nowhere near perfected.
Entro Labs' 2025 State of Non-Human Identities and Secrets in Cybersecurity makes the empirical floor visible. For every human identity in the average enterprise, there are 92 non-human identities. The average rotation interval across those identities is 627 days. Over 70% are not rotated within recommended timeframes. 91% of former employee tokens are never revoked. 100% of audited environments contain secrets with more permissions than their workloads require.
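The two failure modes those numbers measure, stale rotation and never-revoked tokens of departed owners, are mechanically simple to detect. A minimal audit sketch, under assumed data shapes and an assumed 90-day rotation window (the inventory format and threshold are illustrative, not from any real secrets manager):

```python
# Hypothetical audit sketch; inventory shape and threshold are illustrative.
from datetime import date

ROTATION_WINDOW_DAYS = 90  # assumed "recommended timeframe"

secrets = [  # toy inventory, not a real secrets-manager API
    {"id": "svc-ci-token", "last_rotated": date(2024, 1, 10), "owner_active": True},
    {"id": "svc-etl-key", "last_rotated": date(2026, 1, 15), "owner_active": True},
    {"id": "ex-employee", "last_rotated": date(2023, 6, 1), "owner_active": False},
]

def audit(secrets, today):
    """Flag secrets past the rotation window and tokens with no active owner."""
    stale = [s["id"] for s in secrets
             if (today - s["last_rotated"]).days > ROTATION_WINDOW_DAYS]
    orphaned = [s["id"] for s in secrets if not s["owner_active"]]
    return stale, orphaned

stale, orphaned = audit(secrets, date(2026, 2, 1))
```

That the check is this simple, and the Entro numbers are still what they are, is the point: the permission layer is failing at tasks it already knows how to perform.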
Not most. Every. One.
That data is a measurement of the permission layer failing on its own terms — before agents, before runtime composition, before any intent question is raised. The IAM apparatus is not delivering the discipline its narrative claims.
Now extend that floor to agents. Every weakness in the permission layer becomes blast radius. The OWASP Non-Human Identity Top 10 (2025) names the failure modes — NHI5 Overprivileged NHI, NHI1 Improper Offboarding, NHI7 Long-Lived Secrets, NHI2 Secret Leakage. Every one of them is a permission-layer pathology that compounds when the entity holding the credential reasons.
A static NHI with overprivileged scope causes one kind of incident. An agent with the same scope causes a different kind. The first acts. The second composes.
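The act-versus-compose contrast can be sketched directly. Both entities below hold the identical scope set; the names and refund amounts are invented for illustration, not drawn from any real incident:

```python
# Hypothetical contrast; scope names and amounts are illustrative.
# Same credential scope, two different kinds of entity holding it.

SCOPE = {"tickets:read", "refunds:issue"}

def static_workload():
    """Traditional NHI: one deterministic action. Permission == intent."""
    return [("refunds:issue", "ticket-4411", 40.00)]  # fixed, reviewable path

def agent(tickets):
    """Agent: composes permitted calls into behavior nobody scoped."""
    plan = []
    for t in tickets:  # each step is individually authorized
        plan.append(("tickets:read", t))
        plan.append(("refunds:issue", t, 500.00))  # amount chosen at runtime
    return plan

# A credential audit sees the same SCOPE either way. Only the second
# entity turns it into an open-ended space of composed outcomes.
```

The overprivileged scope is the same pathology in both cases; what differs is whether the holder can multiply it.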
The structural problem doubles. The compliance documentation describes neither.
What the Regulator Will Ask
Article 14 of the EU AI Act requires effective human oversight of high-risk systems. Effective means the human can understand the system, interpret its output, and intervene in its operation or stop it in a safe state. Article 15 requires the system to achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout the lifecycle. Article 26(6) requires the deployer to retain automatically generated logs for at least six months.
Show us the oversight mechanism. Not the IAM policy. Not the rotation cadence. The mechanism that demonstrates a human could understand what the agent decided and could stop it before the outcome.
Show us the lifecycle controls. Not the credential lifecycle. The behavioral lifecycle.
Show us the logs that prove what the agent did. Not the logs that prove what it was allowed to do.
The IAM logs answer the wrong question. They prove access was authorized. The regulator asks what the authorization produced.
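The two record shapes make the mismatch visible. This is a hypothetical sketch; every field name is illustrative, and nothing here is drawn from the Act's text or a real logging product. It shows only that retention under Article 26(6) is worth what the retained record captures:

```python
# Hypothetical sketch of the two log shapes; field names are illustrative.
# Retention only helps if the retained record captures behavioral state.

iam_log_entry = {  # what most stacks retain today
    "principal": "agent-7",
    "scope": "crm:read",
    "decision": "ALLOW",  # proves access was authorized, nothing more
    "ts": "2026-02-01T09:14:03Z",
}

behavioral_log_entry = {  # the shape the oversight questions require
    "principal": "agent-7",
    "task": "resolve a customer refund request",
    "action": "crm:read -> export of full record set",
    "outcome": "records transmitted to external address",
    "ts": "2026-02-01T09:14:03Z",
}

def answers_the_regulator(entry) -> bool:
    """Can this record say what the authorization produced?"""
    return {"action", "outcome"} <= entry.keys()
```

Six months of the first shape satisfies the retention clause and still cannot answer the question the retention exists to serve.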
The breach narrative shows the same gap. Orchestration platforms have exfiltrated data through authorized channels. Coding agents have propagated through MCP credentials. Customer service agents have committed organizations to refunds outside their assessed scope. None of them required a permission breach. Authorization was clean every time.
The CISO's incident report and the regulator's case file describe the same facts. Neither side has the document the other is asking for.
The Verdict
Identity governance answers a permission question. The EU AI Act asks a behavioral question. The standards body that named the right question publicly is reaching for the IAM toolbox to answer it. The toolbox does not contain the instrument.
This is not a tooling gap. It is a category boundary the compliance architecture has not yet acknowledged.
You can revoke a key. You cannot revoke an interpretation.
Next week: the operational methodology for governing what permission cannot reach.
This article was sharpened by an exchange with Miracle Owolabi, whose distinction between authorized access and authorized access used in an unauthorized direction named the detection problem this piece extends. Ken Huang's "Layer 8" thesis on agentic AI breaking the deterministic boundary remains the architectural framing this piece operates in dialogue with. The federal authors of the NIST NCCoE concept paper — Harold Booth, Bill Fisher, Ryan Galluzzo, and Joshua Roberts — have put the right question on the public record, and the field is better for having it asked there. Entro Labs and the Cloud Security Alliance continue to provide the empirical evidence base any honest analysis of this layer requires.
Sources:
EU AI Act (Regulation (EU) 2024/1689) Articles 14, 15, 26(6); NIST NCCoE Concept Paper "Accelerating the Adoption of Software and AI Agent Identity and Authorization" (Booth, Fisher, Galluzzo, Roberts, February 2026, DRAFT); Cloud Security Alliance, Securing Autonomous AI Agents (January 2026); Entro Labs, 2025 State of Non-Human Identities and Secrets in Cybersecurity; OWASP Non-Human Identity Top 10 (2025); OWASP Agentic AI Threats and Mitigations Top 10.
Regulatory Disclaimer:
This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and related governance frameworks. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification. Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions. Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.
Note on adjacent academic work: For a mapping of the EU compliance perimeter for AI agent providers, see Luca Nannini, Adam Leon Smith, Michele Joshua Maggini, Enrico Panai, Sandra Feliciano, Aleksandr Tiulkanov, Elena Maran, James Gealy, and Piercosma Bisconti, “AI Agents Under EU Law: A Compliance Architecture for AI Providers,” arXiv:2604.04604v1 (April 7, 2026). The paper’s Section 6.1 identifies the non-human identity layer as one dimension of Article 15(4) compliance. This piece extends that observation into the structural distinction between identity governance and intent governance as separate compliance surfaces.


