Governing What Your Agent Does Next
The operational envelope for Agentic AI: four questions, one tripwire, and the only governance framework built for runtime
Executive Summary
Last week’s piece laid out the structural impossibility. Human oversight at machine speed fails on the math. Kill switches fail on propagation speed. The blast perimeter expands before detection fires. Four governance frameworks mandate oversight. None of them account for the speed differential.
This piece delivers the response.
The operational envelope is the only governance architecture that survives enforcement — because it is the only one designed for systems whose behavior cannot be enumerated before runtime. Four questions define the boundary. A tripwire detects departure. A response protocol converts detection into a human decision. A documentation framework makes the whole thing defensible.
Every existing framework that assumes pre-deployment behavioral description requires this architecture underneath. They do not name it. The organizations that build it will be the ones that answer the regulator’s questions. The ones that do not will discover that “we have a human in the loop” is not an answer.
The Comfortable Lie
Here is what the market wants to believe: risk-tiered review solves the oversight problem.
It does not.
Risk-tiered review is the emerging consensus. Route high-consequence actions to a human. Let low-risk operations execute autonomously. Every framework is converging on this pattern. The OWASP State of Agentic AI Security and Governance report calls for it. Singapore’s MGF recommends checkpoints on high-stakes, irreversible, or outlier actions. ForHumanity mandates Human-in-Command with established stop, pause, disregard, override, and reverse processes.
The pattern is architecturally sound. The problem underneath it is unsolved.
Who defines high-consequence? The OWASP report mandates classifying agent actions by risk tier and assigning oversight requirements to each tier. The mandate is correct.
What it leaves unaddressed is the threshold itself: what counts as high-stakes when the agent composed a workflow at runtime that nobody anticipated at assessment time? This is the threshold-definition problem. No framework has solved it.
Risk-tiered review without a defined boundary is a governance fiction. It classifies actions against a threshold that does not exist. The operational envelope is the answer. It does not classify individual actions. It defines the boundary of the entire assessed behavioral space — and treats every departure from that space as a governance event.
The Threshold Nobody Defined
All major governance frameworks share a structural assumption: the provider or deployer can describe what the system does before it operates. Document the intended purpose. Assess risks within those boundaries. Certify against requirements. Monitor for deviation from the documented baseline.
For agentic AI, this assumption is architecturally false.
An agent with access to ten authorized tools across ten chaining steps can compose ten billion possible workflows. The outcome space grows exponentially with every action the agent is permitted to chain. No documentation captures it. No risk assessment bounds it. No monitoring system watches all of it.
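The ten-billion figure is not rhetoric; it is arithmetic. A back-of-envelope sketch (the tool and step counts are the article's own numbers; everything else here is illustrative):

```python
# An agent choosing one of `tools` authorized tools at each of `steps`
# chaining steps can compose tools ** steps distinct workflows
# (ordered choices, repetition allowed).
tools = 10
steps = 10

workflows = tools ** steps
print(f"{workflows:,}")  # 10,000,000,000 -- ten billion

# Each additional chaining step multiplies the space by the tool count,
# which is why the outcome space is exponential in chain depth:
for depth in range(1, steps + 1):
    print(f"depth {depth:2d}: {tools ** depth:,} possible workflows")
```

Even a modest agent outpaces enumeration within a handful of steps, which is the whole point: no pre-deployment document can list what it might do.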
The Pre-Computation Fallacy is the name for this structural failure. The governance specification requires describability. The math does not allow it.
The operational envelope resolves the fallacy — not by attempting to describe the full outcome space, but by defining the subset of behaviors the organization actually assessed. Everything inside the envelope was evaluated, documented, and accepted. Everything outside it is unknown territory.
The envelope is not a fence around the system’s behavior. It is a tripwire inside a defined boundary. When the agent’s behavior crosses that boundary, what happens next is not another automated decision. It is a human judgment about whether the system continues, pauses, or stops.
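One minimal way to make the tripwire concrete. This is a sketch under loud assumptions, not the article's methodology: the names, the modeling of an action as a (tool, resource) pair, and the set-membership check are all illustrative choices.

```python
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    CONTINUE = "continue"
    PAUSE = "pause"
    STOP = "stop"


@dataclass(frozen=True)
class Envelope:
    """The assessed behavioral space: every (tool, resource) pair that
    was evaluated, documented, and accepted before deployment."""
    assessed: frozenset

    def contains(self, action: tuple) -> bool:
        return action in self.assessed


def tripwire(envelope: Envelope, action: tuple, escalate) -> Disposition:
    """Inside the envelope: execute. Outside: halt and hand the
    continue/pause/stop decision to a human, never to another rule."""
    if envelope.contains(action):
        return Disposition.CONTINUE
    return escalate(action)  # human judgment is the response protocol


# Usage: an unassessed (tool, resource) pair trips the boundary.
env = Envelope(assessed=frozenset({("crm.read", "customer_db")}))
inside = tripwire(env, ("crm.read", "customer_db"), lambda a: Disposition.STOP)
outside = tripwire(env, ("email.send", "all_customers"), lambda a: Disposition.PAUSE)
print(inside, outside)
```

The design choice worth noticing: the tripwire never decides anything beyond "assessed or not." The disposition for a crossing is supplied from outside the automation, which is what keeps the boundary a governance event rather than another automated decision.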
This is the architecture the OWASP report calls for when it names risk-tiered review. This is what Article 14 of the EU AI Act means when it requires effective oversight. This is what Singapore’s MGF requires when it mandates checkpoints on high-stakes actions. The frameworks describe the need. The operational envelope is the engineering response.
The full methodology — the four questions that define the envelope, the tripwire detection architecture, the response protocol for boundary crossings, and the documentation framework a CISO can take into a Monday morning meeting — continues below for paid subscribers.



