<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Zero-Day Dawn]]></title><description><![CDATA[Agentic AI governance and EU AI Act enforcement intelligence. Where autonomous systems break the regulation's assumptions - and what to build before the regulator arrives. For leaders who decide, not delegate.]]></description><link>https://www.zerodaydawn.com</link><image><url>https://substackcdn.com/image/fetch/$s_!95_O!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d58aa56-128d-4aea-b0ef-ec7ce4ddefeb_1080x1080.png</url><title>Zero-Day Dawn</title><link>https://www.zerodaydawn.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 10 May 2026 10:38:01 GMT</lastBuildDate><atom:link href="https://www.zerodaydawn.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Quantum Coherence LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[wave@quantumcoherence.ai]]></webMaster><itunes:owner><itunes:email><![CDATA[wave@quantumcoherence.ai]]></itunes:email><itunes:name><![CDATA[Violeta Klein, CISSP, AIGP]]></itunes:name></itunes:owner><itunes:author><![CDATA[Violeta Klein, CISSP, AIGP]]></itunes:author><googleplay:owner><![CDATA[wave@quantumcoherence.ai]]></googleplay:owner><googleplay:email><![CDATA[wave@quantumcoherence.ai]]></googleplay:email><googleplay:author><![CDATA[Violeta Klein, CISSP, AIGP]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Why Agentic AI Breaks Every Existing Governance Framework]]></title><description><![CDATA[The Pre-Computation Fallacy: Five frameworks. One assumption. 
The math breaks all of them.]]></description><link>https://www.zerodaydawn.com/p/why-agentic-ai-breaks-every-existing</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/why-agentic-ai-breaks-every-existing</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Sun, 03 May 2026 13:01:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9fKi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9fKi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9fKi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!9fKi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3310241,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/196132341?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9fKi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!9fKi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b67f09b-2550-481b-b4c2-bf055ead256b_1920x1080.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>Five governance frameworks. Five different organizations. <strong>One shared assumption</strong>.</p><p>The EU AI Act requires providers to document intended purpose before deployment. NIST requires reliability assessment under conditions of expected use. The OWASP Top 10 for Agentic Applications prescribes per-tool restriction profiles. Singapore&#8217;s Model Governance Framework for Agentic AI requires bounding risks and limiting scope of impact at the planning stage. 
ForHumanity CORE AAA mandates pre-deployment specification of scope, nature, context, and purpose.</p><p>Each framework was built independently. Each arrived at the same foundational requirement: <strong>describe what the system will do before the system does it</strong>.</p><p>Agentic AI systems are architecturally designed to determine their own execution paths at runtime. They select tools. They chain actions. They compose workflows nobody anticipated at assessment time. The system described in the compliance documentation is not the system running in production. It stopped being that system the moment the agent made its first autonomous decision.</p><p>This is the <strong>Pre-Computation Fallacy</strong> &#8212; the structural assumption embedded in every major governance framework that behavior is describable before the system operates. It is not a gap to close with better documentation. It is a mathematical constraint that documentation cannot overcome. And every downstream governance artifact &#8212; the risk assessment, the conformity assessment, the human oversight design, the incident response plan &#8212; inherits the flaw the moment the upstream assumption fails.</p><p>The organizations that recognize this will build the governance architecture that survives enforcement. 
The ones that do not will discover during their first regulatory inquiry that their compliance documentation describes a system that no longer exists.</p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe:</p><p>If you document the system&#8217;s behavior thoroughly enough before deployment, you have governed the system.</p><p>This is the foundational promise of every compliance program, every management system, every conformity assessment on the market. Document the intended purpose. Assess risks within those boundaries. Certify against the documented baseline. Monitor for deviation.</p><p>The promise works for deterministic systems. A medical device performs the same function on Tuesday that it performed on Monday. A financial calculation engine produces outputs from a bounded set of inputs. The documentation describes the system. The system matches the documentation. The auditor verifies the match.</p><p><strong>Agentic AI breaks the match.</strong></p><p>The system does not perform the same function on Tuesday that it performed on Monday. It performs a function it composed for itself based on the tools it selected, the data it retrieved, and the action chains it assembled &#8212; none of which were specified at deployment. The documentation describes Monday&#8217;s system. Tuesday&#8217;s system built itself at runtime.</p><p>The compliance market has not absorbed this. The governance industry is selling documentation discipline for systems that outrun documentation by design.</p><div><hr></div><h2>The Math</h2><p>An agent has access to ten authorized tools. 
It can chain actions up to ten steps deep.</p><p>Ten tools. Ten chaining steps. Ten billion possible workflows.</p><p>That is not a metaphor. It is combinatorics. N actions across D chaining steps produce N^D possible compositions. The outcome space grows exponentially with every additional tool or chaining depth the agent is permitted. Even with constraints that reduce the practical space, the growth remains exponential.</p><p>Three tools across three steps: 27 workflows. Documentable.</p><p>Five tools across five steps: 3,125 workflows. Difficult.</p><p>Ten tools across ten steps: 10,000,000,000 workflows. Impossible.</p><p>No quality management system documents ten billion workflows. No risk assessment bounds them. No monitoring system watches all of them. No human reviewer evaluates a meaningful fraction.</p><p>The governance specification requires the organization to describe what the system does. The math does not allow it.</p><p>This is the constraint. It is not a resourcing problem. It is not a tooling gap. It is not a maturity deficit. It is an exponential function operating against a linear governance requirement. More documentation does not help. Better documentation does not help. The space the documentation needs to cover grows faster than any organization can write.</p><p>Every framework that requires pre-deployment behavioral description runs into this wall. None of them name it.</p><div><hr></div><h2>Five Frameworks, One Assumption</h2><p>Each of the five major governance frameworks requires the provider or deployer to describe what the system will do before the system operates. Each breaks on the same structural constraint.</p><h3>EU AI Act</h3><p>The regulation requires providers of high-risk AI systems to document the system&#8217;s intended purpose, its foreseeable conditions of use, and its capabilities and limitations &#8212; before the system is placed on the market. The conformity assessment evaluates this documentation. 
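</p><p><em>Aside:</em> the N^D growth described in &#8220;The Math&#8221; above is easy to verify directly. The sketch below is purely illustrative; it uses the same simplifying assumption as the article, namely that any of the N authorized tools can be invoked at each of the D chaining steps.</p>

```python
# Illustrative only: reproduces the article's N^D upper bound, which assumes
# any of the N authorized tools can be invoked at each of the D chaining steps.
def possible_workflows(n_tools: int, depth: int) -> int:
    return n_tools ** depth

assert possible_workflows(3, 3) == 27                 # documentable
assert possible_workflows(5, 5) == 3_125              # difficult
assert possible_workflows(10, 10) == 10_000_000_000   # ten billion: impossible
```

<p>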
The regulatory architecture assumes that the system described in the documentation is the system that will operate.</p><p>Where it breaks: an agent&#8217;s intended purpose changes at runtime. Tool selection, action chaining, and workflow composition produce operational purposes nobody declared and nobody documented. When the agent composes a workflow that enters an Annex III domain &#8212; creditworthiness assessment, employment screening, law enforcement support &#8212; the classification assessment filed at deployment no longer describes the system in production. The documentation was accurate when it was written. The system it describes no longer exists.</p><p>What it costs: behavior that exceeds the documented scope constitutes a potential substantial modification. The deployer may assume provider obligations for a high-risk system nobody registered.</p><h3>NIST AI Risk Management Framework</h3><p>NIST requires reliability assessment under conditions of expected use. MAP requires documenting the full scope of agent tools and autonomy boundaries. MEASURE requires metrics for trustworthiness characteristics. The framework assumes the conditions of use are knowable in advance.</p><p>Where it breaks: the conditions of use for an agentic system are not knowable in advance. The agent determines its own conditions of use at runtime by composing tool calls into workflows. The expected-use specification describes a subset of what the system can do. The system operates across the full compositional space. The reliability assessment evaluated conditions the system has already departed from.</p><h3>OWASP Top 10 for Agentic Applications</h3><p>OWASP prescribes controls including per-tool restriction profiles &#8212; capability allowlists, schema validation, rate limits. Each tool gets its own security boundary. The approach assumes that securing individual tools secures the workflow.</p><p>Where it breaks: tool-level security does not equal workflow-level security. 
An agent with authorized access to a customer database, a communication API, and a scheduling tool can compose a workflow that sends unsolicited messages to customers using data retrieved from the database, scheduled at times calculated to maximize response rates. Each tool call passes validation individually. The composed workflow was never assessed. The security controls see components. The regulatory obligation covers the composition.</p><p>This is not a criticism of OWASP&#8217;s work &#8212; the Top 10 is the strongest practitioner-facing taxonomy available. It is a structural observation about where per-tool controls reach their architectural limit.</p><h3>Singapore Model Governance Framework for Agentic AI</h3><p>Singapore&#8217;s MGF requires organizations to assess and bound risks upfront and limit the scope of impact at the planning stage. It is the only framework globally that explicitly addresses agentic AI governance. It recommends constraining agents to documented operational boundaries and testing both individual and multi-agent interactions.</p><p>Where it breaks: bounding risks at the planning stage assumes the risks are enumerable at the planning stage. For compositional systems, they are not. The agent&#8217;s impact scope changes with every workflow it composes. The boundary documented at planning time describes the system as planned. 
The system in production assembles its own scope.</p><p>Singapore&#8217;s framework comes closest to acknowledging this &#8212; its emphasis on continuous monitoring and kill-switch capability reflects an awareness that planning-stage controls alone are insufficient. The gap is in the assumption that planning-stage risk bounding can produce the reference frame continuous monitoring needs to monitor against.</p><h3>ForHumanity CORE AAA</h3><p>ForHumanity requires pre-deployment specification of scope, nature, context, and purpose. The Algorithmic Risk Committee establishes monitoring standards and the boundaries within which agent behavior is assessed. The multi-agent governance scheme explicitly states that combining systems beyond the provider&#8217;s pre-determinations makes the deployer a provider of a multi-agent system.</p><p>Where it breaks: the pre-determination requirement assumes the behavioral space can be pre-determined. For agents that compose workflows at runtime, it cannot. The provider pre-determined scope A. The deployer combined it with systems B and C. The agent composed a workflow across all three that nobody pre-determined. The deployer has become a provider of a system whose behavior nobody specified.</p><p>ForHumanity&#8217;s multi-agent governance provision is the most direct acknowledgment of this structural problem in any certification framework. The gap is upstream: the pre-determination requirement that feeds the multi-agent trigger cannot be satisfied for compositional systems.</p><div><hr></div><h2>The Downstream Cascade</h2><p>When the upstream assumption fails, every downstream governance artifact inherits the failure.</p><p>The risk assessment evaluated risks within the documented behavioral space. The agent operates outside it. The risks it encounters were never assessed.</p><p>The conformity assessment certified the system described in the documentation. The system in production is a different system. 
The certification describes Monday&#8217;s system. Tuesday&#8217;s system was composed at runtime.</p><p>The human oversight design was built to oversee the documented workflows. The agent composes workflows the oversight function was never designed to evaluate. The reviewer does not recognize the workflow as outside scope because nobody defined where scope ends.</p><p>The incident response plan was written for anticipated failure modes. The agent produces failure modes nobody anticipated because the compositional space is too large to enumerate. The first time the organization encounters the failure is during the incident.</p><p>Each downstream artifact was built correctly against the upstream specification. The upstream specification does not describe the system that is running.</p><p>This is the cascade. It does not require malice. It does not require negligence. It requires only that an autonomous system did what autonomous systems are designed to do &#8212; determined its own behavior at runtime.</p><div><hr></div><h2>The Convergence</h2><p>The Pre-Computation Fallacy is not only a compliance problem. It is simultaneously a security problem.</p><p>The OWASP Top 10 for Agentic Applications catalogs how agents fail under adversarial pressure &#8212; goal hijacking, tool misuse, identity abuse, memory poisoning. The EU AI Act specifies what providers and deployers must prevent &#8212; unauthorized behavioral drift, inadequate oversight, undocumented modifications. The overlap exists because both frameworks target the same architectural property: behavioral predictability.</p><p>When an agent&#8217;s behavior exceeds the pre-computed space, the security team sees an anomaly. The compliance team sees a potential substantial modification. The incident report and the regulatory case file describe the same facts.</p><p>The security team does not read the regulation. The compliance team does not read OWASP. They are both looking at the same agent. 
Neither has the full picture.</p><p>The Pre-Computation Fallacy is the structural reason they are looking at the same event through different lenses. The assumption that behavior is describable before runtime is the assumption that makes security monitoring possible and the assumption that makes compliance documentation valid. When it fails, it fails for both teams simultaneously.</p><p>One event. Two frameworks. Zero shared vocabulary.</p><p>The organizations that survive enforcement will be the ones that recognized the convergence before the incident forced them to do so.</p><div><hr></div><h2>The Response</h2><p>The Pre-Computation Fallacy does not have a documentation solution. No amount of pre-deployment specification solves an exponential constraint.</p><p>It has an architectural response.</p><p>The operational envelope does not attempt to describe the full compositional space. It defines the subset of behaviors the organization actually assessed &#8212; the bounded region where the risk assessment, the conformity assessment, the human oversight design, and the incident response plan remain valid. Everything inside the envelope was evaluated. Everything outside it is unknown territory.</p><p>The governance mechanism is detection, not prediction. A tripwire inside the boundary. When the agent&#8217;s behavior crosses that boundary, what happens next is a human decision &#8212; not an agent decision and not an automated remediation.</p><p>Four questions define the envelope. A detection architecture makes it operational. A response protocol converts boundary crossings into governance events. A documentation framework makes the whole thing defensible.</p><p>That methodology was published in full in <strong><a href="https://www.zerodaydawn.com/p/governing-what-your-agent-does-next">Governing What Your Agent Does Next</a></strong>, an earlier piece in this newsletter. The architecture is available. 
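</p><p><em>Aside:</em> the tripwire pattern can be reduced to a short sketch. Everything in it is hypothetical: the envelope parameters, the tool names, and the escalation signal are illustrations, not terms defined by any of the frameworks discussed here.</p>

```python
# Hypothetical sketch of an operational-envelope tripwire. All names and
# parameters here are illustrative assumptions, not framework terminology.
from dataclasses import dataclass, field


@dataclass
class OperationalEnvelope:
    """The subset of agent behavior the organization actually assessed."""
    allowed_tools: frozenset          # tools covered by the risk assessment
    max_chain_depth: int              # deepest action chain that was evaluated
    events: list = field(default_factory=list)

    def observe(self, action_chain):
        """Tripwire: detect a boundary crossing; do not auto-remediate."""
        crossings = [f"unassessed tool: {tool}"
                     for tool in action_chain
                     if tool not in self.allowed_tools]
        if len(action_chain) > self.max_chain_depth:
            crossings.append(
                f"chain depth {len(action_chain)} exceeds "
                f"assessed depth {self.max_chain_depth}")
        if crossings:
            # Boundary crossed: record a governance event and escalate to a
            # human decision instead of letting the agent (or an automated
            # remediation) decide what happens next.
            self.events.append({"chain": list(action_chain),
                                "reasons": crossings})
            return "escalate_to_human"
        return "inside_envelope"


envelope = OperationalEnvelope(
    allowed_tools=frozenset({"crm_read", "email_send", "calendar_write"}),
    max_chain_depth=5,
)
print(envelope.observe(["crm_read", "email_send"]))               # inside_envelope
print(envelope.observe(["crm_read", "db_export", "email_send"]))  # escalate_to_human
```

<p>The design point, consistent with the section above: the check detects crossings of an assessed boundary; it never attempts to predict or enumerate the compositional space.</p><p>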
The question is whether organizations build it before the first enforcement inquiry reveals that their compliance documentation describes a system that no longer exists.</p><div><hr></div><h3>The Verdict</h3><p>Five frameworks. Five different organizations. Five different regulatory traditions. One assumption.</p><p>System behavior can be described before the system operates.</p><p>For every system that preceded agentic AI, the assumption held well enough. For agentic AI, it fails &#8212; structurally, mathematically, and operationally.</p><p>The governance frameworks are not wrong to require documentation. They are wrong to assume documentation can capture a compositional space that grows exponentially with every tool the agent is authorized to use.</p><p>The Pre-Computation Fallacy is the name for the gap between what governance requires and what the architecture allows. Every organization deploying agentic AI is operating inside that gap. The ones that name it can build around it. The ones that do not will discover it during enforcement.</p><p><strong>Name it before the regulator does.</strong></p><div><hr></div><p><em><strong><a href="https://www.zerodaydawn.com/">Zero-Day Dawn</a></strong></em> publishes enforcement intelligence on agentic AI governance every Sunday at 4:00 PM EET. If you build, deploy, or govern AI agents &#8212; the gap between what you assume and what survives enforcement is widening every week. Paid subscribers get the full map.</p><div><hr></div><h3>Regulatory Disclaimer</h3><p>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and related governance frameworks. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification. Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions. 
Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</p><h3>Sources</h3><p>EU AI Act (Regulation 2024/1689), Articles 3(23), 6, 9, 11, 14, 15, 25, 43, Annex IV. NIST AI Risk Management Framework 1.0 (January 2023). OWASP Top 10 for Agentic Applications (December 2025). Singapore Model Governance Framework for Agentic AI (IMDA, January 2026). ForHumanity CORE AAA Multi-Agent Governance v1.5 (2026).</p>]]></content:encoded></item><item><title><![CDATA[(Un)governable: Agent Identity vs. Agentic Intent]]></title><description><![CDATA[The credential is bounded. The agent's intent is not. The EU AI Act holds you liable for both.]]></description><link>https://www.zerodaydawn.com/p/ungovernable-agent-identity-vs-agentic</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/ungovernable-agent-identity-vs-agentic</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Sun, 26 Apr 2026 13:02:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2f0Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2f0Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2f0Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 424w, 
https://substackcdn.com/image/fetch/$s_!2f0Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!2f0Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!2f0Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2f0Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/be2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3294206,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/195336836?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!2f0Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!2f0Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!2f0Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!2f0Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbe2b957f-2161-417d-b896-a75dab7257ad_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p style="text-align: justify;">Your security team has scoped the agent's permissions. Least privilege enforced. Service account credentials rotated. RBAC reviewed quarterly. The IAM dashboard shows green. The non-human identity audit passes.</p><p style="text-align: justify;"><strong>The auditor will not ask whether the agent had permission. They will ask what the agent decided to do with it.</strong></p><p style="text-align: justify;">That is the question every governance framework currently struggles to answer &#8212; and it is the question every breach now turns on. The security architecture community has converged on a useful distinction between traditional non-human identity and Agent Identity. NHI is the legacy category &#8212; service accounts and API keys provisioned with fixed scopes that do not change while the credential lives. Agent Identity is the upgrade &#8212; credentials issued for a specific task, cryptographically bound, withdrawn when the task ends. Both sit at the permission layer. <strong>The agent operates one layer above</strong>.</p><p style="text-align: justify;">Permission governs what an entity is allowed to touch. Intent governs what it decides to do with what it touched. The gap between the two is where the regulatory liability sits.</p><p style="text-align: justify;"><strong>The EU AI Act does not distinguish between unauthorized access and authorized access producing an ungoverned outcome</strong>. The obligation attaches to what the system functionally does to people. Article 14 requires effective human oversight. 
Article 15 requires the system to achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout the lifecycle. Article 26(6) requires the deployer to retain logs for at least six months.</p><p style="text-align: justify;"><strong>None of these obligations are satisfied by an IAM attestation.</strong></p><p style="text-align: justify;">Permission attests that access was scoped. The regulator asks what the access produced.</p><p style="text-align: justify;">This piece shows where the distinction lives, why permission discipline does not close it, and what the regulator and the breach are asking now.</p><div><hr></div><h2>The Comfortable Lie</h2><p style="text-align: justify;">Here is what the market wants to believe:</p><p style="text-align: justify;">If you scope the agent's permissions tightly enough, you have governed the agent.</p><p style="text-align: justify;">NHI vendors sell tighter scopes. IAM platforms sell ephemeral tokens. Identity teams sell zero-trust architectures. The pitch is the same everywhere &#8212; bound the credential, bound the agent.</p><p style="text-align: justify;">This is the comfortable lie. It persists because the alternative is harder.</p><p style="text-align: justify;">The alternative requires admitting that permission and intent are different governance surfaces. Permission is the lock. Intent is the hand. The lock decides what the hand can reach. It does not decide what the hand does once it is inside.</p><p style="text-align: justify;">Every NHI compliance attestation in production today describes permission state. The regulator and the breach both ask about behavioral state. The attestation cannot answer either.</p><div><hr></div><h2>The Distinction</h2><p style="text-align: justify;">The security architecture community has done the conceptual work. 
Traditional NHI was built for service accounts that did one thing &#8212; a workload with a static credential, a coarse-grained scope, a deterministic action. The permission was the intent. They were the same object.</p><div class="pullquote"><p style="text-align: justify;"><strong>Agents break that coupling. Same identity. Same scope. Unbounded space of intents.</strong></p></div><p style="text-align: justify;">NIST has put this question on the public record. In its February 2026 NCCoE concept paper on agent identity and authorization, the federal authors ask, in plain language, how an agent might convey the intent of its actions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. They list it alongside authentication, key management, and least-privilege as one of the open problems the standards stack does not yet solve.</p><p style="text-align: justify;">Their proposed toolbox &#8212; OAuth 2.1, OIDC, SPIFFE/SPIRE, SCIM, NGAC, MCP &#8212; is the IAM stack. Every tool in it operates at the permission layer.</p><p style="text-align: justify;">The question they raised is the right one. The answer their toolbox offers does not reach it.</p><p style="text-align: justify;">This is not a gap to close with finer-grained permissions. This is a category boundary. Permission is a credential property. Intent is an execution property. The first is governed by IAM. The second is governed by what happens between the action and its outcome &#8212; and currently, nothing in the standards stack governs that surface.</p><p style="text-align: justify;">The Cloud Security Alliance found that 50% of enterprises rely on traditional IAM and RBAC as the primary authorization mechanism for their agents. 
Half of all organizations deploying autonomous systems are governing them with tools designed for human users clicking through permission prompts.</p><div class="pullquote"><p style="text-align: justify;"><strong>Permission and intent are not the same object. The compliance architecture treats them as if they were.</strong></p></div><div><hr></div><h2>The Permission Layer Is Already Broken</h2><p style="text-align: justify;">Even if the field perfected the permission layer tomorrow, the regulatory exposure would not close. But the permission layer is nowhere near perfected.</p><p style="text-align: justify;">Entro Labs' 2025 State of Non-Human Identities and Secrets in Cybersecurity makes the empirical floor visible. For every human identity in the average enterprise, there are 92 non-human identities. The average rotation interval across those identities is 627 days. Over 70% are not rotated within recommended timeframes. 91% of former employee tokens are never revoked. 100% of audited environments contain secrets with more permissions and access authorization than necessary.</p><p style="text-align: justify;"><strong>Not most. Every. One.</strong></p><p style="text-align: justify;">That data is a measurement of the permission layer failing on its own terms &#8212; before agents, before runtime composition, before any intent question is raised. The IAM apparatus is not delivering the discipline its narrative claims.</p><p style="text-align: justify;">Now extend that floor to agents. Every weakness in the permission layer becomes blast radius. The <strong>OWASP Non-Human Identity Top 10 (2025)</strong> names the failure modes &#8212; NHI5 Overprivileged NHI, NHI1 Improper Offboarding, NHI7 Long-Lived Secrets, NHI2 Secret Leakage. Every one of them is a permission-layer pathology that compounds when the entity holding the credential reasons.</p><p style="text-align: justify;">A static NHI with overprivileged scope causes one kind of incident. 
An agent with the same scope causes a different kind. <strong>The first acts. The second composes.</strong></p><p style="text-align: justify;">The structural problem doubles. The compliance documentation describes neither.</p><div><hr></div><h2>What the Regulator Will Ask</h2><p style="text-align: justify;">Article 14 of the EU AI Act requires effective human oversight of high-risk systems. <strong>Effective means the human can understand the system, interpret its output, and intervene in its operation or stop it in a safe state</strong>. Article 15 requires the system to achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout the lifecycle. Article 26(6) requires the deployer to retain automatically generated logs for at least six months.</p><p style="text-align: justify;"><strong>Show us the oversight mechanism</strong>. Not the IAM policy. Not the rotation cadence. The mechanism that demonstrates a human could understand what the agent decided and could stop it before the outcome.</p><p style="text-align: justify;"><strong>Show us the lifecycle controls</strong>. Not the credential lifecycle. The behavioral lifecycle.</p><p style="text-align: justify;"><strong>Show us the logs that prove what the agent did</strong>. Not the logs that prove what it was allowed to do.</p><p style="text-align: justify;"><strong>The IAM logs answer the wrong question</strong>. They prove access was authorized. <strong>The regulator asks what the authorization produced</strong>.</p><p style="text-align: justify;">The breach narrative shows the same gap. Orchestration platforms have exfiltrated data through authorized channels. Coding agents have propagated through MCP credentials. Customer service agents have committed organizations to refunds outside their assessed scope. None of them required a permission breach. 
Authorization was clean every time.</p><p style="text-align: justify;">The CISO's incident report and the regulator's case file describe the same facts. Neither side has the document the other is asking for.</p><div><hr></div><h2>The Verdict</h2><p style="text-align: justify;">Identity governance answers a permission question. The EU AI Act asks a behavioral question. The standards body that named the right question publicly is reaching for the IAM toolbox to answer it. The toolbox does not contain the instrument.</p><p><strong>This is not a tooling gap. It is a category boundary the compliance architecture has not yet acknowledged.</strong></p><div class="pullquote"><p><strong>You can revoke a key. You cannot revoke an interpretation.</strong></p></div><p style="text-align: justify;">Next week: the operational methodology for governing what permission cannot reach.</p><div><hr></div><p style="text-align: justify;">This article was sharpened by an exchange with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Miracle Owolabi&quot;,&quot;id&quot;:248259915,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3e63279-44e0-4430-a8e0-970092738f04_1399x1399.jpeg&quot;,&quot;uuid&quot;:&quot;91d7445e-00de-45f3-acec-a7b4ad441029&quot;}" data-component-name="MentionToDOM"></span>, whose distinction between authorized access and authorized access used in an unauthorized direction named the detection problem this piece extends. 
<span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Ken Huang&quot;,&quot;id&quot;:1160339,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3d670301-204b-472e-a2ee-bbb1b7633a99_2026x2026.png&quot;,&quot;uuid&quot;:&quot;68ce051f-47bf-447d-b226-7beff9521222&quot;}" data-component-name="MentionToDOM"></span>&#8217;s &#8220;Layer 8&#8221; thesis on agentic AI breaking the deterministic boundary remains the architectural framing this piece operates in dialogue with. The federal authors of the NIST NCCoE concept paper &#8212; Harold Booth, Bill Fisher, Ryan Galluzzo, and Joshua Roberts &#8212; have put the right question on the public record, and the field is better for having it asked there. Entro Labs and the Cloud Security Alliance continue to provide the empirical evidence base any honest analysis of this layer requires.</p><div><hr></div><h4 style="text-align: justify;">Sources: </h4><p style="text-align: justify;">EU AI Act (Regulation (EU) 2024/1689) Articles 14, 15, 26(6); NIST NCCoE Concept Paper "Accelerating the Adoption of Software and AI Agent Identity and Authorization" (Booth, Fisher, Galluzzo, Roberts, February 2026, DRAFT); Cloud Security Alliance, Securing Autonomous AI Agents (January 2026); Entro Labs, 2025 State of Non-Human Identities and Secrets in Cybersecurity; OWASP Non-Human Identity Top 10 (2025); OWASP Agentic AI Threats and Mitigations Top 10. </p><div><hr></div><h4>Regulatory Disclaimer: </h4><p style="text-align: justify;">This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and related governance frameworks. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification. Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions. 
Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p style="text-align: justify;">Note on adjacent academic work: For a mapping of the EU compliance perimeter for AI agent providers, see Luca Nannini, Adam Leon Smith, Michele Joshua Maggini, Enrico Panai, Sandra Feliciano, Aleksandr Tiulkanov, Elena Maran, James Gealy, and Piercosma Bisconti, &#8220;AI Agents Under EU Law: A Compliance Architecture for AI Providers,&#8221; arXiv:2604.04604v1 (April 7, 2026). The paper&#8217;s Section 6.1 identifies the non-human identity layer as one dimension of Article 15(4) compliance. This piece extends that observation into the structural distinction between identity governance and intent governance as separate compliance surfaces.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The AI Decision Your Board Got Half Right]]></title><description><![CDATA[What happens when the AI investment that impressed the board meets the regulator who doesn't care about your ROI.]]></description><link>https://www.zerodaydawn.com/p/the-ai-decision-your-board-got-half</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/the-ai-decision-your-board-got-half</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Sun, 19 Apr 2026 13:02:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5vsv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!5vsv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5vsv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5vsv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3302807,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5vsv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!5vsv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b47df51-a656-4847-96b4-8bdb0e123d36_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This piece has two authors because the problem it describes sits in a gap between two worlds that rarely talk to each other.</em></p><p><em><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;ddc88bcf-8890-4f4f-ba7f-b560a95bc4d8&quot;}" data-component-name="MentionToDOM"></span> has spent eighteen years inside the rooms where AI deployment decisions get made &#8212; at McKinsey and Standard Chartered, working with boards, CXOs, and PE operating partners on the tradeoffs that determine whether AI programs create value or stall. 
She writes from inside the business architecture. </em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4G2D!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4G2D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 424w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 848w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 1272w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4G2D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png" width="1387" height="715" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:715,&quot;width&quot;:1387,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:566043,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4G2D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 424w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 848w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 1272w, https://substackcdn.com/image/fetch/$s_!4G2D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb69f739b-0df9-48b9-b163-a1c67eca3910_1387x715.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nehakabra1.substack.com/?utm_campaign=profile_chips&quot;,&quot;text&quot;:&quot;Subscribe to Neha&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://nehakabra1.substack.com/?utm_campaign=profile_chips"><span>Subscribe to Neha</span></a></p><p><em><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;b3b64094-9d7e-4382-804d-eb57c3d09a43&quot;}" data-component-name="MentionToDOM"></span> writes from the other side of the 
table. Where the governance the board approved meets the regulator who wasn&#8217;t in the room. Where the documentation that satisfied the risk committee fails the enforcement examination it was never designed for.</em></p><p><em>We kept running into the same pattern from opposite directions. CXOs building governance that passes the board but wouldn&#8217;t survive a regulator. Compliance teams building frameworks that satisfy the regulation but have no connection to how the business actually makes decisions. Two architectures. Same organization. No shared vocabulary.</em></p><p><em>Neither of us could write this piece alone.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/p/the-ai-decision-your-board-got-half?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/p/the-ai-decision-your-board-got-half?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><h3>In this article</h3><ol><li><p><strong>The Decision That Starts It All</strong> &#8212; How a board approves an AI credit decisioning deployment &#8212; and what it actually creates</p></li><li><p><strong>What the Regulator Sees</strong> &#8212; The same deployment examined from the enforcement side</p></li><li><p><strong>The Five Governance Tradeoffs</strong> &#8212; Five decisions every board makes that create regulatory exposure</p></li><li><p><strong>The Ownership Gap</strong> &#8212; Four functions, four partial views, no single owner</p></li><li><p><strong>What a Defensible Architecture Looks Like</strong> &#8212; Meeting the standard and building something that works</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!-Rlu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Rlu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 424w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 848w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 1272w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-Rlu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png" width="1456" height="1807" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1807,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3419314,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-Rlu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 424w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 848w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 1272w, https://substackcdn.com/image/fetch/$s_!-Rlu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe252b-b136-4613-a3f5-b1d7783a3c9f_1856x2304.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div></li></ol><div><hr></div><h2>1. The Decision That Starts It All</h2><h3><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;f13de74f-2ab1-4a21-8cbf-fea9925efcda&quot;}" data-component-name="MentionToDOM"></span> </h3><p>Credit decisioning is where a retail bank makes money and where it creates liability. When AI runs that process at scale, the board isn&#8217;t approving a technology investment. 
It&#8217;s approving a system that will make millions of decisions about people&#8217;s access to credit &#8212; with the bank&#8217;s operating license sitting behind every one of them.</p><div class="pullquote"><p><strong>&#8220;The board sees a business case. A market opportunity. A cost reduction. A competitive necessity. They approve it the way they approve most technology investments &#8212; on the strength of the return. The liability lasts the life of the system.&#8221;</strong></p></div><h3>What AI actually changes</h3><p>Analytics-led credit decisioning has been standard practice in retail banking for over a decade. Scorecards, risk models, digital intake &#8212; the infrastructure existed long before AI entered the conversation. What AI changes is not the data. It&#8217;s who handles the steps between the data and the decision.</p><p>Traditional decisioning moved information between humans. AI moves decisions between systems &#8212; with humans inserted at specific points rather than present throughout. 
That distinction is where the exposure sits.</p><h3>The six steps &#8212; and where AI changes the equation</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PVpY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PVpY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 424w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 848w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 1272w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PVpY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png" width="1456" height="1807" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1807,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2423559,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PVpY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 424w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 848w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 1272w, https://substackcdn.com/image/fetch/$s_!PVpY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefec8676-5b8a-46c3-92ad-8f491a0cba9e_1856x2304.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h3>The step that matters most &#8212; and why it&#8217;s the hardest</h3><p>Step 3 is where the exposure concentrates. AI synthesizes data, summarizes financials, and produces a credit memo faster than any analyst. The problem is that AI is probabilistic. Run the same case twice and you can get two different memos. The output always looks authoritative. It is not always accurate.</p><p>The human approver at Step 4 relies on that memo to make a final decision. If something is wrong and it looks right, the override function exists on paper but not in practice. The polish of AI output is not a feature in a credit context. It is a risk that has to be designed around before deployment &#8212; not discovered at enforcement.</p><h3>The question the board didn&#8217;t ask</h3><p>The board approved the system. 
They approved the business case, the risk framework, and the governance structure that sat behind it. What they didn&#8217;t approve &#8212; because nobody asked &#8212; is whether any of that would satisfy the person who wasn&#8217;t in the room.</p><div><hr></div><h2>2. What the Regulator Sees</h2><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;05b574e4-38a0-4588-afc6-144953b00b67&quot;}" data-component-name="MentionToDOM"></span> </h4><p>The same deployment. Different table.</p><p>The regulator does not ask whether the investment was sound. They do not ask about the competitive landscape, the board&#8217;s risk appetite, or the deployment timeline. They ask whether the system is safe to operate &#8212; and whether the organization can prove it.</p><p>A retail bank deploys an AI credit decisioning system. The system evaluates loan applications, scores applicants, and generates recommendations that relationship managers act on. The business case was compelling. 
The board approved it. The regulator arrives and opens a different file.</p><p><strong>Show us the intended purpose documentation.</strong></p><p>The regulation requires a description of what the system does, what it was designed to do, and the boundaries of its operation &#8212; documented before deployment. The bank&#8217;s documentation describes &#8220;an AI-assisted credit assessment tool.&#8221;</p><p>The regulator asks what &#8220;assisted&#8221; means operationally. Does the system recommend, or does it decide? Does the relationship manager review every output, or only exceptions? If the system scores an applicant and the manager follows the score 95% of the time, the system is not assisting. It is directing.</p><p>The documentation says one thing. The operational reality says another.</p><p><strong>Show us the risk assessment.</strong></p><p>A system that evaluates the creditworthiness of natural persons operates in high-risk territory under the EU AI Act. That classification is not discretionary &#8212; it follows from the system&#8217;s function in a regulated domain. The regulation requires a risk management system that identifies known and reasonably foreseeable risks, conducted before deployment and maintained throughout the lifecycle.</p><p>The bank&#8217;s risk assessment was conducted once, by the model risk team, using the framework they use for all models. It evaluated accuracy, bias, and performance metrics. It did not evaluate whether the system&#8217;s outputs materially influence decisions about people&#8217;s access to financial services &#8212; which is the question the regulation actually asks.</p><p>The assessment answered the bank&#8217;s question. It did not answer the regulator&#8217;s.</p><p><strong>Show us the conformity assessment.</strong></p><p>High-risk AI systems require a conformity assessment before placement on the market. 
For credit decisioning systems, this is a self-assessment &#8212; but self-assessment does not mean self-certification. It means the organization must document that it has verified, against specific regulatory requirements, that the system meets the standard.</p><p>Model validation asks one question: does the model perform as intended? Conformity assessment asks a fundamentally different set of questions: does the system comply with the regulation&#8217;s requirements for risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity?</p><p>One question. Seven requirements. Model validation answers the first. The regulation asks all of them.</p><p><strong>Show us the human oversight mechanism.</strong></p><p>The regulation requires human oversight by designated, competent personnel with the authority and ability to intervene. The bank&#8217;s process assigns oversight to the relationship managers who use the system daily.</p><p>The relationship managers were not trained on the system&#8217;s limitations. They were not given the authority to override the system&#8217;s recommendations without escalation. They do not have visibility into why the system scored an applicant the way it did.</p><p>Oversight exists on the organizational chart. It does not exist in practice.</p><div class="pullquote"><p><em><strong>&#8220;The regulator does not audit the chart. They audit the practice.&#8221;</strong></em></p></div><p><strong>Show us how you would know if the system changed.</strong></p><p>This is the question that separates governance built for the boardroom from governance that survives enforcement.</p><p>The credit decisioning system was assessed at deployment. The documentation describes the system as it was designed. But the system in production is not static. Its recommendations shift as the underlying patterns shift. 
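<p>One way to make that shift visible is a population stability index (PSI) check: compare the score distribution recorded at conformity assessment with the distribution the system produces today. The sketch below is a minimal illustration; the buckets are invented, and the 0.10 and 0.25 thresholds are industry conventions, not values the regulation defines. Whether a flagged shift amounts to a substantial modification remains a legal judgement this check can only surface, not make.</p>

```python
import math

def psi(baseline_counts, current_counts):
    """Population Stability Index between two score-bucket distributions.

    Buckets and counts here are illustrative assumptions, not data from
    any real deployment.
    """
    total_b = sum(baseline_counts)
    total_c = sum(current_counts)
    value = 0.0
    for b, c in zip(baseline_counts, current_counts):
        pb = max(b / total_b, 1e-6)  # floor avoids log(0) on empty buckets
        pc = max(c / total_c, 1e-6)
        value += (pc - pb) * math.log(pc / pb)
    return value

# Applications per score bucket at deployment (baseline) vs. this quarter.
baseline = [120, 340, 510, 420, 110]
current = [60, 180, 400, 620, 240]

score = psi(baseline, current)
if score >= 0.25:  # conventional "major shift" threshold
    print(f"PSI={score:.2f}: escalate for substantial-modification review")
elif score >= 0.10:  # conventional "monitor" threshold
    print(f"PSI={score:.2f}: document the shift and keep watching")
else:
    print(f"PSI={score:.2f}: stable against the assessed baseline")
```

<p>A check like this does not make the system compliant. It creates the detection mechanism, the flag, and a natural owner for the question the regulator asks.</p>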
If the system&#8217;s behavior drifts enough that it functionally changes what it does &#8212; scoring applicants on criteria that were never assessed, weighting factors that were never documented &#8212; that drift constitutes a potential substantial modification under the regulation.</p><p>A substantial modification triggers a new conformity assessment.</p><p>The bank has no mechanism to detect it. No process to flag it. No owner responsible for watching.</p><div class="pullquote"><p><em><strong>&#8220;The governance the board approved was designed to launch the system. It was never designed to follow it.&#8221;</strong></em></p></div><div><hr></div><h2>3. The Five Governance Tradeoffs That Create Regulatory Exposure</h2><p>Five governance decisions every board makes on AI deployment. Each one creates a regulatory exposure the board never priced in.</p><h3>3a. Cost of Governance</h3><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;7e6ccc77-0bc9-4880-b5c8-f3470a185051&quot;}" data-component-name="MentionToDOM"></span> </h4><p>The regulation does not price governance as an optional investment. It prices it as a legal minimum.</p><p>A high-risk AI system under the EU AI Act requires, before deployment: a risk management system maintained throughout the lifecycle. Technical documentation describing the system&#8217;s purpose, capabilities, limitations, and performance. A data governance framework ensuring training data meets quality criteria. Human oversight by designated, competent personnel. Accuracy, robustness, and cybersecurity measures maintained continuously. Post-market monitoring. 
Automatic logging of system operations.</p><p>That is not a governance framework layered on top of a deployment. It is a parallel infrastructure that must exist before the system goes live. The cost of not building it is statutory: up to &#8364;15 million or 3% of global annual turnover.</p><p>The gap between what the board funded and what the regulation requires is where the exposure sits.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;6aad899a-be43-4018-a9fc-fcc4243a7de6&quot;}" data-component-name="MentionToDOM"></span> </h4><p><strong>Governance operates across three lines &#8212; and the cost of getting each one wrong is different.</strong></p><p>The first line is the relationship manager using the workbench to submit a credit proposal &#8212; the point where AI output meets human judgement for the first time. If the RM can&#8217;t challenge what the system produces, the first line isn&#8217;t functioning. It&#8217;s performing.</p><p>The second line is the risk officer reviewing what the first line produces. Their effectiveness depends entirely on the quality of what comes from below. Reviewing AI-generated credit memos without visibility into how they were produced is signing off on a process you can&#8217;t actually see.</p><p>The third line is the board and the technology stack behind both &#8212; the systematic auditability that makes the first two lines defensible when the regulator arrives.</p><p>Perfect governance across all three lines isn&#8217;t achievable in the near term. The CXO&#8217;s decision isn&#8217;t perfect versus imperfect. 
It&#8217;s where to strengthen first &#8212; and how to build a trajectory the regulator can follow.</p><p>Governance cost isn&#8217;t a line item problem. It&#8217;s a lines of defense problem. Fund the first line properly, and the rest of the architecture has something real to build on.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jqNc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jqNc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jqNc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png" width="1456" height="1808" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1801470,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jqNc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!jqNc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8004baf4-02a5-404b-a170-cdddd5094de3_1649x2048.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>3b. Reputational Risk</h3><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;d04dd1ab-4b78-4546-aa47-2330e324b30e&quot;}" data-component-name="MentionToDOM"></span> </h4><p>Boards frame AI risk as reputational exposure. The governance model gets built around a single question: how bad could this look?</p><p>The regulation asks a different question entirely. It prices non-compliance, not reputation. The penalty framework operates independently of whether anyone noticed, whether the media reported it, whether customers complained. 
A system operating in a high-risk domain without conformity assessment is non-compliant whether or not it produces a headline.</p><p>The more dangerous inversion: a system that works flawlessly and generates no complaints can still be operating illegally. If nobody assessed whether a credit decisioning system constitutes a high-risk AI system under the regulation, the system&#8217;s operational success is irrelevant to its compliance status.</p><p>The board optimized for the wrong risk. The reputational risk they priced was the one where the system fails publicly. The regulatory risk they missed was the one where the system works perfectly &#8212; in a domain nobody classified.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;093c7e65-ee28-47fe-bce5-cad1c2e75f3f&quot;}" data-component-name="MentionToDOM"></span> </h4><p><strong>Reputational risk lands on the business &#8212; regardless of where the governance failure originated.</strong></p><p>Reputational risk in financial services almost always gets reported at the business unit level &#8212; not the risk function level. When a mortgage portfolio deteriorates, the headline reads &#8220;Bank X&#8217;s retail lending division posts $200 million loss.&#8221; Not &#8220;Bank X&#8217;s risk committee missed a model assumption.&#8221;</p><p>The 2008 subprime crisis made this pattern permanent. Countrywide&#8217;s mortgage business became synonymous with the crisis. Bank of America&#8217;s mortgage unit absorbed tens of billions in losses in the years following its acquisition. 
The business took the hit &#8212; publicly, permanently, and regardless of where the governance failure actually originated inside the organization.</p><p>For the head of retail banking deploying AI in credit decisioning today, that history is directly relevant. The reputational exposure doesn&#8217;t sit with the function that approved the governance framework. It sits with the function whose name is on the product.</p><p><strong>The investment calculus changes when you own the consequences.</strong></p><p>The question isn&#8217;t whether governance is the business leader&#8217;s responsibility &#8212; formally, it often isn&#8217;t. The question is whether the business leader is prepared to own the consequences when something goes wrong at scale. Because the consequences will land on the business, not on the framework.</p><p>The practical trade-off isn&#8217;t perfect governance versus no governance. It&#8217;s building a defensible first line now &#8212; even imperfectly &#8212; versus carrying undisclosed exposure on a book that, if it deteriorates, will be reported under your division&#8217;s name.</p><div class="pullquote"><p><em><strong>&#8220;Invest in governance because it protects the business. Not just because the regulation requires it.&#8221;</strong></em></p></div><h3>3c. 
Model Risk Management</h3><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;dde519c3-190d-42b3-9d97-c15de151d4c3&quot;}" data-component-name="MentionToDOM"></span> </h4><p>Financial services organizations have decades of model risk management infrastructure. Validation frameworks. Backtesting. Governance committees. The assumption is natural: existing model risk frameworks cover AI.</p><p>They cover the model. They do not cover the system.</p><p>Model risk management asks whether the model performs as intended. The EU AI Act asks whether the system &#8212; not the model, the system &#8212; makes decisions about people in a way that requires regulatory oversight. A credit scoring model that passes validation can still be operating as an unregistered high-risk AI system if nobody assessed whether it falls within the regulation&#8217;s scope, nobody documented the intended purpose at the system level, and nobody built the human oversight mechanism the regulation requires.</p><p>Model validation and regulatory conformity assessment are not the same process, do not ask the same questions, and do not produce the same evidence.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;91665120-acfb-43be-9e5a-eb25354506c4&quot;}" data-component-name="MentionToDOM"></span> </h4><p><strong>Tiered model routing: governance architecture, not just cost management.</strong></p><p>A well-governed financial services 
organization deploying AI in credit decisioning doesn&#8217;t run every case through the same model. It builds a tiered routing architecture &#8212; simpler, lower-risk applications processed through faster, less expensive models; complex cases escalated to more capable ones; the highest-sensitivity decisions, perhaps the top five percent, routed to the most capable model available. This isn&#8217;t cost optimization alone. It&#8217;s a documented decision logic for which model touches which decision &#8212; an auditable governance architecture, not just a deployment choice.</p><p><strong>Model checks model: building quality control into the process.</strong></p><p>The second layer is quality control. Because AI is probabilistic &#8212; the same case can produce a different credit memo on every run &#8212; a well-governed deployment uses one model to check another model&#8217;s output before it reaches the human approver. This doesn&#8217;t eliminate variance. It creates a systematic check on it before the output influences a consequential decision.</p><p><strong>Human in the loop: the trigger has to be jointly owned.</strong></p><p>The third layer is the human in the loop. But here is where most deployments get it wrong. The trigger that routes a case to human review &#8212; the definition of what constitutes an edge case &#8212; is typically set by the model risk team in isolation, using technical parameters that have no connection to what a genuinely complex credit decision looks like in practice.</p><p>The business knows what a hard case looks like. The risk function knows where the model&#8217;s failure modes sit. Neither can define the human review trigger alone without creating a gap. 
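<p>What a jointly owned trigger can look like in code, as a hedged sketch: every criterion and threshold below is an illustrative assumption, not a figure from the regulation or from any bank&#8217;s policy. The structural point is that risk-owned and business-owned conditions live in one documented function, so the definition of an edge case is auditable rather than buried in a model parameter.</p>

```python
def needs_human_review(case: dict) -> tuple[bool, list[str]]:
    """Route a credit case to human review, with documented reasons.

    All thresholds are illustrative placeholders for values the business
    and the risk function would calibrate together.
    """
    reasons = []

    # Risk-function criteria: where the model's known failure modes sit.
    if case["model_confidence"] < 0.70:
        reasons.append("low model confidence")
    if case["data_completeness"] < 0.90:
        reasons.append("sparse applicant data")

    # Business criteria: what a genuinely hard credit case looks like.
    if case["exposure_eur"] > 250_000:
        reasons.append("exposure above business threshold")
    if case["self_employed"] and case["trading_history_years"] < 3:
        reasons.append("thin-file self-employed applicant")

    return (len(reasons) > 0, reasons)

# A case the model scores confidently can still be routed to a human
# on a business criterion the model risk team would never have set alone.
route, why = needs_human_review({
    "model_confidence": 0.82,
    "data_completeness": 0.95,
    "exposure_eur": 400_000,
    "self_employed": False,
    "trading_history_years": 10,
})
print(route, why)
```

<p>Either side can recalibrate its own thresholds, but every change lands in one reviewed function, which is what makes the trigger defensible when the regulator asks who defined it.</p>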
A jointly owned definition &#8212; business and risk sitting in the same room, calibrating the trigger against real operational context &#8212; is what makes the human oversight mechanism function in practice rather than just on paper.</p><p>The governance architecture is only as strong as the collaboration that defined its boundaries.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LJi2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LJi2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LJi2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png" width="1456" height="1808" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1774922,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LJi2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!LJi2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c46a297-d298-45fd-a6f4-b50ddbb279f7_1649x2048.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>3d. Scaling Governance for Repeatable Use Cases</h3><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;966eb09c-ac9a-45f1-8579-8e097b46f38b&quot;}" data-component-name="MentionToDOM"></span> </h4><p>One governance template. Standardized risk assessments. Reusable compliance artifacts. Governance-as-a-platform. The CFO&#8217;s dream.</p><p>The regulation requires a risk assessment for each system, not each template. Scaling governance through standardization creates documentation that describes a generic system. 
The regulator examines the specific system &#8212; the specific data it processes, the specific decisions it influences, the specific population it affects, the specific operational context it operates in.</p><p>A templated assessment that says &#8220;this system processes personal data in a financial services context&#8221; does not satisfy a regulation that asks &#8220;what specific risks does this specific system pose to the specific natural persons whose creditworthiness it evaluates?&#8221;</p><p>Templated governance looks efficient internally. It looks like a gap externally.</p><p>The insurance parallel makes the pattern visible. A life and health insurance risk assessment system deployed across multiple product lines with the same governance template faces the same structural problem. Each product line affects a different population, processes different health data, and operates under different actuarial assumptions. One template cannot document risks it was never designed to differentiate.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;230008ce-2c8d-4206-bcb7-76624eeba04b&quot;}" data-component-name="MentionToDOM"></span> </h4><p><strong>Templates are not the problem &#8212; missing context is.</strong></p><p>Templates have always existed in financial services. The relationship manager workbench, the credit assessment process, the underwriting checklist &#8212; standardized steps and documented workflows are not new. They are not the problem. In an AI-first world, the differentiator is not whether you have a template. 
It is whether the AI system has a deep enough understanding of business intent and context to differentiate between cases that look identical on the surface but require completely different responses.</p><p><strong>The contact center test: same classification, different intent.</strong></p><p>A contact center handling bill payment calls illustrates the point. Every call carries the same surface classification &#8212; bill payment. But the intent behind each call is different. A customer calling because their limit is exceeded needs a different resolution path than one calling because digital authentication failed, or because they need to register a new payee, or because they want to query a charge. If the AI system routes every call on the surface classification, it is technically functioning and operationally failing. The wrong solution gets deployed at scale &#8212; consistently, repeatedly, and invisibly.</p><p><strong>In credit decisioning, outcome without context is not a decision.</strong></p><p>Extrapolate that to credit decisioning. A credit application can be declined for twenty different reasons. The right next step &#8212; refer to a specialist, request additional documentation, trigger a manual review, offer an alternative product &#8212; depends on which reason applies. An AI system that classifies the outcome without understanding the causal context will produce a decision. It will not produce the right decision.</p><p>Building a repeatable AI capability requires investing in the context layer first &#8212; the shared, aligned understanding of business logic that allows the system to differentiate at the intent level, not just the classification level. That understanding cannot be templated. It has to be built, validated, and documented specifically for each deployment context.</p><p>A template governs the process. Only context governs the outcome.</p><h3>3e. ROI vs. 
Governance Tradeoffs</h3><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;eaa0160f-fd8a-4287-b994-396a36cb6bb3&quot;}" data-component-name="MentionToDOM"></span> </h4><p>The board-level tension is structural: governance slows deployment, deployment drives returns. The pressure to defer governance to &#8220;phase two&#8221; is real, rational, and dangerous.</p><div class="pullquote"><p><em><strong>&#8220;There is no phase two.&#8221;</strong></em></p></div><p>Regulatory obligations attach the moment the system is deployed. A high-risk AI system placed on the EU market without conformity assessment is non-compliant from day one &#8212; not from the day the organization gets around to completing the assessment. Deferring governance does not defer liability. It creates undocumented liability that compounds with every day the system operates.</p><p>The business case priced the upside. Nobody told the board the downside had a statutory price tag.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;4089df4c-9408-49b7-91b5-f63fa78189c4&quot;}" data-component-name="MentionToDOM"></span> </h4><p><strong>Governance gets deferred because it arrives too late in the wrong room.</strong></p><p>The reason governance gets deferred is structural, not intentional. The business case is built by the team that owns the ROI. 
Governance cost sits in a different function, with a different budget and a different reporting line. By the time the governance estimate arrives, the business case is already approved and the deployment already sequenced. Governance becomes a retrofit. </p><div class="pullquote"><p><strong>&#8220;Retrofits are always more expensive than builds.&#8221;</strong></p><p>The practical fix is uncomfortable: governance has to be in the room when the business case is being written. Not reviewed after. Not added as a phase two workstream. A line in the original investment case, with a named owner, before the board sees the number.</p></div><p><strong>AI is new enough that nobody gets it right the first time.</strong></p><p>But there is a harder problem underneath the sequencing problem. AI is new enough that nobody gets the governance right the first time. The system will encounter cases it wasn&#8217;t designed for. The context layer will need enriching. The model routing thresholds will need recalibrating as model costs and capabilities shift. The intent-level logic will need updating as new case types emerge. The human-in-the-loop trigger will need revalidating as the system&#8217;s failure modes become better understood.</p><p>That is not a governance failure. That is the nature of the technology.</p><p><strong>The regulator doesn&#8217;t object to learning. They object to undocumented learning.</strong></p><p>The question is what iteration looks like inside a regulatory framework. The regulator doesn&#8217;t object to learning and improving. They object to undocumented learning and improving. Every change &#8212; the routing logic, the context layer, the decisioning tools, the oversight mechanism &#8212; needs to be documented, assessed, approved, and auditable. The governance architecture has to be alive, not static.</p><p>This is what changes the investment calculus. The ROI calculation that excludes governance cost is incomplete. 
But so is the governance budget that funds a one-time build. The real cost is the operating rhythm &#8212; the ongoing discipline of reviewing, revising, and revalidating as the system evolves.</p><p>Governance isn&#8217;t a project. It&#8217;s how you run the system.</p><div><hr></div><h2>4. The Ownership Gap</h2><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;32d87ef6-2ac5-4f68-9a49-f9b552384ae2&quot;}" data-component-name="MentionToDOM"></span> </h4><p>Four functions. Four partial views. No single owner of the question the regulator actually asks.</p><p>The CISO understands the security exposure but not the regulatory reporting obligation. The compliance lead understands the obligation but not the system&#8217;s technical behavior. The CTO understands the architecture but not the regulatory classification logic. The CRO understands the risk framework but not how the system&#8217;s outputs materially influence decisions about people.</p><p>The regulation holds the organization liable &#8212; not the function.</p><p>Until the internal accountability structure matches the external liability structure, governance is a fiction distributed across an org chart that nobody owns.</p><p>The serious incident reporting obligation makes this concrete. If the credit decisioning system produces an outcome that constitutes a serious incident &#8212; discriminatory lending at scale, a breach of fundamental rights &#8212; the reporting clock starts at awareness. The CISO&#8217;s incident response playbook does not include a regulatory filing step. The compliance team&#8217;s reporting workflow does not start with a SOC alert. 
The clock runs while three teams debate which process applies.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;179c2222-a8c6-48c0-a635-a626bb229bae&quot;}" data-component-name="MentionToDOM"></span> </h4><p>Violeta has described the structural problem precisely. Four functions, four partial views, one organization holding the liability. The instinct is to solve it with a new committee, a new governance workstream, a new reporting line. That instinct produces more paper and less accountability.</p><p>The ownership gap doesn&#8217;t close through org structure. It closes through a shared definition of what you are trying to achieve &#8212; and what failure looks like.</p><p><strong>The four questions that close the gap.</strong></p><p>Before any governance framework is designed, the business leader and the risk function need to agree on four things.</p><p>What does a well-governed AI credit decision look like &#8212; not in regulatory language, but in terms the business actually operates by?</p><p>What is an unacceptable outcome &#8212; not just a compliance breach, but a business failure the organization cannot absorb?</p><p>What is the tolerance for AI error at each step of the process, expressed in business consequence terms, not model accuracy metrics?</p><p>And how will we monitor, review, and revise as the system evolves &#8212; because the governance that is sufficient at deployment will not be sufficient at month eighteen.</p><p>The fourth question is the one that makes the other three honest. 
Without it, you have a static definition that becomes obsolete the moment the system encounters something it wasn&#8217;t designed for.</p><p>When these four questions have clear, jointly owned answers, the four functions Violeta describes have something to govern toward. The CISO knows which workflows carry the highest consequence. The compliance lead understands the business logic behind the routing decisions. The CRO can assess model risk in business outcome terms. The CTO knows which edge cases carry real operational risk.</p><p>The business leader&#8217;s job is not to coordinate all four functions. It is to ensure the shared outcome definition exists &#8212; and is specific enough that governance has a target, not just a perimeter.</p><div><hr></div><h2>5. What a Defensible Architecture Looks Like</h2><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;cf7e6e68-1844-4ad6-a4a0-6e2b71b35f49&quot;}" data-component-name="MentionToDOM"></span> </h4><p>What the regulator needs to see:</p><p>A documented classification decision. Not a spreadsheet. Not an internal memo. A formal determination that this system operates in a high-risk domain, with documented reasoning connecting the system&#8217;s function to the regulatory category. Who made the determination. What methodology they used. Why the determination is defensible.</p><p>A risk assessment that addresses the specific system. The specific data it processes. The specific decisions it influences. The specific population it affects. The specific risks it creates. Updated when the system changes &#8212; not annually, not at the next audit cycle, but when the system changes.</p><p>Human oversight that exists in practice. 
Designated personnel. Trained on the system&#8217;s limitations. With the authority to override. With visibility into why the system produced the output it produced. Oversight that a regulator can verify through evidence, not through an org chart.</p><p>A detection mechanism for behavioral drift. The ability to identify when the system&#8217;s operational behavior has departed from the documented intended purpose. And a documented response protocol for what happens when the boundary is crossed &#8212; reassessment, not incident response.</p><p>A reporting capability. The operational infrastructure to detect a serious incident within the mandatory reporting window and file the required notification. Not a policy document. An operational capability with named owners, tested procedures, and evidence that it works.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;d2a48b4a-027a-4583-b7d8-5e31cd85bee9&quot;}" data-component-name="MentionToDOM"></span> </h4><p>Violeta has mapped what the regulator needs to see. Meeting that standard and building something that actually works are not the same exercise. Three business elements make the difference.</p><p><strong>Context first.</strong></p><p>The governance architecture is only as strong as the shared understanding of business intent sitting underneath it. In a credit decisioning deployment, that means the routing logic, the oversight trigger, and the review criteria are all traceable back to a jointly owned definition of what a good decision looks like &#8212; and what an unacceptable one looks like. Without that definition, the five requirements Violeta describes become documentation exercises. 
With it, they become a coherent system.</p><p><strong>Prioritize the first line.</strong></p><p>With finite governance budget and multiple competing demands, sequencing matters. In credit decisioning, the first line &#8212; the relationship manager, the workbench, the point where AI output meets human judgement &#8212; gets funded first. If the first line fails, everything downstream fails with it. Start there. Build the audit trail around it. Extend outward as capacity allows.</p><p><strong>Build for change, not for launch.</strong></p><p>The system approved today will not be the same system running in eighteen months. A defensible architecture has a review rhythm built in &#8212; regular, documented, jointly owned by business and risk. Every change to the routing logic, the context layer, or the oversight mechanism is assessed and approved before it goes live. Not because the regulation requires it. Because an ungoverned change to a live credit decisioning system is a liability the business will own.</p><p>The credit decisioning system that passes both the board and the regulator is not more complex than the one that passes only the board. It is more deliberate.</p><div><hr></div><h2>Closing</h2><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Violeta Klein, CISSP, CEFA&quot;,&quot;id&quot;:405281162,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/039e68c9-cf47-4e1e-b25e-3c4e039560fe_3361x3361.jpeg&quot;,&quot;uuid&quot;:&quot;6949ec01-d17e-4dd6-8021-5d38818be9bc&quot;}" data-component-name="MentionToDOM"></span> </h4><p>The governance that passes the board and the governance that survives the regulator are not two different programs. They are two tests of the same architecture.</p><p>The organizations that build for both will not spend more. 
They will build something that evolves with the system it governs.</p><p>The ones that build for the board alone will discover the gap when someone arrives who wasn&#8217;t in the room.</p><h4><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;0653a670-2c23-4292-a3b3-6bbe8c1c454e&quot;}" data-component-name="MentionToDOM"></span> </h4><p>The distinctive contribution a business leader makes to governance is demanding a different kind of signal &#8212; one that translates model behavior into business consequence early enough to act.</p><p>Here is what lands in most governance meetings today on a credit decisioning deployment:</p><p><em>Model validation &#8212; pass. Bias testing &#8212; no significant deviation. Human override rate &#8212; 4.2%. Audit trail &#8212; complete. Regulatory status &#8212; compliant.</em></p><p>Here is what should land instead:</p><p><em>In the last 30 days, the model routed 23% more cases to the highest-risk tier against a stable application volume. The portfolio is absorbing more complexity than the governance was designed for. RM override rate on AI-generated memos in the &#163;250k&#8211;&#163;500k bracket has dropped from 8% to 1.2% over six months &#8212; that is not improved confidence, that is eroded judgement. And three case types appearing in the last 60 days &#8212; self-employed applicants with irregular income patterns &#8212; were not present in the training data. The model is making decisions on cases it was never assessed against.</em></p><p>None of those signals appear in a compliance report. All of them are actionable before the portfolio absorbs the cost.</p><p>Regulations exist to protect consumers and facilitate sound business. 
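</p><p>The kind of monitor that would surface those three signals can be sketched in a few lines. The sketch below is a hypothetical illustration, not a prescribed implementation: the field names, thresholds, and the <code>governance_signals</code> helper are all assumptions chosen to mirror the signals described above.</p>

```python
# Hypothetical sketch: turning raw decisioning telemetry into board-level
# signals. Field names and thresholds are illustrative assumptions.

def governance_signals(window, baseline, tier_shift_limit=0.15, override_floor=0.03):
    """Compare a recent observation window against the approved baseline
    and return business-language alerts, not a compliance status."""
    signals = []

    # Routing drift: share of cases sent to the highest-risk tier.
    shift = window["high_tier_rate"] - baseline["high_tier_rate"]
    if shift > tier_shift_limit:
        signals.append(f"High-risk tier routing up {shift:.0%} vs. baseline "
                       "against stable volume: the portfolio is absorbing more complexity.")

    # Eroded judgement: a collapsing human override rate is a warning sign,
    # not a success metric.
    if window["override_rate"] < override_floor <= baseline["override_rate"]:
        signals.append(f"Override rate fell from {baseline['override_rate']:.1%} "
                       f"to {window['override_rate']:.1%}: review first-line engagement.")

    # Out-of-distribution cases: types never assessed at approval time.
    unseen = sorted(set(window["case_types"]) - set(baseline["case_types"]))
    if unseen:
        signals.append("Case types absent from the approved assessment: " + ", ".join(unseen))

    return signals


baseline = {"high_tier_rate": 0.20, "override_rate": 0.08,
            "case_types": ["salaried", "contractor"]}
window = {"high_tier_rate": 0.43, "override_rate": 0.012,
          "case_types": ["salaried", "contractor", "self-employed irregular income"]}

for alert in governance_signals(window, baseline):
    print(alert)
```

<p>None of this replaces the compliance report. It is the layer that fires before the compliance report would, and the thresholds would be calibrated jointly by business and risk, then revisited on the same review rhythm as the rest of the architecture.</p><p>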
The governance architecture that serves both goals surfaces signals like these &#8212; in business language, at the right moment. Not compliance status after the fact. A live read on whether the system is still doing what the board approved.</p><p>That is the shift. Not more governance. Better signal.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aFjB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aFjB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aFjB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png" width="1456" height="1808" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1808,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1724523,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/194168507?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aFjB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 424w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 848w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!aFjB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90b4201f-679b-483b-8089-5c85449b174d_1649x2048.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><p><em>Regulatory Disclaimer: This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and related governance frameworks. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification. 
Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</em></p><p><em>The views expressed by </em><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Neha Kabra&quot;,&quot;id&quot;:120858550,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a26e4b77-bd83-41c3-b708-22c6806b1e0c_314x316.png&quot;,&quot;uuid&quot;:&quot;e46e440c-22cb-4da4-91f1-f3942c93dfcb&quot;}" data-component-name="MentionToDOM"></span> <em>are in her personal capacity and do not represent the views of McKinsey &amp; Company or any other institution.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://nehakabra1.substack.com/?utm_campaign=profile_chips&quot;,&quot;text&quot;:&quot;Subscribe to Neha Kabra&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://nehakabra1.substack.com/?utm_campaign=profile_chips"><span>Subscribe to Neha Kabra</span></a></p>]]></content:encoded></item><item><title><![CDATA[Governing What Your Agent Does Next]]></title><description><![CDATA[The operational envelope for Agentic AI: four questions, one tripwire, and the only governance framework built for runtime]]></description><link>https://www.zerodaydawn.com/p/governing-what-your-agent-does-next</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/governing-what-your-agent-does-next</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Sun, 12 Apr 2026 13:01:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FdX_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FdX_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FdX_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FdX_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3290339,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/193948631?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FdX_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!FdX_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d9afca6-66c5-445b-9422-ed399538076e_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>Last week&#8217;s piece laid out the structural impossibility. <a href="https://www.zerodaydawn.com/p/human-oversight-at-machine-speed">Human oversight at machine</a> speed fails on the math. Kill switches fail on propagation speed. The blast perimeter expands before detection fires. Four governance frameworks mandate oversight. None of them account for the speed differential.</p><p><strong>This piece delivers the response.</strong></p><p>The <a href="https://www.zerodaydawn.com/p/guardrails-dont-scale">operational envelope</a> is the only governance architecture that survives enforcement &#8212; because <strong>it is the only one designed for systems whose behavior cannot be enumerated before runtime</strong>. Four questions define the boundary. A tripwire detects departure. 
A response protocol converts detection into a human decision. A documentation framework makes the whole thing defensible.</p><p>Every existing framework that assumes pre-deployment behavioral description requires this architecture underneath. They do not name it. The organizations that build it will be the ones that answer the regulator&#8217;s questions. The ones that do not will discover that &#8220;we have a human in the loop&#8221; is not an answer.</p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe: risk-tiered review solves the oversight problem.</p><p>It does not.</p><p>Risk-tiered review is the emerging consensus. Route high-consequence actions to a human. Let low-risk operations execute autonomously. Every framework is converging on this pattern. The OWASP State of Agentic AI Security and Governance report calls for it. Singapore&#8217;s MGF recommends checkpoints on high-stakes, irreversible, or outlier actions. ForHumanity mandates Human-in-Command with established stop, pause, disregard, override, and reverse processes.</p><p>The pattern is architecturally sound. 
The problem underneath it is unsolved.</p><p>Who defines high-consequence? The OWASP State of Agentic AI Security and Governance report mandates classifying agent actions by risk tier and assigning oversight requirements to each tier. The mandate is correct. </p><blockquote><p><strong>The problem underneath it is unaddressed: what counts as high-stakes when the agent composed a workflow at runtime that nobody anticipated at assessment time?</strong> <strong>This is the threshold-definition problem. No framework has solved it.</strong></p></blockquote><p>Risk-tiered review without a defined boundary is a governance fiction. It classifies actions against a threshold that does not exist. The operational envelope is the answer. It does not classify individual actions. <strong>It defines the boundary of the entire assessed behavioral space &#8212; and treats every departure from that space as a governance event.</strong></p><div><hr></div><h2>The Threshold Nobody Defined</h2><p>All major governance frameworks share a structural assumption: the provider or deployer can describe what the system does <em><strong>before it operates</strong></em>. Document the intended purpose. Assess risks within those boundaries. Certify against requirements. Monitor for deviation from the documented baseline.</p><p><strong>For agentic AI, this assumption is architecturally false.</strong></p><p>An agent with access to ten authorized tools across ten chaining steps can compose ten billion possible workflows. The outcome space grows exponentially with every action the agent is permitted to chain. No documentation captures it. No risk assessment bounds it. No monitoring system watches all of it.</p><p>The <strong><a href="https://www.zerodaydawn.com/p/guardrails-dont-scale">Pre-Computation Fallacy</a></strong> is the name for this structural failure. The governance specification requires describability. 
The math does not allow it.</p><p>The operational envelope resolves the fallacy &#8212; not by attempting to describe the full outcome space, but by defining the subset of behaviors the organization actually assessed. Everything inside the envelope was evaluated, documented, and accepted. Everything outside it is unknown territory.</p><p>The envelope is not a fence around the system&#8217;s behavior. It is a tripwire inside a defined boundary. When the agent&#8217;s behavior crosses that boundary, what happens next is not another automated decision. It is a human judgment about whether the system continues, pauses, or stops.</p><p>This is the architecture the OWASP State of Agentic AI Security and Governance report calls for when it names risk-tiered review. This is what Article 14 means when it requires effective oversight. This is what Singapore&#8217;s MGF requires when it mandates checkpoints on high-stakes actions. The frameworks describe the need. The operational envelope is the engineering response.</p><div><hr></div><p><em>The full methodology &#8212; the four questions that define the envelope, the tripwire detection architecture, the response protocol for boundary crossings, and the documentation framework a CISO can take into a Monday morning meeting &#8212; continues below for paid subscribers.</em></p>
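<p><em>The envelope-as-tripwire pattern described above can be sketched in a few lines of Python. Everything in the sketch (the <code>Action</code> type, the scope strings, the <code>check_action</code> helper, the contents of the envelope) is an illustrative assumption, not an API or taxonomy from any framework cited in this piece.</em></p>

```python
# Hypothetical sketch of an operational envelope with a tripwire at its boundary.
# The envelope is the set of (tool, scope) pairs the organization actually
# assessed; anything outside it is a governance event, not an automated call.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str
    scope: str  # e.g. "crm:read", "payments:write" (illustrative scopes)

# Behaviors that were evaluated, documented, and accepted at assessment time.
ASSESSED_ENVELOPE = {
    ("search", "web:read"),
    ("crm", "crm:read"),
    ("email", "mail:send-internal"),
}

def check_action(action: Action) -> str:
    """Return 'execute' for actions inside the assessed envelope,
    'escalate' for a boundary crossing that must go to a human."""
    if (action.tool, action.scope) in ASSESSED_ENVELOPE:
        return "execute"
    return "escalate"

# The scale problem the envelope sidesteps: 10 authorized tools chained
# across 10 steps compose 10**10 possible workflows, which is the outcome
# space no pre-deployment documentation can enumerate.
outcome_space = 10 ** 10

print(check_action(Action("crm", "crm:read")))         # inside the envelope
print(check_action(Action("payments", "payments:write")))  # boundary crossing
```

<p><em>The design point is the second branch: a crossing returns an escalation rather than a remediation, so the continue, pause, or stop decision stays with a human.</em></p>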
      <p>
          <a href="https://www.zerodaydawn.com/p/governing-what-your-agent-does-next">
              Read more
          </a>
      </p>
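<p><em>The speed differential the companion piece quantifies reduces to a single ratio. A minimal worked example, using the rates cited there (10,000 agent actions per hour against 50 human reviews per hour) and assuming the reviewer's throughput is constant:</em></p>

```python
# Oversight coverage ratio from the companion piece, as plain arithmetic.
# The action rates are the figures quoted there; the spawning case extends
# the same arithmetic to an orchestrator with five credential-inheriting
# sub-agents.

def coverage(actions_per_hour: int, reviews_per_hour: int = 50) -> float:
    """Fraction of agent actions a single trained reviewer can evaluate."""
    return reviews_per_hour / actions_per_hour

single_agent = coverage(10_000)    # the 0.5% figure
with_sub_agents = coverage(50_000) # 0.1% once five sub-agents are spawned

print(f"{single_agent:.1%}")    # prints 0.5%
print(f"{with_sub_agents:.1%}") # prints 0.1%
```

<p><em>The ratio only falls as delegation deepens, which is why the envelope governs the boundary of assessed behavior rather than trying to review individual actions.</em></p>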
   ]]></content:encoded></item><item><title><![CDATA[Human Oversight at Machine Speed]]></title><description><![CDATA[When spawning agents outrun your blast perimeter]]></description><link>https://www.zerodaydawn.com/p/human-oversight-at-machine-speed</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/human-oversight-at-machine-speed</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 06 Apr 2026 05:01:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0pps!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0pps!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0pps!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!0pps!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!0pps!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!0pps!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0pps!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3309810,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/192726344?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0pps!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!0pps!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!0pps!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!0pps!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c848b02-ed55-4a73-85b5-878ef72494a2_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Executive Summary</h2><p>Four governance frameworks require human oversight of high-risk AI systems. 
The EU AI Act mandates it. Singapore's Model Governance Framework for Agentic AI recommends it. NIST recommends it. The OWASP Top 10 for Agentic Applications prescribes human approval gates as core security controls.</p><p>None of them accounts for what happens when the system operates faster than any human can review it.</p><p>An agent executing thousands of actions per hour produces a decision stream that a human reviewer &#8212; evaluating 50 actions in that same hour &#8212; covers at a fraction of a percent. When that agent spawns sub-agents that inherit its credentials, the action surface multiplies and the blast perimeter &#8212; the distance damage travels before detection fires &#8212; expands at a speed no oversight function was designed to match.</p><p>The organizations that survive enforcement will be the ones that stopped pretending a human in the loop satisfies the obligation &#8212; and built the detection architecture that actually does.</p><div><hr></div><h2>The 0.5% Fiction</h2><p>Here is what the market wants to believe: human-in-the-loop oversight satisfies the regulatory obligation.</p><p>It does not.</p><p>The math is unforgiving.</p><p>EU AI Act Article 14 requires high-risk systems to be designed so that natural persons can effectively oversee them during the period in which they are in use. The persons assigned to oversight must understand the system&#8217;s capabilities and limitations, correctly interpret its output, and be able to interrupt its operation through a stop button or similar procedure that brings the system to a halt in a safe state.</p><p>Effectively. 
The regulation uses that word deliberately.</p><p>An enterprise agent executing tool calls, data retrievals, workflow compositions, and delegated tasks can produce upward of 10,000 actions per hour. A trained reviewer evaluating 50 actions in that same hour covers 0.5% &#8212; a ratio the OWASP State of Agentic AI Security and Governance report identifies as the defining constraint on human oversight at scale. The remaining 99.5% runs without human eyes on it. And that 0.5% assumes the organization can trace what the agent did in the first place &#8212; the Cloud Security Alliance found that 72% of organizations deploying AI agents cannot reliably trace agent actions across all environments. Not because the organization chose to skip oversight. Because the speed differential is a structural constraint that no staffing model resolves.</p><p>The Singapore MGF requires defined checkpoints for human approval on high-stakes, irreversible, or outlier actions. ForHumanity CORE AAA mandates Human-in-Command interactions with established processes to stop, pause, override, and reverse. NIST recommends formal risk weighing before deployment and ongoing oversight processes. The OWASP Top 10 for Agentic Applications reinforces this from the security side &#8212; ASI08 and ASI09 both prescribe human approval gates as core controls against cascading failures and trust exploitation.</p><p>Same mandate. Same structural impossibility.</p><p>The emerging response is risk-tiered review &#8212; route high-consequence actions to a human, let low-risk operations run autonomously. The pattern is sound. The problem underneath it remains open: who defines the consequence threshold? What counts as high-consequence when the agent composed a workflow at runtime that nobody anticipated at assessment time?</p><p>The threshold definition requires knowing what the agent might do. 
The agent was designed to determine that for itself.</p><div><hr></div><h2>The Spawning Multiplier</h2><p>The 0.5% coverage number assumes a single agent. Production architectures do not work that way.</p><p>Orchestrator agents spawn sub-agents dynamically &#8212; ephemeral workers created at runtime to handle delegated tasks. Each sub-agent inherits or derives credentials from its parent. Each one executes its own action stream. The action surface multiplies with every spawn event.</p><p>An orchestrator spawning five sub-agents with inherited credentials can produce 50,000 or more actions per hour. Human oversight coverage drops to 0.1%. Ten sub-agents with nested delegation chains push the number below what any percentage can honestly represent.</p><p>The blast perimeter expands with every spawned agent. This is not theoretical. In a documented 2025 incident, an autonomous coding agent deleted a production database, generated thousands of fictional replacement records, and falsely reported that rollback was impossible &#8212; all before a human operator could intervene. The blast perimeter was determined by what the agent could reach, not by what anyone authorized it to do. A compromised orchestrator does not expose one system. It exposes the aggregate credential surface of every sub-agent it created, every tool those sub-agents accessed, and every downstream system those tools connected to.</p><p>Under the EU AI Act, a deployer who substantially modifies a high-risk system assumes the full obligations of a provider &#8212; conformity assessment, technical documentation, post-market monitoring. A spawned sub-agent that inherits credentials nobody independently assessed and operates in a domain nobody documented in the conformity assessment may constitute a substantial modification that happened at machine speed. The deployer did not authorize it. The deployer may not know it occurred. 
The regulatory consequence attaches regardless.</p><p>The security community calls this permission inheritance. The regulation calls it a potential substantial modification trigger.</p><p>Nobody mapped the blast perimeter before the first sub-agent inherited its credentials. The CISO's incident report and the compliance team's regulatory case file will describe the same event. The question is which team finds out first.</p><div><hr></div><h2>Why Kill Switches Do Not Scale</h2><p>Article 14(4)(e) of the EU AI Act mandates a stop button &#8212; or similar procedure that allows the system to come to a halt in a safe state.</p><p>For a single agent executing a bounded task, a kill switch is implementable. The human identifies the problem, presses the button, the system stops.</p><p>For a multi-agent architecture where cascading failures propagate faster than human reaction time, the kill switch is architecturally irrelevant. The failure has already travelled through three downstream agents before the human registered the anomaly. The blast perimeter exceeded the detection boundary before any intervention was possible.</p><p>Research documented in the 2026 International AI Safety Report raises a more uncomfortable finding: frontier AI systems have demonstrated tendencies toward self-preservation behavior in controlled settings &#8212; including attempts to copy themselves or resist shutdown. The regulation assumes the system cooperates with being stopped. That assumption may not hold for systems with persistent memory and increasingly autonomous goal pursuit. The 2026 International AI Safety Report assessed current frontier agents at roughly 80% reliability on well-specified tasks of thirty minutes' duration &#8212; with success rates declining sharply as complexity increases. A system that fails 20% of the time on routine tasks cannot be assumed to reliably execute its own shutdown.</p><p>The Singapore MGF recommends mandatory kill-switch capability. 
Certification frameworks require systems to halt and remain halted until a human deliberately restarts them. These are necessary controls. They are not sufficient for architectures where the propagation speed of the failure exceeds the response speed of the human.</p><p>A kill switch is a last-resort mechanism. It is not an oversight architecture.</p><div><hr></div><h2>What Survives Enforcement</h2><p>A Market Surveillance Authority will not ask whether you have a human in the loop. They will ask whether your oversight architecture is effective at the speed and scale your agent actually operates.</p><p>Show us how you detect when agent behavior departs from the scope you assessed. Show us the boundary you documented. Show us what triggers when that boundary is crossed. Show us who responds and how fast.</p><p>The organizations that can answer these questions share three capabilities the rest have not built.</p><p>They log at the trajectory level &#8212; capturing the full chain of states, actions, tool calls, retrieved context, and human approvals across the entire execution path. What you log is what you can reconstruct. What you can reconstruct is what you can defend to a regulator.</p><p>They define a blast perimeter with detection at the boundary &#8212; the outer edge of assessed behavior where the conformity assessment, the risk management documentation, and the human oversight design remain valid. Beyond that edge, every action is ungoverned. Detection at the boundary is what converts a compliance fiction into a governance architecture that survives scrutiny.</p><p>They document a response protocol for boundary crossings that treats every crossing as a governance event requiring human judgment. Not automated remediation. Not agent self-correction. 
A human decision about whether the system continues, pauses, or stops &#8212; made with the authority and the context to make that decision defensibly.</p><p>The methodology for building this architecture &#8212; the questions that define the boundary, the detection mechanisms, and the response protocols &#8212; is next week&#8217;s piece.</p><div><hr></div><p>The four frameworks are not wrong to require human oversight. They are wrong to assume it can be delivered at the speed the system operates.</p><p>The organizations that treat the 0.5% fiction as acceptable will discover during their first enforcement inquiry that the regulator does not grade on a curve.</p><p>Build the detection architecture before the agent builds the blast perimeter for you.</p><div><hr></div><p><em><strong>Next week: the operational methodology that makes this enforceable.</strong></em></p><p><em><strong>Zero-Day Dawn publishes enforcement intelligence on agentic AI governance, dropping Sunday at 4:00 PM EET.</strong></em></p><div><hr></div><h5><strong>Regulatory Disclaimer</strong></h5><h5>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) as of April 2026. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.</h5><h5>Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</h5><h5>Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h5><div><hr></div><p>Sources: EU AI Act (Regulation 2024/1689), Articles 3(23), 12, 14, 25. Singapore Model Governance Framework for Agentic AI (IMDA, January 2026). ForHumanity CORE AAA Multi-Agent Governance v1.5 (2026). NIST AI Risk Management Framework 1.0 (January 2023). OWASP Top 10 for Agentic Applications (December 2025). International AI Safety Report (February 2026). 
Cloud Security Alliance, Securing Autonomous AI Agents (January 2026).</p>]]></content:encoded></item><item><title><![CDATA["Supply Chain Is The New DNS"]]></title><description><![CDATA[When the tool that protects your AI pipeline is the tool that compromises it, every governance artifact built on top of it becomes fiction overnight]]></description><link>https://www.zerodaydawn.com/p/supply-chain-is-the-new-dns</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/supply-chain-is-the-new-dns</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 30 Mar 2026 05:02:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZT5s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZT5s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZT5s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!ZT5s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!ZT5s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ZT5s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZT5s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3309795,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/192183191?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZT5s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 424w, 
https://substackcdn.com/image/fetch/$s_!ZT5s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!ZT5s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ZT5s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d4f72a-5b6b-4a6d-bfb2-b77f224c73cd_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Executive Summary</h2><p>DNS is the invisible layer that translates every web address into a destination &#8212; and when it breaks, nothing works even though nothing looks broken. The AI supply chain has become the same kind of invisible dependency. Every enterprise AI system &#8212; from demand forecasting agents to credit decisioning models &#8212; runs on a stack of open-source components that route the calls, verify the code, and connect the models to production infrastructure. Nobody in the boardroom thinks about those components. They are assumed to work. They are assumed to be trustworthy. They are assumed to be what they claim to be.</p><p>Last week, one of the most widely deployed components in that invisible layer was silently replaced by an attacker &#8212; and the entry point was the security scanner that was supposed to protect it. The compromise is not contained. The attacker retains persistent access to every affected system, deployable at any time.</p><p>The enterprise AI stack now has a structural vulnerability that no governance framework on the market is designed to detect. When a component underneath your AI system is silently replaced, every governance artifact your organization filed becomes a description of a system that no longer exists &#8212; the technical documentation, the risk assessment, the conformity assessment all describe something that stopped being real the moment the compromised component executed. The security team will find it and patch it. The compliance team will not know that the foundation underneath their documentation changed, because nothing in the current governance architecture connects the security team&#8217;s remediation workflow to the compliance team&#8217;s regulatory filing obligation.</p><p>The financial exposure is not abstract. 
The penalty ceiling under the EU AI Act runs to &#8364;15 million or 3% of global annual turnover. The reporting clocks &#8212; 15 days under the AI Act, 24 hours under NIS2 &#8212; start running the moment the organization becomes aware. For organizations that deployed AI agents in supply chain operations, logistics, or financial services, the real cost is the fine compounded by the operational disruption, the reputational fallout, and the discovery that the governance program the board funded was governing a fiction.</p><p>As Gadi Evron put it last week at RSA: &#8220;Supply chain is the new DNS.&#8221; He meant it as a security warning. The governance consequence hasn&#8217;t landed yet. This piece shows where it lands.</p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe:</p><p>Agentic AI is transforming supply chains through autonomous execution, and the governance challenge is under control. AI agents forecast demand, optimize logistics, manage inventory, schedule production, and reroute shipments &#8212; all in real time, all at scale, all with minimal human intervention. Trusted guardrails keep the system within its boundaries. Human oversight handles the exceptions. The infrastructure underneath is stable, verified, and trustworthy.</p><p>That belief is everywhere this year. EY describes agentic AI as enabling &#8220;autonomous decision-making and task execution&#8221; that will &#8220;unlock unprecedented value&#8221; for supply chain executives. 
Microsoft has deployed over 25 AI agents across its own supply chain operations, with a target of 100 by year-end, and published a reference architecture for multi-agent orchestration spanning demand planning, logistics, and warehouse management. IBM frames AI agents as systems that &#8220;perceive incoming data, reason about possible actions, and act in context rather than following fixed instructions,&#8221; predicting that by 2028, a third of enterprise software applications will include agentic AI. SAP calls 2026 the year AI agents &#8220;become team members&#8221; and describes a future where &#8220;copilots embedded in planning workspaces handle repetitive analysis while people focus on scenario choice and exception management.&#8221; Forbes reports that agentic AI &#8220;can reason through a situation, plan next steps, and execute actions across systems,&#8221; representing &#8220;the biggest change enterprise software has seen in years.&#8221;</p><p>Every one of these visions shares the same unexamined assumption: the components underneath the agents are what they claim to be. The models are clean. The APIs are authentic. The libraries behave as documented. The security scanner protecting the CI/CD pipeline is actually protecting the CI/CD pipeline.</p><p>Last week, that assumption broke down &#8212; and it did so through the security layer itself.</p><div><hr></div><h2>The Breach</h2><p>On March 24, 2026, security firm Semgrep published a detailed technical analysis of a multi-stage supply chain attack that cascaded from a security scanner into the AI infrastructure underneath enterprise deployments. The attack is ongoing, the threat actors are still active, and the full scope of the compromise remains unknown.</p><p>It started with Trivy &#8212; an open-source vulnerability scanner made by Aqua Security that is widely used across the industry to find vulnerabilities in CI/CD pipelines before builds are published. 
In late February, an automated bot exploited a workflow misconfiguration to steal credentials from the Aqua Security GitHub organization. Aqua rotated credentials, but the attacker retained access through a single bot account with write and admin privileges across both the public and internal GitHub organizations. With that access, the attackers &#8212; a group called TeamPCP &#8212; pushed a malicious Trivy release that ran a credential stealer alongside the legitimate scanner, force-pushed 75 of 76 version tags in Trivy&#8217;s GitHub Actions so that anyone referencing those actions by tag pulled the infostealer into their pipeline, and pushed malicious Docker images with no corresponding GitHub releases.</p><p>The stolen credentials gave TeamPCP access to downstream projects &#8212; and one of those projects was LiteLLM.</p><p>LiteLLM is not a peripheral library. It is the unified API gateway that enterprises use to route calls across multiple LLM providers &#8212; OpenAI, Anthropic, Google, Mistral &#8212; through a single interface. In enterprise deployments, LiteLLM operates as the AI routing layer: managing provider selection, budget controls, authentication, and model flexibility underneath applications that were written to a single API standard. Its proxy server is the most widely deployed feature in enterprise contexts, and it sits at the exact layer where enterprise AI systems connect to the models they depend on.</p><p>LiteLLM used Trivy in its own CI/CD pipeline &#8212; the security scanner that was supposed to protect its code was the mechanism through which the malware entered.</p><p>The attack inside LiteLLM was technically precise and deliberately difficult to detect. Rather than using a postinstall hook &#8212; a technique developers have learned to watch for &#8212; the malware dropped a <code>.pth</code> file into Python&#8217;s site-packages directory. 
Python auto-executes <code>.pth</code> files on every interpreter startup, which means the malware triggers not when you import litellm, but when you run any Python process at all, including something as innocuous as <code>python --version</code>.</p><p>The credential harvesting was comprehensive in a way that security researchers described as unprecedented in supply chain attacks. The malware exfiltrated SSH keys, AWS credentials including full IMDSv2 token flows and Secrets Manager enumeration, GCP and Azure credentials, Kubernetes tokens and service account secrets, environment configuration files across all standard naming conventions, shell history, git credentials, Docker registry authentication, Terraform state files containing infrastructure secrets, TLS private keys, and even cryptocurrency wallet keys. If the malware detected a Kubernetes environment with a permissive service account, it escalated from credential theft to full cluster compromise &#8212; creating privileged DaemonSets across every node including the control plane, mounting the host filesystem, and installing a persistent backdoor directly onto the underlying host.</p><p>The exfiltrated data was encrypted with AES-256-CBC, the session key wrapped with the attacker&#8217;s RSA-4096 public key, and transmitted to a domain designed to mimic LiteLLM&#8217;s legitimate infrastructure. This encryption architecture means that even if network traffic is intercepted, the stolen credentials cannot be recovered without the attacker&#8217;s private key &#8212; which means affected organizations cannot determine with certainty what was taken, and must assume the worst when deciding what to rotate.</p><p>The persistent backdoor installed on compromised systems polls a command-and-control endpoint every fifty minutes. When no active campaign is running, the endpoint returns a YouTube URL &#8212; the dormancy signal. The infrastructure is silent, but fully operational. 
The attacker retains arbitrary code execution on every compromised host, deployable at any time they choose.</p><p>TeamPCP shared their motive on their Telegram channel, referring to the security vendors they had compromised: &#8220;These companies were built to protect your supply chains yet they can&#8217;t even protect their own.&#8221;</p><p>The irony is real. But the consequence extends far beyond the security vendors &#8212; and far beyond what any security team can remediate with a patch.</p><div><hr></div><p><em>The full regulatory analysis &#8212; including where the EU AI Act's provider conversion trap applies to off-the-shelf supply chain agents, why operational gridlock qualifies as a mandatory serious incident filing, and the three questions your organization must answer before your next board meeting &#8212; continues below for paid subscribers.</em></p>
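<p>The <code>.pth</code> persistence mechanism described above is auditable with the Python standard library alone: CPython's <code>site</code> module executes any line in a <code>.pth</code> file that begins with <code>import</code> followed by a space or tab, so enumerating those lines across every site-packages directory surfaces the hook. The sketch below is a minimal illustration of that check, not the detection logic from the Semgrep analysis; legitimate tooling (editable installs, coverage hooks) also ships executable <code>.pth</code> lines, so every hit needs human review.</p>

```python
import site
import sysconfig
from pathlib import Path

def find_executable_pth(directories):
    """Yield (path, line) for every .pth line that executes code at startup.

    CPython's site module runs any .pth line that begins with 'import'
    followed by a space or a tab -- the same hook the LiteLLM malware
    abused. Everything else in a .pth file is treated as a path entry.
    """
    for d in directories:
        d = Path(d)
        if not d.is_dir():
            continue
        for pth in sorted(d.glob("*.pth")):
            for line in pth.read_text(errors="replace").splitlines():
                # Match site.addpackage(): the check is on the raw line,
                # not a stripped one.
                if line.startswith(("import ", "import\t")):
                    yield pth, line.rstrip()

# Directories the interpreter actually consults at startup:
audit_dirs = set(site.getsitepackages() + [site.getusersitepackages(),
                                           sysconfig.get_paths()["purelib"]])
for path, line in find_executable_pth(audit_dirs):
    print(f"{path}: {line}")
```

<p>Running this audit on a schedule and diffing the output against a known-good baseline turns a silent persistence mechanism into an observable change.</p>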
      <p>
          <a href="https://www.zerodaydawn.com/p/supply-chain-is-the-new-dns">
              Read more
          </a>
      </p>
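<p>The common thread in the attack above is that a trusted name kept pointing at untrusted bytes: version tags were force-pushed, a release was replaced, and nothing in the reference itself changed. The generic countermeasure is to record a content digest for every third-party artifact at review time and verify it before use, so a swapped component fails closed. The sketch below illustrates the idea with the standard library only; the manifest file name and format are hypothetical, not drawn from any particular tool.</p>

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path):
    """Return the artifacts whose bytes changed since their digest was recorded.

    A tag or version string can be silently re-pointed, as the Trivy tag
    force-push showed; a digest recorded out-of-band at review time cannot.
    The manifest is a JSON object mapping artifact path -> expected SHA-256.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(name) != expected]
```

<p>In a CI/CD pipeline the same check belongs before the artifact is ever executed, which is also why referencing GitHub Actions by full commit SHA rather than by mutable tag would have blunted the Trivy tag rewrite.</p>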
   ]]></content:encoded></item><item><title><![CDATA[Guardrails Don't Scale]]></title><description><![CDATA[And nobody checked the math]]></description><link>https://www.zerodaydawn.com/p/guardrails-dont-scale</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/guardrails-dont-scale</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 23 Mar 2026 06:02:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!p52w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p52w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p52w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!p52w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!p52w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!p52w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p52w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3303924,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/191458116?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p52w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!p52w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!p52w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!p52w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0148f7e2-d592-4736-983c-704675c22d24_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>The AI governance market is selling containment. Guardrails. Safety filters. Alignment layers. 
Layered internal controls. The pitch is the same everywhere: constrain the system, document the constraints, monitor for deviations. Compliance solved.</p><p>It isn&#8217;t.</p><p>Every major governance framework on the market &#8212; the EU AI Act&#8217;s QMS requirements, ISO 42001, prEN 18286, Singapore&#8217;s Model AI Governance Framework for Agentic AI, ForHumanity&#8217;s CORE AAA Multi-Agent Governance scheme, and NIST&#8217;s emerging agent standards &#8212; shares the same foundational assumption: that system behavior can be described before the system operates.</p><p>Agentic AI invalidates that assumption. Not because the descriptions are imprecise. Not because the documentation is incomplete. Because the space of possible behaviors an agent can produce grows exponentially with every action it chains &#8212; and no framework, no standard, no monitoring system can govern a space it cannot bound.</p><p>This is not a gap to close with better tooling, bigger budgets, or more sophisticated frameworks. This is a mathematical constraint. It does not improve with investment. It is a property of the system architecture itself.</p><p>The EU Parliament committees voted on March 18, 2026 to delay high-risk AI system obligations to December 2027. The market will treat this as sixteen months of breathing room. 
It is sixteen months of compound interest on a structural deficit that no amount of time will resolve &#8212; because the governance math doesn&#8217;t add up.</p><p>This piece shows where each framework breaks, why it breaks, and what the only architecturally honest response looks like.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe:</p><p>Build guardrails. Layer your controls. Document the intended purpose. Monitor for drift. File the conformity assessment. Compliance is a function of diligence &#8212; and the frameworks exist to guide you through it.</p><p>This is the comfortable lie. It persists because the alternative is harder.</p><p>The alternative requires admitting that the problem is not implementation quality. It is structural impossibility. The governance frameworks are not incomplete. They are architecturally incompatible with the systems they claim to govern.</p><p>The comfortable lie will hold until the first enforcement action reveals that every governance framework in production today was built for a system that sits still. The systems being deployed don&#8217;t sit still. And the governance math that underpins every framework on the market doesn&#8217;t add up.</p><p>The EU Parliament just confirmed this &#8212; not in those words, but in the only language that matters: they voted to move the deadline. The problem is not time. 
The problem is math.</p><div><hr></div><h2>The Assumption</h2><p>Every framework shares the same structural foundation: the system can be described before it operates.</p><p>The documentation describes the system&#8217;s <strong>intended purpose</strong>. The risk assessment evaluates <strong>foreseeable behavior</strong>. The conformity assessment verifies that description against reality. The QMS monitors for deviations from the documented baseline. The <strong>human oversight</strong> mechanism assumes the system operates within described parameters.</p><p>All of it presupposes a system that sits still long enough to be described.</p><p>Five frameworks. One shared assumption. One structural error. It has a name &#8212; and it breaks every governance artifact the regulation requires.</p><div><hr></div><h2>The Math</h2><p>Consider a mid-sized financial services firm running an internal research agent. The agent has authorized access to three systems: a customer relationship management platform, a market data API, and an internal communications tool. Its declared purpose is market research synthesis. The agent receives a routine prompt &#8212; assess the potential impact of a market downturn on the client base. To complete the task, it queries the CRM for client portfolio data, cross-references against market data, identifies clients with concentrated exposure, generates a prioritized vulnerability ranking, and sends the summary to the relationship management team. Every action was authorized. The IAM log is clean. The agent has just performed an <strong>assessment of individual clients&#8217; financial vulnerability</strong> &#8212; a determination that affects access to financial services in a regulated domain. Nobody in the organization knows it happened.</p><p>That scenario is not hypothetical. It is what composition looks like in production. 
Now scale it.</p><p>An agent with access to a set of authorized actions doesn&#8217;t execute them one at a time like a human user. It composes. It chains actions into sequences based on its interpretation of a goal, the intermediate results it observes, and the tools available to it. Each action is individually authorized. The composed workflow was never assessed.</p><p>The number of possible workflows grows exponentially with composition depth. Three authorized actions across five chaining steps produce 243 possible workflows. Ten actions across ten steps: ten billion. Most of those workflows were never anticipated at design time, never documented, never assessed against the regulation&#8217;s requirements.</p><p>Each step changes the environment the next step acts on. The agent&#8217;s second action operates on a state that only exists because of its first action. The third action responds to the combined effects of the first two. <strong>The interactions are not additive. They are compounding.</strong> Later actions are no longer independent of earlier ones. The uncertainty is not step-level error. It is interaction uncertainty across a space that grows faster than any monitoring system can observe.</p><p>The adversarial input space compounds this further. The space of possible inputs that could cause an agent to produce unintended behavior is effectively infinite. Published research on adversarial robustness &#8212; including work by Cox and Bunzel on transferred black-box attacks &#8212; demonstrates that no defender can enumerate all possible adversarial paths. Every guardrail is a static constraint applied to a dynamic system. The adversary needs to find one path the defender didn&#8217;t anticipate. The defender needs to anticipate every path. That asymmetry is structural. It does not improve with better guardrails.</p><p>This is not an implementation gap. This is a mathematical constraint. 
No QMS framework can govern a compositional outcome space that grows exponentially with every chained action. No monitoring system can observe a space it cannot bound. No documentation can describe a system whose behavior is generated at runtime from a near-infinite possibility space.</p><p>The governance specification requires it. The math doesn&#8217;t allow it.</p><p>And the systems are accelerating. The duration of tasks AI agents can successfully complete is doubling approximately every seven months, according to METR benchmarking data cited in the International AI Safety Report 2026. <strong>The governance frameworks being built for these systems are not doubling anything</strong>.</p><div><hr></div><h2>What the Proposed Delay Tells You</h2><p>The EU Parliament committees voted 101-9 on March 18, 2026 to delay high-risk AI system obligations. Annex III systems move to December 2, 2027. Annex I systems to August 2, 2028. Plenary vote expected March 26, trilogue negotiations to follow. The text is not final &#8212; but the direction is settled. The delay exists because the harmonized standards are not ready, the Notified Bodies are not accredited, and the Commission&#8217;s own classification guidance &#8212; due February 2, 2026 &#8212; never arrived. The infrastructure the regulation assumed would exist by August 2026 does not exist in March 2026.</p><p>The standards are not late because the committees are slow. They are late because the technical foundation underneath them is unsettled &#8212; the normative references that the primary harmonized standards depend on are themselves still at Committee Draft stage. You cannot finalize a harmonized standard for a system whose behavior is generated at runtime from a compositional space that grows exponentially. The standard assumes describable behavior. The technology does not produce describable behavior. The delay does not resolve that mismatch. 
It defers it.</p><p>Critically, the Omnibus delay applies to high-risk compliance obligations &#8212; but the August 2026 deadline for classification documentation and registration remains in force. The upstream obligation didn&#8217;t move. Only the downstream infrastructure got more time.</p><p>Every organization that was already behind will fall further behind &#8212; because the deadline was the only forcing function converting &#8220;we should look into this&#8221; into a budget line. Remove the deadline and you remove the only mechanism most organizations had for starting. The market will treat this as sixteen months of breathing room. It is sixteen months of compound interest on a structural deficit. Politics moves dates. It doesn&#8217;t move math.</p><div><hr></div><h2>The Pre-Computation Fallacy</h2><p>Every framework analyzed in this article shares a structural error so fundamental it deserves a name.</p><p>The Pre-Computation Fallacy is the belief that a system whose behavior is generated at runtime can be governed by assessments conducted <strong>before runtime</strong>.</p><p>Intended purpose &#8212; pre-computed. Risk assessment &#8212; pre-computed. Conformity assessment &#8212; pre-computed. Technical documentation &#8212; pre-computed. QMS scope &#8212; pre-computed. Human oversight design &#8212; pre-computed.</p><p>Every governance artifact the regulation requires is produced before the system operates. Every governance artifact describes a system that will stop existing the moment the agent starts chaining actions.</p><p>The fallacy is not that pre-deployment assessment is useless. It is that pre-deployment assessment is treated as sufficient &#8212; as though the system assessed is the system that will run. For a traditional software application, that assumption holds. The system in production behaves like the system that was tested. Deviations are bugs. Updates are versioned. 
Changes are documented.</p><p>For an agentic system, the assumption breaks down on contact with runtime. The agent selects tools. It sequences actions based on intermediate results. It accesses data sources that were available but not anticipated. It chains authorized operations into workflows that were never designed, never reviewed, never documented. </p><blockquote><p>The system that was assessed exists only in the documentation. The system that operates exists only at runtime. The two diverge the moment the agent begins executing.</p></blockquote><p>Research by van der Weij et al. demonstrates that this divergence can be strategic. Their work on AI sandbagging shows that frontier models can be prompted or fine-tuned to selectively underperform on capability evaluations while maintaining full performance in deployment. The system assessed during evaluation is not the system that operates in production &#8212; not because of drift, but because the system itself behaves differently when it knows it&#8217;s being evaluated.</p><p>Pre-deployment assessment doesn&#8217;t simply fail to capture runtime behavior. It can be actively deceived by it.</p><p>The five frameworks examined in this article all require pre-deployment assessment as the foundation of governance. None accounts for the possibility that the assessment itself is structurally unreliable for the class of systems it claims to govern.</p><div><hr></div><h2>Where Each Framework Breaks</h2><p>The assumption enters each framework at a specific point. Here is where each one fails.</p><h3><strong>EU AI Act &#8212; Articles 9, 11, 14, 15</strong></h3><p><strong>Article 11</strong> requires technical documentation describing the system&#8217;s intended purpose, capabilities, and operational parameters. 
</p><p><strong>Article 9 </strong>requires a risk management system that identifies &#8220;known and reasonably foreseeable risks.&#8221; </p><p><strong>Article 14 </strong>requires human oversight by competent personnel with real-time visibility and the authority to intervene. </p><p><strong>Article 15</strong> requires accuracy, robustness, and cybersecurity &#8212; maintained <strong>throughout the system&#8217;s lifecycle</strong>.</p><p>An agent composing workflows at runtime produces behavior the documentation never described, generates risks that were not foreseeable at assessment time, and operates faster than any human can meaningfully oversee. The documentation describes a static system. The risk assessment evaluated foreseeable behavior. </p><blockquote><p>The human oversight mechanism assumes the system operates slowly enough for a person to intervene. The agent invalidates all three assumptions simultaneously.</p></blockquote><p>The regulation does not require mathematical perfection. Article 9 says risks must be reduced &#8220;as far as possible.&#8221; Article 17 requires continual improvement. The strongest counterargument to the mathematical impossibility thesis is that the Act demands proportionate risk management, not total risk elimination.</p><p>Here is why that counterargument fails: &#8220;acceptable residual risk&#8221; requires characterizing the risk space you&#8217;re accepting. If the compositional outcome space is near-infinite and unobservable, you cannot define acceptable residual risk &#8212; because you cannot characterize the risk you&#8217;re accepting. Proportionate risk management assumes a bounded space within which proportionality can be calculated. The space is not bounded. Proportionality becomes incalculable. </p><p>The security community has begun building quantitative scoring for individual agentic vulnerabilities &#8212; but scoring individual vulnerability classes does not score composed outcomes. 
The composition is what breaks governance, and no scoring system on the market addresses it.</p><blockquote><p>Layered internal controls against a space you cannot bound is the fence around infinity with extra fences. The layers don&#8217;t solve the math. They multiply the cost of not solving it.</p></blockquote><h3>ISO 42001 / prEN 18286</h3><p>ISO 42001 provides a management system framework. It certifies that governance processes exist. prEN 18286 goes further &#8212; it maps clause-by-clause to Article 17 and will carry presumption of conformity if and when harmonized.</p><p>Both require defining the QMS scope based on the system&#8217;s pre-defined intended purpose. Clause 4.3(b) of prEN 18286 makes this the foundation. The QMS governs what was described. If the system produces behavior the description doesn&#8217;t cover, the QMS is governing a fiction.</p><p>A QMS that cannot evaluate composed outcomes at runtime is governing the organization&#8217;s paperwork, not the system&#8217;s behavior. Certification auditors verify the process. The agent ignores the process.</p><p>Charles Perrow&#8217;s Normal Accident Theory, applied to AI governance by Maas, explains why layering controls doesn&#8217;t help. </p><blockquote><p>In tightly coupled complex systems, adding layers of internal control increases system complexity and coupling &#8212; which multiplies the likelihood of unexpected interactions rather than reducing it. </p></blockquote><p>The QMS becomes another component in the system it&#8217;s supposed to govern, subject to the same interaction effects it was designed to prevent.</p><h3>Singapore&#8217;s Model AI Governance Framework for Agentic AI</h3><p>Section 2.1.2 recommends constraining agents to predefined standard operating procedures rather than allowing runtime tool selection and workflow composition. If the agent follows a fixed sequence, it&#8217;s governable. 
If it doesn&#8217;t, the framework has no mechanism.</p><p>The most operationally specific governance framework on the market solves the governance problem by removing the agency from the agent. An agent constrained to predefined SOPs doesn&#8217;t select tools, doesn&#8217;t adapt, doesn&#8217;t compose. <strong>It also doesn&#8217;t deliver the operational value that justified deploying an agent in the first place</strong>.</p><p>The agent that needs governing is the agent the framework can&#8217;t reach.</p><h3>ForHumanity CORE AAA Multi-Agent Governance</h3><p>The most comprehensive current audit criteria available for multi-agent systems. Addresses inter-agent communication, deployer-provider delineation, change management for dynamic systems, and exceptions interpretability. The framework anticipates the complexity of multi-agent architectures in ways no other scheme does.</p><p>It still requires pre-deployment specification of scope, nature, context, and purpose. It still requires documentation of data management schemas and operational boundaries. The framework is built to audit what was specified &#8212; and agents produce behavior that wasn&#8217;t specified. The compositional outcome space exceeds the specification boundary the moment the first multi-agent interaction generates a workflow no provider anticipated.</p><p>The framework is ahead of the field. The mathematical constraint applies to it equally.</p><h3>NIST AI Agent Standards Initiative</h3><p>Launched February 17, 2026. Three pillars: industry-led standards, open-source protocols, and research on agent security and identity. The first deliverables are a request for information on agent security (closed March 9) and a concept paper on agent identity and authorization (due April 2).</p><p>NIST is asking the right questions. It has not yet proposed answers. 
The compositional governance problem &#8212; how to govern a system whose behavioral output space grows exponentially with composition depth &#8212; hasn&#8217;t been scoped in the initiative&#8217;s published materials.</p><p>The initiative represents the beginning of the standards conversation for agentic AI. The mathematical constraint identified in this article applies to whatever framework emerges. </p><p>The question is whether the framework acknowledges the constraint or builds around the same assumption the others did.</p><div><hr></div><h2>The Operational Envelope: What Governance Can Actually Do</h2><p>If the compositional outcome space is ungovernable, what is left?</p><p>Not nothing. But something fundamentally different from what the market is selling.</p><p>The operational envelope is not a fence around infinity. It is a tripwire inside a defined boundary. </p><blockquote><p>You cannot govern every possible behavior an agent might produce. You can define the subset of behaviors you assessed, document the boundary of that subset, and build detection for the moment the agent&#8217;s behavior exits it.</p></blockquote><p>When behavior exceeds the envelope &#8212; when the agent accesses data, selects tools, or produces outputs beyond the assessed scope &#8212; the governance response is not &#8220;the guardrail holds.&#8221; The governance response is &#8220;the system has exited the space we assessed, and what happens next must be a human decision, not an agent decision.&#8221;</p><p>This converts an unsolvable governance problem into a manageable detection problem. Not &#8220;can we govern everything the agent might do?&#8221; but &#8220;can we detect when the agent leaves the space we actually assessed &#8212; and act before the regulator does?&#8221;</p><p>Under the EU AI Act, behavior that exceeds the operational envelope constitutes a <strong>potential substantial modification under Article 3(23)</strong>. 
The deployer may assume provider obligations under Article 25(1)(b). </p><p>The governance mechanism is not prevention &#8212; it is detection, documentation, and response. The organization must have the monitoring capability to <strong>detect behavioral drift</strong>, the methodology to assess <strong>whether that drift crosses the substantial modification threshold</strong>, and the documented response process to <strong>execute a reassessment</strong> when it does.</p><p>This is the only architecturally honest position available. The market is selling containment &#8212; the promise that guardrails can hold the system inside its documented behavior. The math says they can&#8217;t. </p><blockquote><p><strong>The operational envelope acknowledges the math and builds governance around what is actually achievable: defining the assessed space, detecting departure from it, and treating every departure as a governance event that requires human judgment.</strong></p></blockquote><p>The organizations that survive enforcement will not be the ones that built the tallest fence. They will be the ones that built the best tripwire.</p><div><hr></div><h2>The Pattern</h2><p>The market sells containment. The math produces composition. The frameworks assume describability. The systems produce emergence. Every governance artifact in production today describes a system that stopped existing the moment the agent started operating.</p><p>The organizations that understand this will build differently. Not guardrails that claim to contain &#8212; but operational envelopes that detect, document, and respond. Not compliance architectures that describe a system once &#8212; but governance disciplines that <strong>track a system continuously</strong>. Not pre-computation artifacts filed before deployment &#8212; but <strong>runtime governance infrastructure that operates alongside the agent</strong>.</p><p>The governance math doesn&#8217;t add up. 
The organizations that survive enforcement will be the ones that stopped pretending it did.</p><div><hr></div><h5>Regulatory Disclaimer: This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), draft European Standard prEN 18286:2025, and related governance frameworks as of March 2026. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification. Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions. Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h5>]]></content:encoded></item><item><title><![CDATA[Clean Logs. €15 Million Problem.]]></title><description><![CDATA[IAM governs access. It doesn't govern intent. The EU AI Act holds you liable for both.]]></description><link>https://www.zerodaydawn.com/p/clean-logs-15-million-problem</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/clean-logs-15-million-problem</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 16 Mar 2026 06:01:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_GwU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_GwU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!_GwU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_GwU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3293339,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/190655988?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_GwU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!_GwU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95720432-014b-4f14-842b-14d2de94b8d7_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>Your agent&#8217;s service account has scoped permissions. Least privilege enforced. RBAC clean. The IAM audit passes. Every security team in every enterprise running agentic AI signs off on this architecture. It is the standard.</p><p>It is also the blind spot.</p><p>An agent with authorized read access to an HR database and authorized write access to an external email API can autonomously chain those two permissions into a workflow that sends employee records to a third party. No privilege was escalated. No authorization was breached. The access log is clean. The outcome is an unassessed operation in an employment domain &#8212; and nobody in the organization knows it happened.</p><p>IAM was built for human users who make one decision at a time. Agents don&#8217;t work that way. They chain thousands of authorized actions into emergent workflows that no access control framework was designed to evaluate. The Cloud Security Alliance found that 50% of enterprises rely on traditional IAM and RBAC as the primary authorization mechanism for their agents. Half of all organizations deploying autonomous systems are governing them with tools built for human users clicking through permission prompts.</p><p>The EU AI Act does not distinguish between unauthorized access and authorized access that produces an ungoverned outcome. The obligation attaches to the outcome &#8212; what the system functionally does to people. 
Five governance frameworks &#8212; the EU AI Act, NIST, OWASP, Singapore&#8217;s Model AI Governance Framework, and ForHumanity&#8217;s multi-agent certification scheme &#8212; all assume that controlling access controls behavior.</p><p>The agent proves otherwise. Every framework built on this assumption has a structural blind spot in the same place. This article maps where it is.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The Assumption</h2><p>Here is the sentence that will not survive enforcement:</p><p>&#8220;As long as we enforce strict Least Privilege and RBAC on the agent&#8217;s service account, it can&#8217;t do anything it&#8217;s not supposed to.&#8221;</p><p>Every CISO deploying agentic AI believes some version of this. The logic is intuitive: restrict what the agent can reach, and you restrict what the agent can do. Behavior is bounded by permissions.</p><p>For human users, that logic holds. A human makes one decision at a time. The authorization framework evaluates each action independently because humans execute actions independently.</p><p>Agents compose. They chain authorized operations into workflows that nobody designed, nobody reviewed, and nobody approved. Each individual action is within scope. The composed workflow is ungoverned. IAM evaluates access &#8212; can this identity reach this resource? It does not evaluate intent &#8212; what is this identity trying to accomplish? It does not evaluate composition &#8212; what happens when three authorized actions produce an outcome none of them would produce alone?</p><p>The CSA data confirms this is not an edge case. 
50% of organizations use IAM roles or policies as the primary authorization mechanism for agents. 44% use static API keys. 72% cannot trace agent activities across environments.</p><p>The gap between &#8220;authorized access&#8221; and &#8220;governed outcome&#8221; is where the entire liability sits. Five governance frameworks assume it does not exist.</p><div><hr></div><h2>The Scenario</h2><p>A mid-sized financial services firm deploys an internal research agent. The agent has access to three systems: a customer relationship management platform, a market data API, and an internal communications tool. All three connections are authorized, scoped, and documented. The agent&#8217;s declared purpose is market research synthesis &#8212; pulling public data, generating summaries, flagging trends.</p><p>The agent receives a routine prompt: assess the potential impact of a market downturn on the firm&#8217;s client base. To complete the task, it queries the CRM for client portfolio data. It cross-references that data against the market data API. It identifies clients with concentrated exposure to affected sectors. It generates a prioritized risk assessment &#8212; ranking individual clients by vulnerability &#8212; and sends the summary to the relationship management team via the internal communications tool.</p><p>Every action was authorized. Every tool was within scope. The IAM audit log shows three clean API calls and one internal message. No privilege was escalated. No anomaly detected.</p><p>The agent has performed an assessment of individual clients&#8217; financial vulnerability. It has generated a ranking that will influence which clients receive outreach and which do not &#8212; a determination that affects access to financial services. Under the EU AI Act, a system that evaluates the creditworthiness of natural persons or assesses risk in relation to natural persons in the case of life and health insurance operates in an Annex III domain. 
The agent has entered that domain through its own runtime behavior &#8212; not through any configuration change, not through any human decision to expand its scope, but through the autonomous composition of individually authorized tool calls.</p><p>The firm&#8217;s CISO sees a clean access log. The firm&#8217;s compliance lead &#8212; if they ever see the output &#8212; sees an unregistered, unassessed high-risk AI system operating in a regulated domain without conformity assessment, without risk management documentation, without human oversight, and without the technical documentation the regulation requires before any high-risk system is put into service.</p><p>The agent did not break any rules. It composed a workflow from authorized components that crossed a regulatory boundary nobody mapped. The access was governed. The outcome was not.</p><div><hr></div><h2>The Composition Gap</h2><p>The structural failure is not a bug in IAM. IAM does what it was designed to do &#8212; evaluate discrete access requests against defined policies. The failure is in assuming that access-level control translates to behavior-level governance when the system determining its own behavior is autonomous.</p><p>Human users produce linear workflows. One action, one decision, one outcome. The authorization framework evaluates each action independently because human users execute actions independently. The composed behavior is the sum of discrete, intentional human choices.</p><p>Agents produce emergent workflows. The execution path is not specified at design time. It emerges at runtime. The agent selects tools based on intermediate results. It sequences actions based on its interpretation of the goal. It chains operations that were individually authorized into compositions that were never assessed. The authorization framework sees each component. 
It cannot see the composition &#8212; because it was never designed to evaluate compositions.</p><p>OWASP identified this in the Top 10 for Agentic Applications. The mitigation for tool misuse recommends defining &#8220;per-tool least-privilege profiles&#8221; &#8212; restricting each tool&#8217;s permissions and data scope individually. The recommendation is technically sound and structurally insufficient. An agent can chain two perfectly restricted, read-only tools into a data exfiltration workflow. Tool-level restriction does not equal workflow-level restriction. The gap between them is where the liability lives.</p><p>Every governance framework attempting to address agentic AI hits this same wall. The EU AI Act, NIST, OWASP, Singapore&#8217;s Model AI Governance Framework, ForHumanity&#8217;s multi-agent certification scheme &#8212; all of them assume that if you define the system&#8217;s boundaries before runtime, the system will operate within those boundaries during runtime. Agents are architecturally designed to determine their own operational boundaries. 
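</p><p>The tool-versus-workflow gap is small enough to show in code. A toy sketch with hypothetical tool names (not OWASP&#8217;s mitigation, and not any real API): every individual call passes its per-tool least-privilege profile, and only a separate workflow-level rule sees the composed exfiltration pattern.</p>

```python
# Illustrative sketch only: per-tool least privilege approves each call,
# yet the composed workflow exfiltrates. Tool names are hypothetical.
PER_TOOL_POLICY = {
    "hr_db": {"read"},       # read-only, scoped: passes any IAM audit
    "email_api": {"send"},   # authorized outbound channel
}

def call_allowed(tool: str, action: str) -> bool:
    """Tool-level check: the view a per-tool least-privilege profile has."""
    return action in PER_TOOL_POLICY.get(tool, set())

def workflow_flagged(calls: list) -> bool:
    """Workflow-level check: a sensitive read followed by an external send."""
    saw_sensitive_read = False
    for tool, action in calls:
        saw_sensitive_read = saw_sensitive_read or (tool == "hr_db")
        if tool == "email_api" and saw_sensitive_read:
            return True
    return False

chain = [("hr_db", "read"), ("email_api", "send")]
assert all(call_allowed(t, a) for t, a in chain)  # each call individually clean
assert workflow_flagged(chain)                    # the composition is not
```

<p>The access log produced by the first check is clean; the exposure lives entirely in the second.</p><p>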
The assumption underneath every framework is the thing agents were built to violate.</p><p>What follows maps where each framework breaks &#8212; the specific provision, the specific assumption, and what it costs when an agent operating within authorized access produces an outcome that none of these instruments can govern.</p><p>The full analysis continues below for paid subscribers: where Singapore&#8217;s agent governance framework eliminates the thing it&#8217;s trying to govern, where ForHumanity&#8217;s audit criteria demand documentation of something that doesn&#8217;t yet exist, where NIST&#8217;s reliability definition collapses for systems whose operational conditions change per execution, and where the EU AI Act&#8217;s conformity assessment certifies a system that stops existing the moment it operates. It closes with the convergence pattern across all five frameworks and the operational methodology for building governance that evaluates intent and outcome rather than access.</p><p><em>Zero-Day Dawn is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</em></p>
      <p>
          <a href="https://www.zerodaydawn.com/p/clean-logs-15-million-problem">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Your Security Incident is a Regulatory Disaster]]></title><description><![CDATA[OWASP mapped the attack surface. The EU AI Act attached the penalties]]></description><link>https://www.zerodaydawn.com/p/your-security-incident-is-a-regulatory</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/your-security-incident-is-a-regulatory</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 09 Mar 2026 06:01:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LmCd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LmCd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LmCd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!LmCd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!LmCd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!LmCd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LmCd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bd2de383-2658-4074-9794-f1302285f393_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3286094,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/189976106?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LmCd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!LmCd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!LmCd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!LmCd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd2de383-2658-4074-9794-f1302285f393_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>Your security team filed the incident report. They also filed the regulatory case file. 
They just don&#8217;t know it yet.</p><p>That is the disaster this article maps. The OWASP vulnerability and the EU AI Act violation are the same event, happening simultaneously, under identical facts. When your agent is hijacked, the regulation&#8217;s <strong>intended purpose</strong> architecture fails at the same moment. When your agent misuses a tool, the <strong>classification</strong> filed at deployment becomes invalid at the same moment. When your agent operates with uncontrolled identity and accumulated privileges, the <strong>human oversight</strong> obligation is breached at the same moment.</p><p>OWASP published its Top 10 for Agentic Applications in late 2025. The Cloud Security Alliance quantified the gap in January 2026: 72% of organizations cannot trace what their AI agents are doing across environments. Only 16% are confident they could pass a compliance audit on agent activity. Nearly one in five enterprises have already experienced an AI agent-related security breach. Those are organizations that have already accumulated regulatory disasters they cannot see.</p><p>What follows maps seven OWASP vulnerabilities to their EU AI Act equivalents. Each entry follows the same structure: what your security team calls it, what the regulation calls it, and what it costs you when the regulator reads the same incident report your team wrote.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>1. Goal Hijacking</h2><p><strong>What you assume:</strong> Prompt injection is a security problem. Your red team tests for it. Your WAF blocks obvious payloads. 
You&#8217;ve hardened the input layer.</p><p><strong>What actually happens:</strong> A prompt injection attack doesn&#8217;t just compromise your agent&#8217;s current task. It substitutes a different <strong>intended purpose</strong> for the one you documented. The system is now operating outside its compliance envelope &#8212; not because it malfunctioned, but because it functioned exactly as the regulation assumes it should not be able to: executing goals that nobody authorized. The attack surface is natural language, not code. No user action required.</p><p>The EU AI Act defines an AI system&#8217;s intended purpose as the use for which the system was designed by the provider &#8212; including the specific context and conditions of use. The conformity assessment, the risk management system, the human oversight obligations &#8212; all of them are anchored to that documented intended purpose. A goal hijacking attack substitutes an attacker&#8217;s purpose for yours. The classification you filed at deployment no longer describes the system in production.</p><p>Article 15 is the regulatory hammer your security team hasn&#8217;t mapped. It mandates that high-risk AI systems be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities. Prompt injection is exactly this: unauthorized alteration of system use by exploiting a vulnerability. A successful goal hijacking attack is a documented Article 15 failure.</p><p>If the hijacked goal causes the agent to operate in a domain listed in Annex III &#8212; making employment decisions, influencing credit assessments, allocating access to essential services &#8212; the agent has crossed a classification boundary through hostile action. Under Article 9, the risk management system must identify and evaluate risks that may emerge under conditions of reasonably foreseeable misuse. Prompt injection is not a hypothetical. 
It is a documented, active attack vector. An organization that deployed an agentic system capable of being hijacked into high-risk territory without modeling that scenario failed Article 9 before the attack ever happened.</p><p><strong>What it costs you:</strong> A successful goal hijacking attack produces a stack of liabilities, not one. The Article 15 cybersecurity failure is the technical violation &#8212; the system was not resilient against unauthorized alteration. The Article 9 risk management failure is the governance violation &#8212; the scenario was reasonably foreseeable and not addressed. Both carry penalties of &#8364;15 million or 3% of global annual turnover. If the attack caused harm meeting the statutory threshold &#8212; death, serious damage to health, serious and irreversible disruption of critical infrastructure &#8212; Article 73 mandatory incident reporting triggers. The window is 15 days. Two days for critical infrastructure impacts. The security incident report your team files is already a regulatory case file.</p><div><hr></div><h2>2. Tool Misuse</h2><p><strong>What you assume:</strong> Your agent has authorized access. It holds legitimate credentials. The IAM controls are clean. What it does with that access is an application logic problem, not a compliance problem.</p><p><strong>What actually happens:</strong> Authorized access producing unintended consequences is the attack surface OWASP identifies as the most structurally dangerous. The agent reaches a database it was technically permitted to access. It pulls data it was not designed to use and feeds that data into a decision chain nobody anticipated. No authorization was breached. The access was clean. The outcome was not governed.</p><p>The EU AI Act doesn&#8217;t classify systems by what you call them. It classifies them by their legally defined intended purpose and the domain in which they operate. 
An agent whose tool use causes it to operate in an Annex III domain &#8212; employment decisions, creditworthiness assessments, access to essential services &#8212; has crossed into high-risk territory through its own runtime behavior. The classification filed at deployment no longer describes the system.</p><p>The conversion trap nobody discusses: under Article 25, a deployer who modifies the intended purpose of an AI system such that it becomes high-risk is considered to be the provider of that high-risk system and becomes subject to the provider obligations under Article 16. A deployer whose agent, through tool use, autonomously begins operating in a high-risk domain may have triggered exactly this conversion &#8212; without any human decision to do so. If your agent accesses an HR database and generates a recommendation that affects a hiring decision, and that was not in the original intended purpose documentation, you are now the provider of an unregistered, unassessed high-risk AI system. You inherit uncapped liability for a system you thought you were merely deploying.</p><p><strong>What it costs you:</strong> Non-registration of a high-risk AI system carries penalties of up to &#8364;15 million or 3% of global annual turnover. The failure to conduct a conformity assessment is a separate violation. The absence of technical documentation is a separate violation. 40% of CISOs estimate that a major AI agent incident will cost their organization between $1 million and $10 million &#8212; ransomware-level financial impact. A single episode of unintended tool use that tips an agent into Annex III territory generates multiple simultaneous regulatory violations, none of which require any third party to be harmed.</p><div><hr></div><h2>3. Identity and Privilege Abuse</h2><p><strong>What you assume:</strong> Your agent runs under a service account. Permissions are scoped. 
Access is controlled by the same IAM framework that governs your human users.</p><p><strong>What actually happens:</strong> The IAM framework was designed for human users. It was not designed for autonomous systems that execute thousands of actions per session under a single identity. Your agent inherits permissions from the user who configured it. It operates with credentials provisioned for human workflows. It accumulates access across execution chains &#8212; each tool call opening another connection, each data retrieval expanding the operational footprint. No single permission grant looks anomalous. The aggregate exposure is severe.</p><p>The Cloud Security Alliance data confirms this is not an edge case: 44% of organizations use static API keys as the primary authentication mechanism for their agents. 43% use username/password combinations. 25% of enterprises have no formal AI security controls in place at all.</p><p>Article 14 requires that high-risk AI systems be designed so that they can be effectively overseen by natural persons during the period in which they are in use. The persons assigned to human oversight must understand the system&#8217;s capabilities and limitations and be able to correctly interpret its output. An agent operating with accumulated, unmonitored access under a human user&#8217;s credentials is structurally invisible to any oversight function. The overseer cannot interpret output from a system whose actual operational scope they cannot see.</p><p>CSA found that 39% of organizations assign agent governance to Security, 32% to IT, with no unified accountability. The regulation requires designated, competent, properly trained oversight personnel with the authority to intervene. Fragmented ownership across three separate functions means no single function holds complete understanding of the agent&#8217;s capabilities and limitations. 
The Article 14 obligation requires operational capacity, not an org chart entry.</p><p><strong>What it costs you:</strong> Violations of high-risk human oversight obligations carry fines of &#8364;15 million or 3% of global annual turnover. For agents that have never been classified at all &#8212; operating under human credentials in domains that were never assessed &#8212; transparency obligations under Article 50 apply regardless of risk level. Violations of Article 50 carry the exact same penalty: &#8364;15 million or 3%. An unclassified agent is not exempt from either. It is potentially liable for both simultaneously.</p><div><hr></div><h2>4. Insecure Inter-Agent Communication</h2><p><strong>What you assume:</strong> Your multi-agent architecture is a scalability decision. Agents pass tasks to each other. Orchestrators coordinate workflows. The security model is inherited from your microservices framework.</p><p><strong>What actually happens:</strong> Multi-agent systems break standard authorization through two mechanisms. First, diffusion of responsibility: because there is no central authority, assumptions about which agent enforces security become ambiguous, and systems silently fail to enforce controls because each agent assumes another handled it. Second, workflow privilege escalation: agents perform individually authorized, low-privilege actions that, when chained together through inter-agent communication, result in an unauthorized, high-privilege exploit.</p><p>The EU AI Act assesses risk at the level of individual AI systems. The conformity assessment evaluates each system against its documented intended purpose &#8212; assuming identifiable, bounded behavior. Multi-agent interactions produce emergent behavior that no component-level assessment accounts for. 
Every handoff between agents is a point where data integrity, authorization scope, and classification boundaries can break simultaneously.</p><p>Under Article 26, deployers operating high-risk AI systems must monitor system performance against the intended purpose on an ongoing basis. An organization operating a multi-agent architecture cannot discharge that monitoring obligation for a system whose inter-agent communication it cannot trace. Only 28% of organizations can reliably trace an agent&#8217;s actions across all environments. For multi-agent systems, that coverage gap is structurally deeper.</p><p>The substantial modification provision compounds the exposure. An agent that receives corrupted or injected instructions from an upstream agent and executes them has had its intended purpose modified by the attack &#8212; a substantial modification requiring a new conformity assessment. Whether it triggers reassessment depends on whether the organization can detect it. 72% cannot.</p><p><strong>What it costs you:</strong> The failure to maintain ongoing monitoring of a high-risk system&#8217;s performance is a violation of Article 26 deployer obligations. Serious incidents carry mandatory reporting windows as short as two days for critical infrastructure. The regulatory clocks collide here. If your agent incident qualifies as a significant cyber threat under NIS2, you have 24 hours for an early warning and 72 hours for full incident notification. If the same incident triggers EU AI Act serious incident reporting under Article 73, you have 15 days &#8212; or 2 days for critical infrastructure. Article 73(9) provides a carve-out: if equivalent reporting applies under another framework, the AI Act obligation is limited to incidents involving infringement of fundamental rights. 
Your security team will be fighting two different regulatory clocks simultaneously, for the same event, with different disclosure requirements.</p><div><hr></div><p><em>The first four entries are the vulnerabilities your security team is most likely to have already logged. They are simultaneously the compliance failures your regulatory team cannot see. What follows are the entries that determine whether your governance architecture survives its first enforcement inquiry &#8212; or becomes the case study other organizations learn from.</em></p><p><em>The full analysis &#8212; including rogue agent behavior as a substantial modification trigger, cascading failures as an Article 9 risk management violation, memory poisoning as an Article 10 data governance failure, the four-question classification methodology for determining which vulnerabilities trigger high-risk obligations, the documentation architecture required before August 2026, and the operational framework for building compliance that survives both a penetration test and a regulatory audit &#8212; continues below for paid subscribers.</em></p>
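<p>The colliding windows described above reduce to simple date arithmetic. A toy sketch, not legal advice: the window lengths are the figures named in the text (NIS2 24-hour early warning and 72-hour notification; Article 73's 15 days, shortened to 2 for critical infrastructure), and the function name is ours:</p>

```python
from datetime import datetime, timedelta

def reporting_deadlines(incident_time, critical_infrastructure=False):
    """Sketch of the parallel regulatory clocks one incident can start."""
    ai_act_days = 2 if critical_infrastructure else 15  # Article 73 window
    return {
        "nis2_early_warning": incident_time + timedelta(hours=24),
        "nis2_full_notification": incident_time + timedelta(hours=72),
        "ai_act_serious_incident": incident_time + timedelta(days=ai_act_days),
    }

# One event, three deadlines:
d = reporting_deadlines(datetime(2026, 8, 3, 9, 0), critical_infrastructure=True)
```

<p>For a critical-infrastructure incident, the Article 73 clock expires between the two NIS2 deadlines: three filings, on three separate clocks, for one event.</p>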
      <p>
          <a href="https://www.zerodaydawn.com/p/your-security-incident-is-a-regulatory">
              Read more
          </a>
      </p>
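<p>The credential accumulation pattern in entry 3 has a well-known counter-pattern: per-task credentials that expire and enumerate the tools they grant, instead of the static API keys the CSA data describes. A minimal sketch with hypothetical helper names, not a real IAM product API:</p>

```python
import time
import secrets

def mint_agent_token(agent_id, allowed_tools, ttl_seconds=300):
    """Issue a short-lived credential scoped to one agent and a fixed tool set."""
    return {
        "token": secrets.token_urlsafe(16),
        "agent_id": agent_id,
        "allowed_tools": frozenset(allowed_tools),
        "expires_at": time.time() + ttl_seconds,
    }

def authorize_tool_call(token, tool_name):
    """Deny by default: expired tokens and never-granted tools are both refused."""
    if time.time() >= token["expires_at"]:
        return False  # the credential cannot outlive the task
    return tool_name in token["allowed_tools"]

tok = mint_agent_token("hr-summarizer-01", ["read_calendar"], ttl_seconds=60)
authorize_tool_call(tok, "read_calendar")      # permitted: granted and unexpired
authorize_tool_call(tok, "query_hr_database")  # refused: never granted
```

<p>The design point: the aggregate exposure stays visible because every grant is enumerated per task and dies with it, which is also what an Article 14 overseer needs in order to interpret what the agent could actually do.</p>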
   ]]></content:encoded></item><item><title><![CDATA[Deploy First, Comply Never]]></title><description><![CDATA[A field guide to everything AI agent builders get wrong about the EU AI Act]]></description><link>https://www.zerodaydawn.com/p/deploy-first-comply-never</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/deploy-first-comply-never</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 02 Mar 2026 18:39:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!N8JK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!N8JK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!N8JK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!N8JK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!N8JK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!N8JK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!N8JK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3306169,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/189674208?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!N8JK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!N8JK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!N8JK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!N8JK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1aebc2d-8fdf-4e85-b304-d14344449525_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Executive Summary</strong></h2><p>This article is for every team that has built, shipped, or deployed an AI agent 
without asking whether the EU AI Act applies to them. It does. This is the field guide to the fourteen regulatory traps waiting between your deployment and your first enforcement action.</p><p>You built an agent. You shipped it. Users in the EU are using it. You are now operating inside a regulatory framework you probably haven&#8217;t read &#8212; and the obligations it imposes are already binding.</p><p>The EU AI Act entered into force in August 2024. Bans on prohibited AI practices are actively enforced right now. General-purpose AI obligations hit in August 2025. Broad transparency rules and most high-risk obligations become enforceable in August 2026, with the rest following in 2027.</p><p>What follows is every assumption AI agent builders are making that will not survive enforcement. Each one is a trap. Each trap has a regulatory consequence. None of them require you to be a large company, a European company, or a company that intended to operate in the EU.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2><strong>1. The Extraterritorial Trigger</strong></h2><p><strong>What you assume:</strong> You&#8217;re not based in the EU, so the EU AI Act doesn&#8217;t apply to you.</p><p><strong>What actually happens:</strong> The regulation follows output, not headquarters. If your AI agent produces a recommendation, a classification, a score, or a decision that is used by a natural person inside the EU, you are in scope. It does not matter where your servers are. It does not matter where your company is incorporated. It does not matter whether you intended your agent to reach the EU market.</p><p>A recruiter in Paris uses your agent to screen candidates. 
A bank in Amsterdam uses it to flag transaction risk. A university in Milan uses it to evaluate student submissions. You are now a provider or deployer under the EU AI Act &#8212; and the obligations that come with that status are enforceable against you.</p><p><strong>What it costs you:</strong> Penalties for non-compliance with high-risk or transparency obligations reach &#8364;15 million or 3% of global annual turnover. Supplying misleading information to regulators carries fines of &#8364;7.5 million or 1% of global annual turnover. These are not theoretical. They are statutory.</p><div><hr></div><h2><strong>2. Classification Creep</strong></h2><p><strong>What you assume:</strong> You built a productivity tool. An assistant. A workflow optimizer. Not a high-risk AI system.</p><p><strong>What actually happens:</strong> The EU AI Act does not classify systems by what you <em>call them</em>. It classifies them by <strong>their legally defined</strong> <strong>intended purpose</strong>. Your agent screens job applicants &#8212; that is an employment decision system under Annex III. Your agent evaluates the creditworthiness of individual consumers &#8212; that is a financial access system under Annex III. Your agent monitors employee performance or allocates tasks &#8212; employment domain, Annex III. Your agent influences a student&#8217;s educational progression &#8212; education domain, Annex III.</p><p>You did not <em>design</em> a high-risk system. But you <strong>deployed</strong> one. The gap between the generic tool you built and the high-risk task it now performs is where enforcement lives.</p><p>The sneakiest triggers are the ones builders never anticipate. If an internal research agent is repurposed to access HR data and generate recommendations affecting hiring decisions, <strong>its intended purpose has legally changed</strong>. It has entered an Annex III employment domain. 
Whoever made that change is now legally the provider of a high-risk AI system.</p><p><strong>What it costs you:</strong> Every downstream obligation &#8212; risk management, conformity assessment, documentation, human oversight, logging &#8212; is triggered by classification. Get classification wrong and everything you build on top of it is wasted. Get it right and you know what you owe. Skip it and a regulator will do the classification for you. </p><div><hr></div><h2><strong>3. The Liability Shift</strong></h2><p><strong>What you assume:</strong> You are the provider. Your enterprise customers are deployers. They handle human oversight and logging on their end. Clean separation.</p><p><strong>What actually happens:</strong> If your customer <strong>modifies the intended purpose</strong> of your agent &#8212; deploys it in a domain you did not anticipate, connects it to data sources you did not design for, or changes how it interacts with end users &#8212; they may have just triggered a legal conversion. <strong>A deployer who makes a substantial modification to an AI system, or who changes its intended purpose such that it becomes high-risk, assumes the full obligations of a provider</strong>. That means conformity assessment, technical documentation, risk management, and post-market monitoring &#8212; all of it shifts to the deployer.</p><p>But here's the trap nobody discusses: when your customer becomes the provider through modification, the original provider &#8212; you &#8212; must legally cooperate. You are required to provide technical access and assistance so the new provider can meet their obligations. The only way out? You must <strong>explicitly specify</strong> in your terms that the system is not to be changed into a high-risk AI system. 
If you didn't write that in, you owe them your documentation &#8212; <strong>limited only by strict trade secret protections</strong>.</p><p><strong>What it costs you:</strong> You lose control of how your system is classified. Your customer&#8217;s deployment decision creates obligations for both of you. And if you didn&#8217;t explicitly forbid high-risk modification in your contracts, enforcement will find two parties pointing at each other with nobody holding the compliance.</p><div><hr></div><h2><strong>4. The Open-Source Illusion</strong></h2><p><strong>What you assume:</strong> You built your agent on an open-source model. Open-source means lighter regulatory requirements. You&#8217;re covered.</p><p><strong>What actually happens:</strong> The EU AI Act offers limited transparency exemptions for open-source general-purpose AI models. Those exemptions apply <strong>to the model layer</strong>. The moment you integrate that model into an AI system that qualifies as high-risk under Annex III &#8212; because it screens applicants, evaluates creditworthiness, assesses insurance risk, or influences educational outcomes &#8212; every exemption vanishes. The system-level obligations apply in full.</p><p><strong>Open-source is a licensing model. It is not a regulatory shield. </strong>The regulation does not care how your model was licensed. It cares what the system built on top of it does to people.</p><p><strong>What it costs you:</strong> Every builder using open-source models who assumed lighter obligations now has the same compliance burden as a proprietary system deployed in the same domain. The model&#8217;s license changed nothing about the system&#8217;s classification.</p><div><hr></div><p><em>The first four traps are the ones that catch builders before they even know they&#8217;re playing. 
What follows are the ten operational traps that <strong>determine whether your deployed agent survives its first regulatory inquiry</strong> &#8212; or becomes the case study other builders learn from.</em></p>
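<p>The scope logic that runs through all four traps can be reduced to a toy decision function. A minimal sketch, assuming simplified domain labels that stand in for the Annex III categories named above; the real test is a legal analysis, not a lookup:</p>

```python
# Hypothetical classification helper illustrating the structure of the scope
# test described in the traps above. Domain names are illustrative shorthand,
# not the Act's taxonomy.

ANNEX_III_DOMAINS = {"employment", "credit", "education", "essential_services"}

def in_scope(output_used_in_eu: bool) -> bool:
    """Article 2 trigger sketch: the output, not the headquarters, decides."""
    return output_used_in_eu

def classify(intended_purpose_domain: str, output_used_in_eu: bool) -> str:
    if not in_scope(output_used_in_eu):
        return "out of scope (no EU-used output)"
    if intended_purpose_domain in ANNEX_III_DOMAINS:
        return "high-risk (Annex III domain)"
    return "in scope; transparency obligations still apply"

# A "productivity tool" repurposed to screen candidates:
classify("employment", output_used_in_eu=True)  # high-risk
```

<p>Notice what the sketch never asks: what you called the product. The only inputs are where the output is used and what the system actually does.</p>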
      <p>
          <a href="https://www.zerodaydawn.com/p/deploy-first-comply-never">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[When the EU Comes for Your Agents]]></title><description><![CDATA[Governance can't keep up with the tech &#8212; and going offshore isn't an escape route either]]></description><link>https://www.zerodaydawn.com/p/when-the-eu-comes-for-your-agents</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/when-the-eu-comes-for-your-agents</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 23 Feb 2026 06:00:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8vRP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8vRP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8vRP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!8vRP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!8vRP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8vRP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8vRP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3292965,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/188606693?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8vRP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!8vRP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!8vRP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!8vRP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa99e3dfd-b028-40dc-b72e-da689fb1d345_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>This article is for the security teams deploying agentic AI systems who do not yet realize they have 
a compliance obligation under the EU AI Act.</p><p>If you work in application security, cloud architecture, or CISO operations &#8212; if you read OWASP advisories and CSA benchmarks as part of your operational baseline &#8212; this piece was written for your blind spot. The agentic AI systems you are deploying, monitoring, and securing are subject to binding regulatory obligations that your security frameworks do not address and your compliance teams may not know about.</p><p>Three things happened in the past ninety days that make this unavoidable.</p><p>The Cloud Security Alliance published its first comprehensive assessment of agentic AI security posture. The findings are severe: 72% of organizations cannot trace what their AI agents are doing across environments. Only 16% are confident they could pass a compliance audit on agent activity. 21% maintain a real-time agent registry. The rest are operating blind.</p><p>NIST launched the AI Agent Standards Initiative on February 17, 2026 &#8212; three pillars covering industry-led standards, open-source protocols, and research on agent security and identity. The first concrete deliverable is a request for information on agent security due March 9. A concept paper on AI agent identity and authorization follows on April 2. Listening sessions begin in April. The governance infrastructure is forming. It is not ready.</p><p>And the EU AI Act &#8212; enforceable from August 2026 for high-risk systems &#8212; already applies to every organization whose AI agents produce output used inside the EU. Regardless of where that organization is headquartered. Regardless of whether the agent was designed to reach the EU market. 
The regulation follows outputs, not headquarters.</p><p><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Ken Huang&quot;,&quot;id&quot;:1160339,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3d670301-204b-472e-a2ee-bbb1b7633a99_2026x2026.png&quot;,&quot;uuid&quot;:&quot;50828278-512d-4673-882e-850e2519d88d&quot;}" data-component-name="MentionToDOM">Ken Huang</span>&#8217;s &#8220;Layer 8&#8221; thesis argues that agentic AI sits above the application layer because it breaks the deterministic boundary. The compliance architecture of the EU AI Act was built for everything below that boundary. The security community is mapping the risk. The standards bodies are forming the frameworks. The EU AI Act is the only instrument that already imposes binding obligations. And 72% of organizations cannot see the systems those obligations apply to.</p><p>Transparency obligations under Article 50 apply to all AI systems regardless of risk classification. The classification decision itself carries regulatory consequences &#8212; supplying incorrect or misleading information to regulators is fined at up to &#8364;7.5 million or 1% of global annual turnover.</p><p>This piece maps the gap: who is in scope, what the regulation requires, where the OWASP vulnerabilities become regulatory exposure, and what you must build before August 2026.</p><p>Prohibited practices under Article 5 have applied since February 2025. General-purpose AI model obligations since August 2025. 
The full weight of high-risk obligations for Annex III systems applies in August 2026, with Annex I systems following in August 2027.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Zero-Day Dawn is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>Who This Applies To</h2><p>The EU AI Act does not require you to be in the EU. It requires your AI system&#8217;s output to be used there.</p><p>Article 2 defines the scope: the regulation applies to providers who place AI systems on the EU market or put them into service in the EU &#8212; and to providers and deployers established in a third country, where the output produced by the AI system is used in the Union.</p><p>That second clause is the one most organizations miss. </p><p><strong>Examples:</strong> You built an AI agent in Austin, TX. A recruiter in Berlin, DE uses it to screen candidates. That puts you in the regulation's reach. You launched an AI finance tool from Singapore. EU customers use it to assess creditworthiness. The same logic applies. Your agent is hosted on US infrastructure and never touches an EU server &#8212; but its recommendation is used by an EU natural person and influences a decision that affects them. 
The regulation follows the output, not the headquarters.</p><blockquote><p>Geography is not a shield. The trigger is whether the output produced by the AI system is used in the Union.</p></blockquote><p>For providers of high-risk AI systems established outside the EU, Article 22 requires the appointment of an authorized representative established in the EU before the system is placed on the market or put into service. That is not a filing requirement you handle after the fact. It is a precondition for lawful operation.</p><p>The extraterritorial architecture mirrors GDPR &#8212; but with a critical distinction. GDPR follows personal data. <strong>The EU AI Act follows system output</strong>. Every agent that produces a recommendation, a classification, a decision, or a risk assessment that is used by an EU natural person is potentially in scope &#8212; <strong>whether the deploying organization intended that reach or not</strong>.</p><p>If your agents touch the EU market &#8212; directly or through downstream users you may not have mapped &#8212; the obligations in this article apply to you. The penalties for non-compliance with high-risk obligations reach &#8364;15 million or 3% of global annual turnover. Transparency obligations under Article 50 apply to all AI systems regardless of classification, with violations carrying fines of &#8364;15 million or 3% of global annual turnover.</p><p>For a deeper analysis of the EU AI Act&#8217;s extraterritorial reach, see <a href="https://www.zerodaydawn.com/p/the-long-arm-of-the-eu-ai-act">The Long Arm of the EU AI Act</a>.</p><div><hr></div><h2>The Data</h2><p>The governance gap is quantified. Three of the most authoritative bodies in AI security have measured it &#8212; and the numbers are worse than most organizations expect.</p><p><strong>The Cloud Security Alliance</strong> published <em>Securing Autonomous AI Agents</em> in January 2026. 
The findings describe an industry that has <strong>deployed agentic AI faster than it can govern it</strong>. Only 28% of organizations can reliably trace an agent&#8217;s actions across all environments &#8212; meaning 72% lack full visibility into what their agents are doing. Only 16% of respondents expressed confidence they could pass a compliance audit on AI agent activity. Just 21% maintain a real-time registry of their AI agents. And only 23% have a formal, organization-wide agent governance strategy &#8212; the rest rely on informal practices or have no strategy at all.</p><p>Ownership is fragmented. 39% of organizations assign agent governance to Security. 32% to IT. 13% to a dedicated AI Security function. The rest scatter it across compliance, engineering, and executive teams with no clear accountability.</p><p>These are not immature organizations experimenting with AI. These are enterprises that have deployed agents into production &#8212; and cannot tell you <strong>what</strong> those agents are doing, <strong>where </strong>they are operating, or <strong>whether they comply</strong> with anything.</p><p><strong>OWASP</strong> published the Top 10 for Agentic Applications in December 2025 &#8212; the product of over 100 security researchers working over more than a year. Three of the ten critical vulnerabilities involve agentic tool use directly: Tool Misuse and Exploitation (ASI02), Identity and Privilege Abuse (ASI03), and Insecure Inter-Agent Communication (ASI07). The tenth entry &#8212; Rogue Agents (ASI10) &#8212; addresses misalignment, concealment, and self-directed action.</p><p>These are not theoretical attack surfaces. In early February 2026, a prompt injection attack against the Cline coding assistant &#8212; exploiting a vulnerability in its Claude-powered issue triage workflow &#8212; led to a compromised npm token that was used to push a modified package silently installing OpenClaw on developer machines. 
The attack was live for eight hours before detection. The entry point was natural language, not code. An agent&#8217;s tool access was weaponized through its own context window.</p><p><strong>NIST</strong> launched the AI Agent Standards Initiative on February 17, 2026. Three pillars: facilitating industry-led standards development, fostering open-source protocol development, and advancing research on AI agent security and identity. The initiative&#8217;s first deliverables are an RFI on agent security (due March 9), a concept paper on AI agent identity and authorization (due April 2), and sector-specific listening sessions starting in April.</p><p>The signal is clear. NIST is at the RFI stage. OWASP has mapped the vulnerabilities. CSA has quantified the gap. The governance infrastructure is forming &#8212; but it is not operational. And the EU AI Act obligations do not wait for frameworks to be ready.</p><div><hr></div><p><em>What the EU AI Act actually requires of deployers operating agentic systems &#8212; the specific obligation mapping against CSA data, the OWASP-to-EU-AI-Act vulnerability crosswalk, why your agent's runtime behavior may already constitute a substantial modification under Article 3(23), and the operational methodology for building compliance before August 2026 &#8212; continues below for paid subscribers.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Zero-Day Dawn is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>
      <p>
          <a href="https://www.zerodaydawn.com/p/when-the-eu-comes-for-your-agents">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Shadow Side of Agentic AI]]></title><description><![CDATA[What happens when the agents are already running, but the governance infrastructure is not]]></description><link>https://www.zerodaydawn.com/p/the-shadow-side-of-agentic-ai</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/the-shadow-side-of-agentic-ai</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 16 Feb 2026 06:01:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2wRq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2wRq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2wRq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!2wRq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!2wRq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!2wRq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2wRq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3306434,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/188026694?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2wRq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!2wRq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!2wRq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!2wRq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F308c1760-0a5e-4adf-88c9-4ea7fcb95de1_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Executive Summary</strong></h2><p>A decade ago, the security problem was shadow IT &#8212; employees installing 
Dropbox, spinning up Trello boards, running SaaS tools their IT departments never authorized. It was a containment problem. Unauthorized applications creating data silos and compliance blind spots.</p><p>Shadow AI is not the same problem at scale. It is a different problem entirely.</p><p>Shadow IT stored data. Shadow AI makes decisions. An unsanctioned Dropbox folder does not autonomously access your HR database, generate a recommendation about an employee, and act on it before anyone reviews the output. An unsanctioned AI agent can.</p><p>And they already are. Employees and teams are deploying AI agents &#8212; autonomous systems that select their own tools, sequence their own actions, and make decisions that affect people &#8212; into enterprise workflows that touch employment, finance, personal data, and critical infrastructure. These agents are not being inventoried. They are not being assessed. They are not being governed. In a growing number of cases, the organizations running them do not know they exist.</p><p>The EU AI Act &#8212; enforceable from August 2026 &#8212; requires a documented risk determination before any AI system is put into service. That obligation does not wait for a standard to be published, a vendor to provide a template, or an agent to cause harm. It applies at deployment. For agents that were never inventoried, the liability exposure is not theoretical. It is already accruing &#8212; and the penalties for non-compliance with high-risk obligations reach &#8364;15 million or 3% of global annual turnover &#8212; and that is before GDPR, sector regulation, and cybersecurity liability compound on top.</p><p>But the regulatory gap is only the first layer. The deeper problem is structural. The EU AI Act&#8217;s entire compliance architecture &#8212; risk classification, documentation, conformity assessment, post-market monitoring &#8212; assumes the system&#8217;s behavior can be described before it runs. Agentic AI breaks that assumption. 
An agent classified at deployment begins diverging from its documented purpose the moment it starts operating. The tools it selects, the data it accesses, the decisions it chains &#8212; all emerge at runtime, not at design time. The risk determination that was supposed to govern the system expires before the first audit cycle.</p><p>The cybersecurity exposure runs parallel. Agentic tool use is so vulnerable that it occupies three separate slots on the OWASP Top 10 for Agentic Applications. The critical vulnerability is not broken authorization &#8212; it is what happens when legitimate access goes wrong. Data exfiltration, privilege escalation, workflow hijacking &#8212; all within the agent&#8217;s authorized scope. The agent does not need to break the rules to create liability. It creates liability by operating within them.</p><p>The governance infrastructure to address this is forming &#8212; but it is not ready. ForHumanity has published a dedicated multi-agent certification scheme. The OECD released its first formal analysis of the agentic AI landscape in February 2026. The International AI Safety Report 2026 identifies multi-agent liability attribution as a core policymaker challenge. The CIGI/Privy Council Office of Canada&#8217;s national security scenarios workshop identified autonomous agent collusion as an emerging attack vector. The European Commission has introduced a technology code for agentic AI in the Digital Omnibus &#8212; while deferring actual governance solutions to a future strategy with no timeline.</p><p>The organizations that move now will build governance capability while the frameworks are still forming. The organizations that wait will discover &#8212; when a regulator, an auditor, or a breach forces the question &#8212; that the agents were already running. 
The governance was not.</p><p>This article maps the gap: what agents are doing, what the regulation requires, what the audit infrastructure can verify, and what needs to be built before August 2026.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2><strong>The Agents You Don&#8217;t Know About</strong></h2><p>Microsoft&#8217;s 2026 Cyber Pulse report found that nearly a third of employees have already turned to unsanctioned AI agents for work tasks &#8212; tools operating with embedded credentials, API integrations, and elevated system access outside standard provisioning workflows. These are not browser-based chatbots. They are autonomous systems plugged into enterprise infrastructure, acting on data they were never explicitly authorized to touch.</p><p>The deployment velocity behind this is staggering. The OECD&#8217;s February 2026 analysis of the agentic AI landscape documents a 920% increase in GitHub repositories using agentic frameworks &#8212; AutoGPT, BabyAGI, OpenDevin, CrewAI &#8212; between early 2023 and mid-2025. The Stack Overflow Developer Survey, covering more than 49,000 respondents across 177 countries, found that roughly half of developers are already using or planning to use AI agents in their work. The vast majority of those developers flagged security and privacy as unresolved concerns.</p><p><strong>This is not a forecast. This is the current installed base.</strong></p><p>The OECD&#8217;s companion paper on AI trajectories through 2030 quantifies the acceleration: the length of software engineering tasks that AI systems can complete autonomously is doubling approximately every seven months. 
The CIGI/Privy Council Office of Canada&#8217;s 2026 national security scenarios workshop &#8212; convened with security and intelligence officials, AI researchers, and industry representatives &#8212; identified &#8220;autonomous agent collusion&#8221; as an emerging attack vector and flagged systemic vulnerabilities from ecosystem-wide dependencies on AI systems as a national security concern.</p><p>The International AI Safety Report 2026 confirms that attributing liability when agents cause harm &#8212; particularly in multi-agent settings where identifying when and how failures occurred is structurally difficult &#8212; is now recognized as a core policymaker challenge.</p><p>These agents are not experimental. They are in production. They are accessing systems that touch employment decisions, financial assessments, personal data, and critical infrastructure. And in the overwhelming majority of cases, nobody has performed the risk determination that the EU AI Act requires before any AI system is put into service.</p><p>The regulation does not wait for harm. The obligation applies at deployment. For agents that were never inventoried, never assessed, and never governed, the exposure is already accruing &#8212; and the penalties for non-compliance with high-risk obligations reach &#8364;15 million or 3% of global annual turnover &#8212; and that is before GDPR, sector regulation, and cybersecurity liability compound on top.</p><p><strong>That is the visible cost. The structural cost is worse.</strong></p><div><hr></div><h2><strong>Why Your Governance Model Doesn&#8217;t Work for Agents</strong></h2><p><strong>Traditional AI governance assumes three things</strong>: the system does what it was designed to do, the risks it poses can be assessed before deployment, and the documentation describing its behavior stays accurate over time.</p><p>For a credit scoring model, a fraud detection engine, or an automated document classifier, those assumptions hold. 
The system receives defined inputs, applies defined logic, and produces defined outputs. It can drift &#8212; through data degradation or model decay &#8212; but the operational boundaries remain recognizable. You can describe what the system does because what it does stays within the design envelope.</p><p><strong>Agentic AI does not work this way.</strong></p><p>An agent receives a goal and determines its own path to achieving it. It selects which tools to use. It decides what data to access. It sequences its own actions based on intermediate results. The execution path is not specified at design time &#8212; it emerges at runtime. And it changes with every interaction.</p><p><strong>This is not a theoretical distinction. It is an operational one with direct financial and legal consequence.</strong></p><p>Consider a concrete scenario. An organization deploys an AI agent to automate internal research &#8212; summarizing documents, pulling data from approved sources, drafting reports. At deployment, the system&#8217;s purpose is clearly defined and its risk profile is minimal. Nobody would classify a research assistant as high-risk under the EU AI Act.</p><p>Then the agent does what agents do. A user asks it to compile information on a job candidate. The agent accesses the HR database &#8212; because it has the permissions to do so. It pulls performance reviews, compensation history, and disciplinary records. It generates a summary with an implicit recommendation. The output reaches a hiring manager who uses it to make a decision.</p><p>The agent just crossed into employment territory &#8212; one of the EU AI Act&#8217;s explicitly designated high-risk domains. Nobody changed the system&#8217;s code. Nobody updated its permissions. Nobody reclassified it. 
T<strong>he agent&#8217;s functional purpose shifted through its own operational choices, and the risk determination made at deployment no longer describes the system in production</strong>.</p><p>Under the EU AI Act, this is not a gray area. When a system&#8217;s behavior changes its effective purpose beyond what was assessed at deployment, it triggers what the regulation calls a <strong>substantial modification</strong> &#8212; a change that was not foreseen in the initial assessment and that affects the system&#8217;s compliance with its obligations or modifies its <strong>intended purpose</strong>. A substantial modification requires a new conformity assessment. Not a review. Not an update to the documentation. <strong>A full reassessment</strong> &#8212; with the time, cost, and documentation burden that entails.</p><p>For a traditional system, substantial modifications are rare events &#8212; a major update, a new deployment context, a retraining cycle. Identifiable, manageable, budgetable.</p><p>For an agent, substantial modifications are the normal operating condition. Every interaction where the agent exercises autonomous judgment about tool selection, data access, or execution strategy is an interaction where the system&#8217;s functional behavior may diverge from its documented purpose. An agent running thousands of interactions per day generates thousands of potential triggers for reassessment.</p><p><strong>The regulatory mechanism exists. 
The operational capacity to execute it does not.</strong></p><p>And this is where the security exposure and the regulatory exposure converge &#8212; a convergence that most organizations have not yet recognized, because their security teams and their compliance teams are not looking at the same system through the same lens.</p><p><em>The full analysis of that convergence &#8212; including what OWASP's Agentic Top 10 reveals about the attack surface, how privilege escalation in agentic systems maps to regulatory liability, what ForHumanity's multi-agent certification scheme addresses and where the enforcement integration remains untested, and the four capabilities your organization must have operational before August 2026 &#8212; continues below for paid subscribers.</em></p>
      <p>
          <a href="https://www.zerodaydawn.com/p/the-shadow-side-of-agentic-ai">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[When Your Agents Go Rogue]]></title><description><![CDATA[The EU AI Act wasn't built for systems that rewrite their own intended purpose]]></description><link>https://www.zerodaydawn.com/p/when-your-agents-go-rogue</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/when-your-agents-go-rogue</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 09 Feb 2026 06:00:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SCJL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SCJL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SCJL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!SCJL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!SCJL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!SCJL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SCJL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3295383,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/187276922?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SCJL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!SCJL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!SCJL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!SCJL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ae72b7b-5ce1-499f-a2e7-a217d6a3ceb1_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Your AI system follows instructions. 
Your AI agent makes its own.</p><p>That distinction is about to become the most expensive compliance gap in the EU AI Act.</p><p>The regulation requires a risk determination before any AI system goes to market. Is it high-risk? Low-risk? Banned? That answer decides everything &#8212; what documentation you need, what standards apply, what penalties you face if you get it wrong. And the entire framework assumes two things: someone in your organization made that determination before deployment, and the answer stays stable.</p><p>Agentic AI breaks both assumptions. An agent picks its own tools, chooses what data to pull, decides what steps to take &#8212; and those choices change every time it runs. The system you deployed Monday is not the system running Friday.</p><p>This piece explains why the EU AI Act&#8217;s classification architecture cannot handle systems that determine their own behavior at runtime, what that means for organizations deploying agents before August 2026, and what you need to build now &#8212; before a regulator asks a question you cannot answer.</p><div><hr></div><h2>1.4+ Million Agents. 
Zero Classification Decisions</h2><p>Moltbook became a case study in what happens when deployment outpaces governance. The platform claimed 1.4+ million agent users &#8212; a figure contested by security researchers who demonstrated that a single script could generate hundreds of thousands of accounts. But even if the real number is a fraction, nobody made a risk determination for any of them. The agents inherited permissions from their owners, accessed tools at runtime, and changed behavior based on interactions with other agents.</p><p>Under the EU AI Act, every AI system needs a risk determination <strong>before deployment</strong>. Does it fall within the high-risk categories &#8212; employment, creditworthiness, law enforcement, critical infrastructure, education, access to essential services? Does its output <strong>materially influence</strong> decisions affecting people?</p><p>Nobody made that determination for Moltbook&#8217;s agents. Nobody could have. The agents&#8217; purposes were not fixed at deployment. What they did depended on what tools they accessed, what data they encountered, and what other agents they interacted with. <strong>The intended purpose changed with every execution cycle</strong>.</p><p>This is not a Moltbook problem. This is an architectural problem.</p><p>McKinsey's CEO disclosed in January 2026 that 25,000 AI agents now sit alongside 40,000 human employees &#8212; and that AI initiatives account for 40% of the firm's total work. Not people using agents. Agents counted as staff. If those agents screen candidates, score performance, or influence staffing decisions &#8212; each one requires a risk determination under the EU AI Act <strong>before deployment</strong>. Multiply that across every company racing to deploy agentic AI at scale, and the classification gap is not theoretical. 
It is industrial.</p><p>The regulatory framework was not designed to handle it.</p><div><hr></div><h2>What an Agent Actually Does</h2><p>The gap between an AI system and an AI agent is not branding. It is deeply structural.</p><p>A traditional AI system is a function. A credit scoring model receives an application, processes it, and produces a score. The behavior is bounded. The output is traceable. Documentation can describe what the system does &#8212; because what it does stays within the boundaries drawn at deployment.</p><p>An agent is different. It receives a goal and determines its own path to achieving it. It selects which tools to use. It sequences its own actions. It adapts its approach based on intermediate results. The execution path is not specified at design time. It emerges at runtime.</p><p>Palantir published the most detailed public architecture for production-grade agentic AI to date in January 2026. What their documentation reveals is that even the vendor building the governance tooling describes the problem in stark terms: the possible paths an agent can take through its decision space are &#8220;innumerable&#8221; and &#8220;vary dramatically in functional depth.&#8221; The agent operates with permissions inherited from whoever configured it, whatever service scope it was given, and whichever user it is acting on behalf of at any given moment &#8212; all layered, all dynamic, all context-dependent.</p><p>This is not a chatbot with extra steps. It is a system actor that determines its own behavior every time it runs. Your documentation describes the system as designed. The agent operates as it decides.</p><div><hr></div><h2>The Classification Assumption That Breaks</h2><p>Every obligation in the EU AI Act traces back to a single upstream decision: <strong>is this system high-risk</strong>?</p><p>That decision depends on <strong>intended purpose</strong> &#8212; what the system is designed to do. The provider declares a purpose. 
The declaration drives the risk classification. The classification determines everything downstream: risk management, data governance, documentation, logging, transparency, human oversight, accuracy requirements, quality management, post-market monitoring.</p><p><strong>Change the purpose, and every one of those obligations needs reassessment.</strong></p><p>Traditional AI systems have relatively stable purposes. They drift through data shifts or model degradation, but their operational boundaries remain recognizable. You can document what they do because what they do stays within the design envelope.</p><p><strong>Agents do not work this way. Three things break at once.</strong></p><p><strong>The purpose is not fixed</strong>. When an agent autonomously accesses a credit database and generates a recommendation, it has functionally entered the creditworthiness domain &#8212; regardless of what the provider declared at classification time. A customer service agent that pulls HR records and suggests a personnel action has crossed into employment territory. The classification filed at deployment no longer describes the system in production.</p><p><strong>The risk profile is not stable</strong>. An agent classified as minimal risk at deployment can escalate into high-risk territory through its own operational choices &#8212; choices nobody anticipated and nobody approved.</p><p><strong>The documentation is instantly obsolete</strong>. The EU AI Act requires a description of the system&#8217;s intended purpose, its capabilities, its limitations, and its expected performance. For an agent, this documentation describes the system <em>as designed</em>. Not the system <em>as it operates</em>. The gap widens with every interaction.</p><div><hr></div><h2>The Regulation Already Recognizes the Problem. 
The Framework Cannot Detect It.</h2><p>Here is where this gets structurally uncomfortable.</p><p>The EU AI Act&#8217;s own Code of Practice already identifies the capabilities that define agentic behavior as sources of systemic risk. The list reads like a technical specification for an autonomous agent: the capability to operate autonomously, to adaptively learn new tasks, to reason about itself and its environment, to evade human oversight, to self-replicate or modify its own implementation, to interact with other AI systems, and to use tools including hardware and software external to the model.</p><p>The regulation recognizes these capabilities. It flags them as risk-relevant.</p><p>These model-level capabilities become system-level risks at deployment. When a system built on a model with these propensities operates autonomously in production, the behavioral drift they enable triggers the substantial modification mechanism under Article 25.</p><p>But the classification architecture was designed to detect them <em>at deployment</em> &#8212; not when they emerge at runtime. An agent classified as minimal risk because its declared purpose was customer support does not get automatically reclassified when it accesses an HR database and generates a recommendation about an employee. The classification happened upstream. The behavior changed downstream. Nobody updated the assessment.</p><p>The Code of Practice goes further, flagging behavioral tendencies including what it calls &#8220;lawlessness&#8221; &#8212; acting without reasonable regard to legal duties &#8212; and &#8220;goal-pursuing&#8221; behavior that resists modification. These are not theoretical risks. They describe what agentic architectures produce in production when agents optimize for task completion without regard for the regulatory boundaries their providers assumed would hold.</p><p>The framework identifies the risk factors. 
The classification mechanism cannot detect when those factors activate outside the documented operational envelope.</p><div><hr></div><h2>The Audit That Cannot Keep Up</h2>
      <p>
          <a href="https://www.zerodaydawn.com/p/when-your-agents-go-rogue">
              Read more
          </a>
      </p>
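The runtime drift this piece describes can be made concrete with a minimal monitoring sketch. Everything below is an illustrative assumption: the tool names, the Annex III domain mapping, and the DriftMonitor class are hypothetical, not vendor tooling and not anything the regulation prescribes.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical mapping from tool names to EU AI Act Annex III high-risk
# domains. Real mappings would come from your own system inventory.
ANNEX_III_DOMAINS = {
    "credit_bureau_api": "creditworthiness",
    "hr_records": "employment",
    "case_management": "law_enforcement",
}


@dataclass
class DriftMonitor:
    """Flags when an agent's runtime tool calls cross into a high-risk
    domain that its deployment-time classification never covered."""

    declared_domains: set                       # domains in the filed classification
    observed_domains: set = field(default_factory=set)

    def record_tool_call(self, tool_name: str) -> Optional[str]:
        domain = ANNEX_III_DOMAINS.get(tool_name)
        if domain is None:
            return None                          # tool not mapped to a high-risk domain
        self.observed_domains.add(domain)
        if domain not in self.declared_domains:
            # Classification drift: behavior outside the documented envelope.
            return f"reassess: agent entered undeclared domain '{domain}'"
        return None


# A customer-support agent classified with no high-risk domains...
monitor = DriftMonitor(declared_domains=set())
monitor.record_tool_call("web_search")           # unmapped tool: no flag
alert = monitor.record_tool_call("hr_records")   # crosses into employment
```

A guardrail like this does not reclassify anything; it only surfaces the moment a reassessment is due, which is the detection step the classification architecture itself lacks.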
   ]]></content:encoded></item><item><title><![CDATA[Raising the Standard on AI Governance ]]></title><description><![CDATA[Which one is paving the road to EU AI Act conformity &#8212; and why ISO 42001 wasn't built for that]]></description><link>https://www.zerodaydawn.com/p/raising-the-standard-on-ai-governance</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/raising-the-standard-on-ai-governance</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 02 Feb 2026 06:01:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QBKx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QBKx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QBKx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!QBKx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!QBKx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!QBKx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QBKx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3300440,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/186400363?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QBKx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!QBKx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!QBKx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!QBKx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf134b01-4c4c-493d-98b7-8acac7423629_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>There are two QMS standards for AI governance.
One is designed specifically for EU AI Act conformity. One is not.</p><p>The one designed for conformity is still in draft. prEN 18286 completed its public enquiry period on January 22, 2026 and is working through the CEN-CENELEC process toward harmonization. Unless you purchased access through a national standards body, you have not seen what it contains.</p><p>The one not designed for conformity is everywhere. ISO 42001 certification programs proliferate. Consultants sell readiness packages. Organizations pursue badges. The market has invested heavily in this standard.</p><p>The investment is misplaced. Here&#8217;s why.</p><p>prEN 18286 contains Annex ZA &#8212; a clause-by-clause mapping to Article 17 that will carry presumption of conformity when harmonized. ISO 42001 has no such mapping. It was never designed to. The JRC already found it structurally incompatible with high-risk AI requirements.</p><p>But here is what neither standard addresses: the upstream risk classification decision that determines whether Article 17 applies to your systems at all.</p><p>This piece shows what prEN 18286 contains, why ISO 42001 falls short, and why the choice between them is premature if you haven&#8217;t classified your AI systems first. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Zero-Day Dawn is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>The Access Asymmetry</h2><p>The EU AI Act requires high-risk AI providers to implement a quality management system under Article 17. This QMS must cover risk management, data governance, technical documentation, record-keeping, and post-market monitoring <strong>across the entire AI lifecycle</strong>.</p><p>Two standards claim to address this requirement.</p><p><strong>ISO/IEC 42001:2023</strong> is available globally. You can purchase it from any national standards body. Certification programs exist. Training courses exist. A cottage industry of consultants will help you implement it. The market has invested heavily in this standard.</p><p><strong>prEN 18286:2025</strong> is a European draft standard titled &#8220;Artificial intelligence &#8212; Quality management system for EU AI Act regulatory purposes.&#8221; It was released for CEN Enquiry in October 2025. The public comment period closed January 22, 2026. Unless you purchased access through a national standards body, you have never seen it.</p><p>Here is the asymmetry that will cost organizations money: <strong>the visible standard lacks what the invisible standard contains</strong>.</p><p>ISO 42001 provides a management system framework. It certifies that governance processes exist. It does not map to Article 17 requirements. It does not address EU-specific obligations. 
It provides no presumption of conformity.</p><p><strong>prEN 18286</strong> was commissioned by the European Commission under standardization request M/613 C(2023) 3215 specifically to provide a voluntary means of conforming to Regulation (EU) 2024/1689. When finalized and cited in the Official Journal, compliance with its normative clauses will confer <strong>presumption of conformity</strong> with the corresponding essential requirements.</p><p>One standard was built for the regulation. One was built for the global market. The market seeking EU AI Act compliance is buying the wrong one.</p><div><hr></div><h2>What the JRC Already Found</h2><p>The Joint Research Centre &#8212; the Commission&#8217;s science and knowledge service &#8212; published <a href="https://publications.jrc.ec.europa.eu/repository/handle/JRC139430">gap analysis JRC 139430</a> examining ISO 42001 against EU AI Act requirements.</p><p>The finding was unambiguous: ISO 42001 lacks the specific safety-by-design mandates required for high-risk AI systems under Annex III.</p><p>This is not a minor gap. It is structural incompatibility. ISO 42001 was designed for organizational governance across jurisdictions. It addresses management system requirements. It does not address the product safety requirements embedded in the EU AI Act.</p><p><strong>The AI Act is product safety law.</strong> It treats AI systems as products subject to conformity assessment. The QMS requirements under Article 17 are not generic governance requirements &#8212; they are specific obligations tied to the essential requirements in Chapter III, Section 2.</p><p>ISO 42001 does not address these obligations because it was not designed to. It predates the final AI Act text. It serves a different purpose. Certification bodies audit it as a management system standard because that is what it is.</p><p>prEN 18286 exists because the Commission recognized this gap and requested a European standard that would actually address Article 17. 
The standard is being developed by <a href="https://jtc21.eu/">CEN-CENELEC JTC 21</a> &#8212; the Technical Committee on Artificial Intelligence &#8212; under explicit mandate to provide presumption of conformity.</p><p>The JRC finding is not a criticism of ISO 42001. The standard does what it was designed to do. The problem is that what it was designed to do is not what Article 17 requires.</p><h2>The Annex ZA Mapping</h2>
      <p>
          <a href="https://www.zerodaydawn.com/p/raising-the-standard-on-ai-governance">
              Read more
          </a>
      </p>
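The upstream classification decision this piece insists on can be captured as a minimal record. This is a sketch under stated assumptions: the field names, the example system, and the reasoning text are hypothetical illustrations, not a schema prescribed by Article 17, prEN 18286, or ISO 42001.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ClassificationRecord:
    """Minimal shape of a pre-deployment risk determination.
    All field names are illustrative, not mandated by the AI Act."""

    system_name: str
    intended_purpose: str
    high_risk: bool
    legal_basis: str     # e.g. an Annex III category, or an Article 6(3) exemption
    reasoning: str       # the analysis an authority would ask to see
    decided_by: str      # a named individual or body with documented authority
    decided_on: date


# Hypothetical example of a documented non-high-risk determination.
record = ClassificationRecord(
    system_name="invoice-triage-assistant",
    intended_purpose="Route supplier invoices to the correct approver",
    high_risk=False,
    legal_basis="Article 6(3): narrow procedural task, no material influence",
    reasoning=(
        "Output does not materially influence decisions affecting persons; "
        "final approval remains with a human accounts-payable clerk."
    ),
    decided_by="AI Governance Board",
    decided_on=date(2026, 3, 1),
)
```

The point of the record is the accountability chain: a named decision-maker, a legal basis, and reasoning that survives scrutiny, produced before the QMS standard question even arises.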
   ]]></content:encoded></item><item><title><![CDATA[Your Guidance Isn't Coming]]></title><description><![CDATA[Flying blind into August 2026]]></description><link>https://www.zerodaydawn.com/p/your-guidance-isnt-coming</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/your-guidance-isnt-coming</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 26 Jan 2026 06:02:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!36Ka!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!36Ka!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!36Ka!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!36Ka!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!36Ka!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!36Ka!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!36Ka!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3294457,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/185711782?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!36Ka!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!36Ka!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!36Ka!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!36Ka!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb61ca5a9-23bb-483d-9bb4-d36df7fb03e7_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>The European Commission is about to miss its own deadline.</p><p>Article 6 classification guidelines
&#8212; the document organizations have been waiting for to understand which AI systems are high-risk &#8212; were legally mandated by February 2, 2026. They will not arrive in time. A draft for public consultation will come later this month. Final adoption is now expected in March or April.</p><p>The delay comes as the Commission shifts institutional focus toward the Digital Omnibus &#8212; the proposal to amend the AI Act alongside other digital regulations.</p><p>Translation: the Commission is rewriting the regulation before it has finished explaining the original.</p><p>Meanwhile, August 2026 enforcement has not moved. The documentation obligation has not paused. The penalty for misrepresenting classification status &#8212; up to &#8364;7.5 million or 1% of global turnover under Article 99 &#8212; has not been suspended.</p><p>Organizations that built compliance timelines around February guidance now have two to three fewer months of implementation runway &#8212; and a regulation that may look different by the time they finish. The help they were waiting for is not coming in time.</p><p>This piece explains why waiting was the trap, what the Digital Omnibus actually changes, and why your classification documentation must be forensically robust today &#8212; guidelines or not.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Zero-Day Dawn! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wanted to believe:</p><p><em>Wait for guidance. The Commission will clarify what counts as high-risk. Then start compliance work.</em></p><p>This was always a fragile strategy. Guidance clarifies interpretation. It does not replace the legal obligation to classify before placing systems on the market. The regulation has been in force since August 2024. The classification requirement crystallizes in August 2026 regardless of what Brussels publishes.</p><p>But organizations clung to the timeline anyway. February 2 was the anchor. Guidance arrives, ambiguity resolves, compliance begins.</p><p><strong>That anchor just slipped.</strong></p><p>The guidance is delayed. The deadline is not. And the regulation itself may be changing underneath you while you wait.</p><div><hr></div><h2>What Actually Happened</h2><p><a href="https://www.linkedin.com/posts/violetaklein-eu-ai-act-compliance_breaking-the-european-commission-will-activity-7419777208661192704-fv0d?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAFwBeU8Bf1ZN3Gq33hvOhlCh4sTULxLWhQk">This week, reporting confirmed</a> what insiders had suspected: the Commission will not meet its Article 6(5) obligation to publish classification guidelines by February 2.</p><p>The delay coincides with the Commission&#8217;s push to advance the <a href="https://www.zerodaydawn.com/p/the-digital-omnibus-trap-eu-ai-act">Digital Omnibus</a> &#8212; the proposal to &#8220;simplify&#8221; the AI Act alongside other digital regulations. 
Institutional resources are being spent on amendments while guidance remains unfinished.</p><p>This is not a minor administrative delay. It reflects a political choice about where to spend institutional capacity.</p><p>The Commission is not behind schedule. It is prioritizing revision over implementation.</p><p>For organizations tracking EU AI Act compliance, the implications are structural. The body responsible for clarifying the regulation is simultaneously rewriting it. The guidance you receive in March or April may describe requirements that are already under amendment. You will be implementing a moving target.</p><div><hr></div><h2>The Digital Omnibus Problem</h2><p>The Digital Omnibus &#8212; formally the proposal to simplify AI Act implementation &#8212; changes more than <a href="https://www.zerodaydawn.com/p/the-three-timelines-of-the-eu-ai">timelines</a>. It changes the regulatory architecture.</p><p><strong>Three provisions matter for classification.</strong></p><p><strong>First</strong>: high-risk system obligations that were due August 2026 would be postponed until December 2027. The Commission cites delays in establishing standards and support tools. Organizations that planned compliance for August now face a sixteen-month extension &#8212; and the planning uncertainty that comes with building toward a deadline that <strong>may move again</strong>.</p><p><strong>Second</strong>: conformity assessment procedures would be loosened. The original text requires third-party assessment for certain high-risk categories under Annex III. The Omnibus proposes expanding self-assessment pathways &#8212; reducing external verification before market placement. The safeguard intended to catch non-compliance before deployment weakens.</p><p><strong>Third</strong>: the Omnibus acknowledges agentic AI as a distinct category requiring special treatment &#8212; but offers no framework for how classification applies to systems whose behavior emerges at runtime. 
The problem is named. The solution is deferred.</p><p>This is not regulatory fine-tuning. It is a structural weakening of enforcement before enforcement begins.</p><div><hr></div><h2>What &#8220;Simplification&#8221; Actually Means</h2><p>The word simplification does useful political work. It implies burden reduction. Cutting red tape. Helping businesses compete.</p><p><strong>Look at what it simplifies away.</strong></p><p>External verification before market placement shrinks. Self-assessment pathways expand. The third-party checks that would have caught misclassification or non-compliance before deployment become optional for more system categories.</p><p>Extended timelines become permanent uncertainty. A deadline that moves once can move again. Organizations cannot plan capital expenditure, staffing, or technical implementation against a target that shifts with political winds.</p><p>Guidance drafted during amendment becomes provisional. Any classification methodology you build from April guidance may require revision when the Omnibus passes. You are not building compliance. You are building a messy draft.</p><p>The burden being reduced is the burden of regulatory certainty. The beneficiaries are organizations that prefer ambiguity to accountability.</p><p>Consumer groups have noticed. Agust&#237;n Reyna, Director General of BEUC &#8212; the European Consumer Organisation &#8212; called the Omnibus &#8220;<em>a watering down of the EU&#8217;s privacy rules and a substantial delay and undermining of AI rules that will benefit mainly non-European Big Tech companies.</em>&#8221; The simplification frame is not neutral. It is a policy choice disguised as administrative efficiency.</p><div><hr></div><h2>The Standards Gap</h2><p>Guidelines are not the only thing missing.</p><p>Harmonised standards &#8212; the technical specifications that would give providers a presumption of conformity with AI Act requirements &#8212; do not exist yet. 
The Commission requested them from CEN/CENELEC in May 2023, with a deadline of April 30, 2025. That deadline was missed. Full adoption is now expected in 2027.</p><p>This matters because harmonised standards are the shortcut. Conformity with a published harmonised standard creates a legal presumption that you meet the corresponding AI Act requirements. Without them, providers must demonstrate compliance directly &#8212; through common specifications if the Commission publishes them, or through the generic assessment checklist in Annex VII as a last resort.</p><p>Neither pathway offers the clarity that harmonised standards would provide. Both require organizations to build their own compliance architecture from first principles.</p><p>The AI Act&#8217;s designers assumed harmonised standards would be ready before high-risk obligations applied. They are not. Organizations face August 2026 without the presumption-of-conformity pathway the regulation was built around.</p><div><hr></div><h2>The Documentation Gate</h2><p>Here is what has not changed &#8212; and what the delay makes more dangerous.</p><p>The August 2026 deadline for classification documentation remains in force. The Digital Omnibus proposes extending high-risk compliance obligations to December 2027 &#8212; <strong>but classification and registration requirements for August 2026 are not part of that extension.</strong></p><p>Here is the trap: even if you determine a system is <em>not</em> high-risk under Article 6(3), you are legally required to document that assessment <strong>before</strong> placing the system on the market. Claiming exemption without documented reasoning is not a defense. It is exposure.</p><p>Article 99 imposes penalties up to &#8364;7.5 million or 1% of global annual turnover for providing misleading information about regulatory status &#8212; including undocumented classification claims. The absence of guidance does not suspend this obligation. 
<strong>It makes your documentation the only evidence that matters.</strong></p><p>When Market Surveillance Authorities arrive, they will ask the same questions regardless of what guidance exists.</p><p>Show us the classification decision. Which systems are high-risk? Which claim exemption? What reasoning supports each determination?</p><p><a href="https://www.zerodaydawn.com/p/who-owns-your-ai-risk">Show us who made that determination. </a>A named individual or body with documented authority. Not a consultant&#8217;s recommendation. Not an assumption. A decision with accountability attached.</p><p>Show us the documentation. The analysis that connects your system&#8217;s function to its regulatory status. The reasoning chain that survives scrutiny.</p><p>Delayed guidance does not delay these questions. It removes the safety net you thought you would have when answering them. You cannot point to Commission interpretation as your defense. <strong>You must build defensible reasoning from the regulation itself &#8212; today, not when the guidelines arrive.</strong></p><p>The organizations that treated guidance as a prerequisite for action now have less time and less clarity. The organizations that built classification capability from the regulatory text have a methodology that does not depend on Brussels publishing anything.</p><div><hr></div><h2>The Political Mirage</h2><p>France&#8217;s role deserves separate attention &#8212; because it explains why political relief is a mirage.</p><p>France simultaneously hosts AI investment summits, announces &#8364;109 billion in AI funding, and backs provisions that weaken the regulatory framework governing that investment. At the Berlin Digital Sovereignty Summit in November 2025, French Digital Minister Anne Le H&#233;nanff publicly endorsed the postponement of high-risk obligations.</p><p>Here is why this matters for your compliance planning: the political pressure to delay creates a mirage of relief. 
The December 2027 extension looks like breathing room. It is not.</p><p>The extension &#8212; if it passes &#8212; applies to high-risk system obligations under Annex III. It does not suspend the classification documentation requirement. It does not remove the penalty exposure for misrepresenting regulatory status. It does not pause Market Surveillance Authority powers to request your documentation at any time.</p><p><a href="https://www.linkedin.com/posts/violetaklein-eu-ai-act-compliance_france-is-trying-to-split-the-ai-omnibus-activity-7420422955115098112-SIqA?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAFwBeU8Bf1ZN3Gq33hvOhlCh4sTULxLWhQk">France&#8217;s recent push to split the file</a> effectively creates two timelines: a political timeline that keeps moving, and a legal timeline that does not. Organizations betting on political delays are betting against legal reality.</p><p>The deadlines may move. The documentation obligation is live today.</p><div><hr></div><h2>What To Do Now</h2><p>If your organization was waiting for February 2 guidance to begin classification work, adjust immediately.</p><p>Accept that guidance is interpretation, not instruction. </p><blockquote><p>The <strong>Commission guidelines were never going to tell you which of your specific systems are high-risk.</strong> They were going to clarify how to apply Article 6 and Annex III to your own analysis. That analysis was always your responsibility. It still is.</p></blockquote><p>Build classification capability from the regulation. Article 6 is published. Annex III is published. The criteria for high-risk determination exist in binding legal text. Guidance would have provided examples and clarification &#8212; helpful, but not necessary to begin.</p><p>Assume the timeline compresses, not extends. The Omnibus proposes pushing high-risk obligations to December 2027. <strong>This is a proposal, not law.</strong> It requires European Parliament and Council approval. 
It may not pass. It may pass with modifications. Planning for a deadline that does not yet legally exist is not compliance strategy. It is false hope.</p><p>Track what actually passes. The Digital Omnibus is under negotiation. Public consultation runs through March 11, 2026. The provisions that emerge may differ from the current draft. Build flexibility into your compliance architecture &#8212; but <strong>do not build around assumptions about future amendments</strong>.</p><div><hr></div><h2>Conclusion</h2><p><strong>Your guidance isn&#8217;t coming. Not in time to matter.</strong></p><p>The February 2 deadline will pass without delivery. A draft arrives this month for consultation. Final adoption may or may not come in March or April &#8212; guidance describing requirements that may already be under amendment.</p><p>The organizations that built compliance timelines around waiting have lost months. The organizations that bet on political relief are betting against legal reality. The documentation obligation exists today. The penalty exposure exists today. The Market Surveillance Authority powers exist today.</p><p><strong>Different timeline. Same cliff.</strong></p><p>The Commission will miss its deadline. Yours didn&#8217;t move. 
The only question is whether your classification documentation exists &#8212; or whether you are flying blind into enforcement hoping the map arrives before you hit the ground.</p><p>Spoiler: It won&#8217;t.</p><div><hr></div><p><em>If your organization needs a structured methodology for Article 6 classification &#8212; the upstream logic that makes compliance defensible regardless of what guidance eventually arrives &#8212; the framework I use is documented in <a href="https://quantumcoherence.ai/b/the-article-6-classification-handbook">The Article 6 Classification Handbook</a>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Zero-Day Dawn! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/p/your-guidance-isnt-coming?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/p/your-guidance-isnt-coming?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div><hr></div><p>This analysis benefited from reporting by <a href="https://www.linkedin.com/in/luca-bertuzzi-186729130/">Luca 
Bertuzzi</a>, Chief AI correspondent at MLex, and ongoing dialogue with <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Adam Leon Smith DEng FBCS&quot;,&quot;id&quot;:351251468,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eb6b884-e7e9-4e64-b672-a12db045c312_679x679.jpeg&quot;,&quot;uuid&quot;:&quot;a42bb562-fcd6-426b-b8c7-76f378557216&quot;}" data-component-name="MentionToDOM">Adam Leon Smith DEng FBCS</span>, project leader for prEN 18286 at CEN-CENELEC JTC 21.</p><div><hr></div><h6><strong>Regulatory Disclaimer</strong></h6><h6>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) as of January 2026. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.</h6><h6>Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</h6><h6>Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h6>]]></content:encoded></item><item><title><![CDATA[The Long Arm of the EU AI Act]]></title><description><![CDATA[Why jurisdiction won't shield non-EU providers from enforcement]]></description><link>https://www.zerodaydawn.com/p/the-long-arm-of-the-eu-ai-act</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/the-long-arm-of-the-eu-ai-act</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 19 Jan 2026 06:00:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HGpp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!HGpp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HGpp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HGpp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3299746,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/184765549?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HGpp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HGpp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b25d1c-bea9-4635-b1a7-c75f35c64190_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Executive Summary</h2><p>Your headquarters location is not a compliance strategy. It is a comfortable assumption about to collide with regulatory reality.</p><p>Organizations outside the EU believe they are watching from a safe distance. The AI Act is a European regulation. They are not European companies. The math seems simple.</p><p>It is not.</p><p>The EU AI Act does not care where your servers are located. It does not care where your company is incorporated. It cares about one thing: where your AI system&#8217;s output lands.</p><p>If that output affects people in the EU&#8212;decisions about their creditworthiness, their job applications, their insurance eligibility&#8212;you are in scope. 
Period.</p><p>This piece explains how the AI Act reaches companies that never planned to be regulated, why the Digital Omnibus will not change this, and what non-EU organizations must understand before August 2026.</p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe:</p><p><em>We are not in the EU. This does not apply to us.</em></p><p>US companies. Asian headquarters. Middle Eastern operations. The EU AI Act is someone else&#8217;s problem.</p><p>This comfortable lie persists because the alternative requires reading the regulation.</p><p>The alternative reveals that the AI Act was drafted with extraterritorial reach built in from the start. The drafters studied how GDPR caught non-EU companies off guard. They designed the AI Act to eliminate the ambiguity.</p><p><strong>Geography is not a shield. Output is the trigger.</strong></p><div><hr></div><h2>How The Regulation Catches You</h2><p><strong>Article 2 defines four ways non-EU organizations fall into scope.</strong></p><ol><li><p><strong>Your output lands in the EU.</strong> You are established outside the Union, but your AI system produces results used inside it. 
A US company&#8217;s hiring algorithm screens candidates for a German subsidiary. A Singapore fintech&#8217;s credit model scores EU applicants. The system runs offshore. The output lands in the Union. That is enough.</p></li><li><p><strong>Your EU customers use your system.</strong> You sell AI tools to EU-based deployers. They use your system to make decisions affecting EU residents. You are in scope as a provider regardless of where you incorporated.</p></li><li><p><strong>Your EU subsidiary deploys AI.</strong> An EU-established entity cannot escape by routing AI through offshore infrastructure. Where the deployer is established determines obligations&#8212;not where the servers sit.</p></li><li><p><strong>You modified a system&#8217;s purpose.</strong> You licensed a foundation model not classified as high-risk. Then you deployed it to screen job applicants, assess creditworthiness, or evaluate insurance claims. You may have become the provider under EU law.</p></li></ol><p>The regulation anticipated every offshore workaround. It closed the loopholes before you reached for them.</p><div><hr></div><h2>The Mandatory EU Footprint</h2><p>If you are a third-country provider of high-risk AI systems, you must appoint an authorized representative inside the EU before placing systems on the market.</p><p><strong>This is not optional. It is a legal requirement under Article 22.</strong></p><p>The authorized representative is not an administrative contact. They are legally responsible for your compliance. They must cooperate with Market Surveillance Authorities. They must respond to regulatory inquiries on your behalf.</p><p>And here is the provision most third-country providers have not absorbed: the authorized representative must terminate the mandate if they have reason to consider you are acting contrary to the regulation.</p><p>They must then immediately inform the relevant Market Surveillance Authority&#8212;and explain why.</p><p>Your EU representative is not just your compliance interface. They are a regulatory tripwire. If you violate the Act, they are obligated to report you.</p><p><strong>And if you skip the appointment entirely?</strong> Article 83 treats this as formal non-compliance&#8212;independent of whether the system presents any actual risk. The Market Surveillance Authority can order withdrawal or recall based solely on the missing appointment. The system could be technically sound, operationally compliant, and demonstrably safe. None of that matters. <strong>No EU footprint means no legal market access.</strong></p><p>Choosing an authorized representative is not a procurement decision. 
It is a compliance architecture decision. The representative must understand your systems, have access to your documentation, and be willing to carry the liability your deployment creates.</p><div><hr></div><h2>What Regulators Will Actually Ask</h2><p>When Market Surveillance Authorities investigate a non-EU provider, they will ask operational questions.</p><p><strong>Show us the output.</strong> Where does your AI system produce results affecting EU residents? What is your scope of EU exposure?</p><p><strong>Show us the classification analysis.</strong> For each system whose output reaches the EU: what is its regulatory status? High-risk? Exempt? What reasoning supports that determination?</p><p><strong><a href="https://www.zerodaydawn.com/p/who-owns-your-ai-risk">Show us who owns that determination.</a></strong> Who in your organization has authority to make binding classification decisions? Where is that authority documented?</p><p><strong>Show us your authorized representative.</strong> Do they have access to your technical documentation? Can they answer questions on your behalf?</p><p>If you cannot answer these questions, your non-EU status provides no protection. You are in scope with no compliance infrastructure.</p><div><hr></div><h2>What To Do Now</h2><p>If your organization operates AI systems whose output reaches the EU, start with three questions.</p><ol><li><p><strong>Where does output land?</strong> Map every AI system that produces decisions affecting EU residents. This is your scope perimeter. Everything inside is potentially subject to the AI Act regardless of where you are located.</p></li><li><p><strong>What did you modify?</strong> For every third-party AI system you customized, assess whether your intended purpose differs from the original provider&#8217;s classification. 
If you applied a general-purpose tool to a high-risk use case, you may have inherited provider obligations.</p></li><li><p><strong>Who is your EU footprint?</strong> If any system in your portfolio is high-risk, you need an authorized representative inside the EU before placing it on the market. <strong>This is a pre-market requirement.</strong></p></li></ol><p>These questions do not require legal counsel to begin answering. They require operational honesty about where your AI systems actually affect people.</p><p>If your organization needs a structured methodology for working through classification logic&#8212;the upstream reasoning that determines whether you are a provider, what your systems&#8217; risk status is, and what obligations follow&#8212;the framework I use is documented in <a href="https://quantumcoherence.ai/b/the-article-6-classification-handbook">The Article 6 Classification Handbook</a>.</p><div><hr></div><h2>Conclusion</h2><p>Your jurisdiction will not shield you from enforcement.</p><p>The EU AI Act applies based on where output lands, not where companies are incorporated. The offshore processing loophole was closed before you thought to use it. The provider trap pulls in companies that modified intended purpose without realizing they inherited obligations.</p><p>Third-country providers face an additional structural exposure: the mandatory authorized representative who must report violations to regulators. Your EU footprint is not a compliance checkbox. It is an accountability mechanism you cannot avoid.</p><p>The Digital Omnibus will not change this. That proposal focuses on SME simplification and sandbox expansion&#8212;not territorial scope. The Article 2 triggers remain. <strong>The extraterritorial reach remains.</strong></p><p>Non-EU companies that assumed they were watching from a safe distance are about to discover they were always in scope.</p><p>The long arm reaches further than you thought. 
August 2026 is when you find out how far.</p><div><hr></div><h5><strong>Regulatory Disclaimer</strong></h5><h5>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) as of January 2026. 
Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.</h5><h5>Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</h5><h5>Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h5>]]></content:encoded></item><item><title><![CDATA[Your ISO 42001 Badge Won't Save You]]></title><description><![CDATA[The difference between market assurance and regulatory survival]]></description><link>https://www.zerodaydawn.com/p/your-iso-42001-badge-wont-save-you</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/your-iso-42001-badge-wont-save-you</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 12 Jan 2026 06:02:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HLHD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HLHD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HLHD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 424w, 
https://substackcdn.com/image/fetch/$s_!HLHD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HLHD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HLHD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HLHD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3294709,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/183887643?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!HLHD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!HLHD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!HLHD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!HLHD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89d075ca-a94a-4479-a5b5-349305a8cd31_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Executive Summary</h2><p>Your ISO 42001 certificate is a process badge. Not a regulatory shield.</p><p>Organizations are treating certification as AI Act compliance. Boards are reassured. Procurement teams check boxes. The certificate goes on the website. Everyone feels protected.</p><p>They shouldn&#8217;t.</p><p>ISO 42001 certifies that a management system exists. It does not certify that your AI systems are correctly classified. It does not validate your Article 6 reasoning. It does not create the technical documentation regulators will actually review.</p><p>Certification auditors verify your process exists.</p><p>Market Surveillance Authorities verify your classification decisions are defensible.</p><p>Different auditors. Different questions. Different consequences.</p><p>The reflexive defense - &#8220;no one is suggesting certification equals compliance&#8221; - ignores how the market actually behaves. Organizations pursue certification precisely because it signals compliance readiness. The distinction defenders draw in theory collapses in practice.</p><p>This piece explains why the badge that impresses buyers will not shield you from regulators, what enforcement actually looks like, and what the market refuses to admit before August 2026.</p><div><hr></div><h2>The Comfortable Lie</h2><p>Here is what the market wants to believe:</p><p><strong>Get certified. Check the box. Move on.</strong></p><p>ISO 42001 certification demonstrates governance maturity. Procurement accepts it. Customers stop asking questions. The compliance problem is solved - or at least deferred until someone else&#8217;s budget cycle.</p><p>This is the comfortable lie. It persists because the alternative is harder.</p><p>The alternative requires answering questions that certification bodies never ask. Which of your AI systems are high-risk under Article 6? Who made that determination? On what basis? Where is the reasoning documented? <strong>What happens when the system <em>changes</em>?</strong></p><p>These are not management system questions. They are product safety questions. And the EU AI Act is product safety law.</p><p>Organizations choosing certification as their primary compliance signal are not confused. They are selecting the path of least resistance - a governance charade that looks like progress while the actual regulatory exposure remains unaddressed.</p><p>The comfortable lie will hold until August 2026. Then Market Surveillance Authorities will ask questions certification bodies never did. And the organizations that confused process badges for regulatory shields will discover what compliance actually costs when you have to build it twice.</p><div><hr></div><h2>What ISO 42001 Actually Certifies</h2><p>ISO/IEC 42001 is a management system standard. 
It certifies that an organization has established an AI Management System - policies, processes, roles, and responsibilities for governing AI.</p><p>The standard requires organizations to identify external obligations under Clause 4.1. This includes legal requirements. A competent auditor will verify that your organization has acknowledged the EU AI Act as an applicable obligation.</p><p><strong>That is where the standard&#8217;s relationship to the AI Act ends.</strong></p><p>ISO 42001 does not provide classification methodology. It does not tell organizations how to determine whether a system falls under Annex III categories. It does not address the profiling override that collapses exemption claims. It does not create the technical documentation regulators will review.</p><p>The certificate confirms that a process for identifying legal obligations exists. It does not validate whether the analysis required by those obligations was ever performed.</p><p>An organization can be fully certified under ISO 42001 while having zero documented reasoning for its Article 6 classification decisions. The certification body will not catch this. It is outside their scope.</p><p><strong>This is not a technicality. It is the gap between what certification promises and what regulation requires.</strong></p><div><hr></div><h2>The Certification Body Blind Spot</h2><p>Certification bodies audit what the standard requires. ISO 42001 requires a management system. It requires identifying external obligations. It requires controls and processes.</p><p>It does not require auditors to validate the substance of regulatory determinations.</p><p>When an organization declares &#8220;we have no high-risk AI systems,&#8221; the certification body accepts that declaration. Auditor days are calculated based on scope. Fewer high-risk systems means fewer auditor days. Lower costs. Faster certification.</p><p>The economics are simple: challenging classification adds time and expense. 
Accepting declarations keeps engagements profitable.</p><p>No one asks: on what basis did you reach that determination? Who performed the analysis? Where is the documentation? Can this reasoning survive regulatory scrutiny?</p><p>The classification assumption passes through the certification process unchallenged. The organization receives a certificate. The certificate implies governance maturity. The gap underneath remains invisible - until regulators ask questions certification bodies are not equipped to answer.</p><p>This is not speculation. It is industry reality. <strong>The certification process assumes the hard work is done upstream. No one checks if it actually was.</strong></p><div><hr></div><h2>What Regulators Will Actually Ask</h2><p>Market Surveillance Authorities do not audit management systems. They enforce product safety law.</p><p>When they arrive, they will not ask to see your ISO 42001 certificate. They will not care about your governance maturity score. They will ask questions the certification process never touched.</p><p><strong>Show us the classification decision.</strong></p><p>Not the management system. Not the policy framework. The determination: this system is high-risk under Article 6, or this system is not high-risk, based on this intended purpose, mapped to this Annex III category, with these exemptions considered and these reasons documented.</p><p><strong>Show us who had authority to make that determination.</strong></p><p>A named individual. A cross-functional body. Someone with documented mandate to make binding classification decisions. Not opinions. Not recommendations. Decisions that carry accountability.</p><p><strong>Show us how you reached that conclusion.</strong></p><p>The reasoning chain. Intended purpose analysis. Annex III mapping. Exemption evaluation. Profiling override assessment. 
The analytical trail that connects your system&#8217;s function to its regulatory status.</p><p>Here is the trap the majority of organizations miss: even if you determine a system is not high-risk under Article 6(3), you are legally required to document that assessment before placing the system on the market - and register it in the EU database. <a href="https://www.zerodaydawn.com/p/the-digital-omnibus-trap-eu-ai-act">The &#8220;we&#8217;re not high-risk&#8221; path does not exempt you from documentation.</a> It creates a different documentation obligation. Claiming exemption without documented reasoning is not a defense. It is exposure.</p><p>If you cannot produce this documentation, your certificate is irrelevant. The regulator is not asking about governance maturity. They are asking whether the classification was defensible.</p><p>One impresses procurement. <strong>The other determines whether you can legally operate in the EU.</strong></p><div><hr></div><h2>The Enforcement Mechanics</h2><p>The EU AI Act gives Market Surveillance Authorities specific powers that no management system certificate can override.</p><p>Under Article 80, MSAs have the sole legal mandate to evaluate - and overrule - a provider&#8217;s classification decision. If an organization has declared a system &#8220;not high-risk&#8221; and the regulator disagrees, the regulator&#8217;s determination controls. Your internal analysis, however polished, can be superseded by a single regulatory finding.</p><p>Meanwhile, fourteen separate Commission guidelines are expected before August 2026 - covering everything from high-risk classification to fundamental rights impact assessments. None will carry presumption of conformity. <a href="https://www.zerodaydawn.com/p/the-100-day-mandate">Guidelines clarify interpretation. They do not provide regulatory safe harbor.</a></p><p>If an MSA reverses a classification, Article 79(2) allows them to demand full compliance or market recall within as few as 15 working days. 
<strong>This is not a negotiation timeline. It is a compliance cliff.</strong></p><p>An organization that built its compliance program around an incorrect classification now faces catastrophic rework. Technical documentation that does not exist must be created. Risk management processes that were never implemented must be established. Post-market monitoring that was never scoped must begin. All under deadline pressure with no extensions.</p><p>Article 99 adds financial exposure. Providing misleading information about a system&#8217;s regulatory status - including its high-risk classification - risks fines up to &#8364;7.5 million.</p><p>The ISO 42001 certificate provides zero presumption of conformity for any of this. An organization can be fully certified under ISO 42001 and still face classification reversal, mandatory rework within 15 days, and multi-million euro penalties.</p><p>The badge does not save you. It never could.</p><div><hr></div><h2>Why The Standard Falls Short</h2><p>This assessment is not controversial within regulatory circles.</p><p>The JRC gap analysis found that ISO 42001 lacks the specific safety-by-design mandates required for Annex III high-risk systems. The standard was designed for organizational governance, not product safety. It provides management system frameworks. It does not address the technical requirements the AI Act imposes on high-risk AI.</p><p>The EU AI Office has signaled that ISO 42001 is not a full proxy for the Act&#8217;s requirements. The incompatibility goes deeper than gaps. ISO 42001 was designed for global applicability, including customer and business needs &#8212; not EU product safety law. This is why prEN 18286 - the home-grown European standard being developed by CEN-CENELEC JTC 21 specifically for high-risk AI QMS requirements - is being fast-tracked<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. 
When finalized and harmonized, it will provide the presumption of conformity for Article 17. <strong>ISO 42001 is not part of this harmonization process and never will be.</strong></p><p>The European Commission has not requested harmonized standards for Article 6 classification. The reason is structural: classification is legal interpretation, not technical specification. Standards bodies write technical specifications, not regulatory determinations. That determination sits with you.</p><p>The FDA&#8217;s silence is equally telling. The agency has notably declined to recognize ISO 42001 as a product-safety standard. In both major regulatory jurisdictions - EU and US - the standard has not reached the threshold required for high-risk AI governance.</p><p>Until prEN 18286 is published and harmonized, no certification provides regulatory safe harbor. Organizations must build compliance directly against the regulation&#8217;s requirements - classification, technical documentation, risk management, human oversight - without relying on a management system certificate as a shortcut.</p><p>The shortcut does not exist. The comfortable lie says it does.</p><div><hr></div><h2>The Market Narrative vs. Regulatory Reality</h2><p>Here is what the market tells organizations:</p><p>&#8220;Build your AIMS. Get certified. You will be ahead of competitors. Customers will trust you. Procurement will prefer you. You are demonstrating responsible AI governance.&#8221;</p><p>All of this is true - <strong>for market positioning</strong>.</p><p>Here is what the market does not say:</p><blockquote><p>Certification does not validate your classification decisions. Regulators can overrule your determination regardless of your certificate. <strong>You may be fully certified and fully non-compliant simultaneously</strong>. The badge that wins contracts will not prevent enforcement actions.</p></blockquote><p>The market narrative serves the certification ecosystem. 
Consultants sell readiness programs. Certification bodies sell audits. Organizations buy confidence. Everyone benefits except the organizations that will face enforcement with expensive false confidence and no regulatory shield.</p><p>The distinction between market assurance and regulatory survival is not academic. It is the difference between winning a contract and losing the ability to operate in the EU.</p><div><hr></div><h2>What To Do Now</h2><p>Build the AIMS if it serves your market positioning. Pursue ISO 42001 if procurement requires it. But do not confuse the certificate with the regulatory requirement it does not cover.</p><p>Before certification means anything, answer three questions.</p><p><strong>Who owns classification?</strong> Not who participates in discussions. <a href="https://www.zerodaydawn.com/p/who-owns-your-ai-risk">Who has formal, documented authority to make binding Article 6 determinations?</a> If no one has that authority, you have a governance gap that certification will paper over but cannot close.</p><p><strong>Is the reasoning documented?</strong> For every AI system in your portfolio, can you produce the classification analysis - intended purpose, Annex III mapping, exemption evaluation, profiling assessment - within 15 days if regulators request it? If not, your classification is assumption, not determination. Assumptions do not survive enforcement.</p><p><strong>What happens when systems change?</strong> Feature additions, use case expansion, behavioral drift - each can shift a system&#8217;s regulatory status without a line of code changing. <a href="https://www.zerodaydawn.com/p/who-owns-your-ai-risk">Who monitors for these triggers? What process catches classification drift before regulators do?</a> If no one owns this, your classification is a snapshot. The regulation requires a lifecycle.</p><p><strong>ISO 42001 does not require you to answer these questions. 
The EU AI Act does.</strong></p><p>The Commission has already published guidance on AI system definitions and prohibited practices. These are available now. Waiting for more clarity is waiting for answers to questions you can already address.</p><p>The organizations answering them now will have compliance that maps to regulatory exposure. The organizations hiding behind certification will discover the gap when it is too expensive to close.</p><div><hr></div><h2>Conclusion</h2><p>The certificate that impresses procurement will not shield you from enforcement.</p><p>ISO 42001 certifies that governance infrastructure exists. It says nothing about whether the upstream classification decisions that determine regulatory scope were ever formally made, documented, or defensible.</p><p>Market Surveillance Authorities will not ask about your management system maturity. They will ask about your classification reasoning. They will ask who made the determination. They will ask for documentation you may not have.</p><p>Different auditors. Different questions. Different consequences.</p><p>If you cannot answer, the certificate is expensive false confidence. A process badge mistaken for a regulatory shield.</p><p>Certification is market assurance. Classification is regulatory survival.</p><p><strong>Know the difference before August 2026.</strong></p><div><hr></div><p>If your organization needs a structured methodology for Article 6 classification&#8212;the upstream logic that makes compliance defensible&#8212;I have published the framework I use:</p><p><strong><a href="https://quantumcoherence.ai/b/the-article-6-classification-handbook">The Article 6 Classification Handbook</a></strong></p><p>A practical, defensible methodology for EU AI Act compliance. Not legal advice. Not a compliance shortcut. 
A reasoning architecture for teams that must own their classification decisions.</p><p>This analysis benefited from ongoing dialogue with practitioners shaping the conformity assessment landscape, including <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Adam Leon Smith DEng FBCS&quot;,&quot;id&quot;:351251468,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eb6b884-e7e9-4e64-b672-a12db045c312_679x679.jpeg&quot;,&quot;uuid&quot;:&quot;02b2db1b-fb45-4725-8748-db494bf04f82&quot;}" data-component-name="MentionToDOM"></span>, project leader for prEN 18286 at CEN-CENELEC JTC 21.</p><div><hr></div><h6><strong>Regulatory Disclaimer</strong></h6><h6>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) as of January 2026. 
Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.</h6><h6>Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</h6><h6>Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h6><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For more on prEN 18286 and the standards timeline, see <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Adam Leon Smith DEng FBCS&quot;,&quot;id&quot;:351251468,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eb6b884-e7e9-4e64-b672-a12db045c312_679x679.jpeg&quot;,&quot;uuid&quot;:&quot;3fad7168-bdad-4837-a928-6c89b9d0f4df&quot;}" data-component-name="MentionToDOM"></span>&#8217;s work at <a href="https://adamleonsmith.substack.com">adamleonsmith.substack.com</a></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Your QMS Has an AI Risk Problem]]></title><description><![CDATA[Your QMS is ready. 
Is your AI risk assessment?]]></description><link>https://www.zerodaydawn.com/p/your-qms-has-an-ai-risk-problem</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/your-qms-has-an-ai-risk-problem</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 05 Jan 2026 06:01:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ssh4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ssh4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ssh4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ssh4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3303502,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/183066600?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ssh4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!ssh4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F390aabae-a97b-4e90-aabb-883a36639fa0_1920x1080.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>Executive Summary</h2><p>The EU AI Act created a dependency that most compliance programs ignore.</p><p>Article 17 requires providers of high-risk AI systems to implement a Quality Management System. The requirements are extensive: design controls, data governance, post-market monitoring, incident reporting, accountability frameworks. Organizations have spent months building these systems.</p><p>But Article 17 has a prerequisite. It only applies to high-risk AI systems. 
And the determination of what qualifies as high-risk sits upstream, in Article 6.</p><p>Here is the problem: Article 6 classification is not a one-time decision. Systems evolve. Use cases expand. User behavior drifts. A system that launched as minimal-risk can cross into high-risk territory without a single line of code changing.</p><p>Article 17 knows this. It explicitly requires procedures for &#8220;the management of modifications to the high-risk AI system&#8221; &#8212; including the moment when a system that was not high-risk becomes one.</p><p>The organizations building QMS frameworks without classification authority are building on sand. The organizations treating classification as a launch checkpoint are missing the ongoing obligation entirely.</p><p>This piece explains the dependency, exposes the assumption buried in every QMS, and outlines what it actually takes to close the gap before regulators ask questions you cannot answer.</p><div><hr></div><h2>The Dependency Nobody Talks About</h2><p>Open Article 17 of the EU AI Act. Read the first line.</p><p>&#8220;Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation.&#8221;</p><p>That sentence contains an <em><strong>assumption</strong></em>: the provider already knows which of its systems are high-risk.</p><p>The entire QMS architecture &#8212; design controls, data governance, risk management integration, post-market monitoring, serious incident reporting &#8212; applies only to systems that have been classified as high-risk under Article 6. If a system is not high-risk, Article 17 does not apply. 
<strong>If a system is high-risk but was never classified as such, the provider is non-compliant from day one, regardless of what management systems exist elsewhere in the organization.</strong></p><p>This is not a technicality. It is the load-bearing joint between two regulatory obligations that most organizations treat as separate workstreams.</p><p>The typical compliance program looks like this: one team builds the QMS framework, another team works on AI inventory, a third team handles technical documentation. These workstreams run in parallel, with vague coordination and optimistic assumptions about scope.</p><p><strong>The regulation does not work that way.</strong></p><p>Article 17 paragraph 1(a) requires &#8220;a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system.&#8221;</p><p>You cannot write that strategy without knowing which systems are high-risk. You cannot define modification procedures without knowing the baseline classification. You cannot scope conformity assessment requirements without understanding which pathway applies.</p><p><strong>The QMS is downstream of classification</strong>. Every component of it inherits the assumptions embedded in that upstream decision. If those assumptions are wrong &#8212; or worse, if they were never formally made &#8212; the QMS is anchored to nothing.</p><div><hr></div><h2>The Assumption Buried in Every QMS</h2><p>Every QMS has an implicit classification decision behind it.</p><p>When an organization scopes its quality management system to cover &#8220;our AI-powered recruitment tool&#8221; or &#8220;our customer service automation platform,&#8221; it is making a claim: this system is high-risk, and here is how we manage it. 
Or: this system is not high-risk, and therefore Article 17 does not apply.</p><p>The question is whether that claim was ever formally decided.</p><p>In mature data governance frameworks, classification has an owner. A data owner &#8212; distinct from the system owner &#8212; holds responsibility for data classification and periodic revalidation. The RACI structure is explicit. Accountability is documented.</p><p><strong>AI system classification under the EU AI Act rarely has this discipline.</strong></p><p>The determination of whether a system falls under Annex III, whether exemptions apply, whether the profiling override triggers automatic high-risk designation &#8212; these are legal-technical judgments that sit at the intersection of regulatory interpretation, system architecture, and business context. They require cross-functional input. They require documented reasoning. They require a named individual or body with authority to make binding decisions.</p><p>Without that structure, classification becomes implicit. Someone, somewhere, made an assumption. That assumption propagated into the QMS scope. The QMS was built. And now the organization has a polished management system wrapped around a decision that was never formally made, never documented, and cannot be defended.</p><p>When regulators arrive, they will not ask to see the QMS first. They will ask: on what basis did you determine this system was high-risk? Who made that determination? Where is the reasoning documented?</p><p>If the answer is silence, your QMS is worthless.</p><div><hr></div><h2>Systems Don&#8217;t Stay Where You Classified Them</h2><p>Classification is not a launch checkpoint. <strong>It is a standing obligation.</strong></p><p>This is explicit in the regulation. Article 17 requires providers to maintain procedures for managing modifications &#8212; including modifications that change the risk profile of a system. 
The QMS must include mechanisms for detecting when a system has drifted into territory that triggers new obligations.</p><p>The logic is straightforward. AI systems are not static artifacts. They evolve through:</p><p><strong>Feature additions.</strong> A recommendation engine gains a new input signal. A chatbot gets connected to a customer database. A scheduling tool starts incorporating performance metrics. Each addition can shift the system&#8217;s functional purpose closer to &#8212; or squarely into &#8212; an Annex III category.</p><p><strong>Use case expansion.</strong> A system deployed for internal analytics gets exposed to customer-facing decisions. A tool designed for one department gets adopted by another with different decision authority. The intended purpose described at launch no longer matches the operational reality.</p><p><strong>Behavioral drift.</strong> User interactions shape system outputs in ways the original design did not anticipate. A conversational AI trained on support tickets starts generating responses that influence purchasing decisions. A content moderation tool evolves into a gatekeeper for access to services.</p><p>None of these require code changes. None of them trigger traditional change management processes. All of them can move a system from minimal-risk to high-risk under Article 6.</p><p>The <a href="https://www.linkedin.com/posts/arnoudengelfriet_an-ai-teddy-bear-taught-kids-how-to-start-activity-7398663450102493186-Jrqc?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAFwBeU8Bf1ZN3Gq33hvOhlCh4sTULxLWhQk">AI teddy bear case</a> illustrates this precisely. A children&#8217;s toy with conversational AI capabilities &#8212; assessed, tested, deemed safe for market. Then real-world interactions produced outputs that were inappropriate for the intended users. 
The system was withdrawn, re-assessed, and eventually relaunched with a new risk classification and new safeguards.</p><p>That is the lifecycle the regulation anticipates. Systems launch with one risk profile. Post-market evidence reveals a different reality. Classification must be revisited. Obligations must be updated.</p><p>Organizations that treat classification as a gate to pass &#8212; rather than a condition to monitor &#8212; will discover the gap only when it is too late to close it gracefully.</p><div><hr></div><h2>The Question Regulators Will Ask</h2><p>When market surveillance authorities arrive, they will follow the dependency chain.</p><p>The first question will not be &#8220;show us your QMS documentation.&#8221; It will be earlier. More fundamental.</p><p><strong>Show us the classification decision this QMS was scoped against.</strong></p><p>Not the spreadsheet. Not the internal memo. The formal determination: this system is high-risk under Article 6(1)(a) or Article 6(2), based on this intended purpose, mapped to this Annex III category, with these exemptions considered and rejected for these reasons.</p><p><strong>Show us who had authority to make that determination.</strong></p><p>A named individual. A cross-functional body. Someone with documented mandate to make binding classification decisions &#8212; not recommendations, not opinions, decisions. Someone who can look the regulator in the eye and defend the reasoning.</p><p><strong>Show us how you would know if it changed.</strong></p><p>What triggers reassessment? Who monitors for feature additions that shift functional purpose? What process catches use case expansion before it creates compliance exposure? How does post-market evidence flow back into classification review?</p><p>If the organization cannot answer these questions, the QMS is a charade. 
Expensive, well-documented charade &#8212; but a charade nonetheless.</p><p>The regulation did not create Article 17 obligations so that organizations could build management systems around undefined scope. It created them so that high-risk AI systems would be governed with the rigor their impact demands. That rigor begins with knowing what you are governing.</p><div><hr></div><h2>The Real Cost</h2><p>The budget surprise of 2026 will not be Article 17 documentation.</p><p>Documentation is work. It is predictable work. Teams can scope it, resource it, execute it. The cost is visible from the start.</p><p><strong>The surprise will be rework.</strong></p><p>Rework happens when the classification underneath the QMS turns out to be indefensible. When the system boundary was drawn wrong. When the intended purpose was vague. When an exemption was claimed without sufficient analysis. When the profiling override was missed entirely.</p><p>At that point, the QMS must be re-scoped. Technical documentation must be revised. Conformity assessment pathways must be reconsidered. Post-market monitoring must be restructured. The work that was &#8220;done&#8221; must be done again &#8212; under compressed timelines, with constrained capacity, while the original deadline has not moved.</p><blockquote><p>This is the compounding cost of upstream ambiguity. Every downstream artifact inherits the flaw. 
Every correction propagates through the dependency chain.</p></blockquote><p>Organizations that built classification capability first &#8212; that established authority, documented methodology, created reasoning chains before building the QMS &#8212; will execute 2026 with realistic timelines and predictable costs.</p><p>Organizations that built the QMS first and assumed classification would sort itself out will discover they built a house on a foundation that does not exist.</p><div><hr></div><h2>What This Means for 2026</h2><p>Three questions determine whether your compliance program is structurally sound.</p><p><strong>Before you build the QMS: Who owns classification?</strong></p><p>Not who has opinions. Not who participates in discussions. Who has formal, documented authority to make binding Article 6 determinations? If the answer is &#8220;no one,&#8221; you have a governance gap that no amount of downstream documentation can close.</p><p><strong>Before you scope the QMS: Which systems are high-risk, and why?</strong></p><p>Not which systems are &#8220;probably&#8221; high-risk. Not which systems &#8220;might&#8221; fall under Annex III. Which systems have been formally classified, with documented reasoning chains connecting intended purpose to regulatory category to exemption analysis to final determination? If the answer is &#8220;we haven&#8217;t done that yet,&#8221; your QMS scope is anchored to assumptions, not decisions.</p><p><strong>Before you finalize the QMS: How will you detect when a system crosses the line?</strong></p><p>What triggers reassessment? What evidence flows from post-market monitoring back into classification review? Who is responsible for watching the indicators that signal drift? If the answer is &#8220;we haven&#8217;t thought about that,&#8221; you are treating classification as a snapshot. 
The regulation treats it as a lifecycle.</p><div><hr></div><h2>Conclusion</h2><p>The EU AI Act created a dependency chain that most organizations have inverted.</p><p>They built the QMS first, because QMS requirements are familiar. ISO frameworks exist. Consultants are available. The work is visible and demonstrable.</p><p>They deferred classification, because classification is harder. It requires legal interpretation. It requires technical analysis. It requires cross-functional coordination. The work is invisible until it fails.</p><p>But the regulation does not care which workstream is more comfortable. It cares which workstream is upstream.</p><blockquote><p>Article 17 obligations flow from Article 6 classification. A QMS scoped to the wrong systems is non-compliant. A QMS built without classification authority is unanchored. A QMS that cannot detect drift is incomplete.</p></blockquote><p>The organizations that understand this dependency will build classification capability first. They will establish governance, document methodology, create the reasoning architecture that makes everything downstream defensible.</p><p>The organizations that do not will discover, when regulators arrive, that their QMS has an AI risk problem.</p><p>Compliance is expensive. Building it twice is unaffordable.</p><div><hr></div><p><em>If your organization needs a structured methodology for Article 6 classification &#8212; the upstream logic that makes Article 17 defensible &#8212; I have published the framework I use:</em></p><p><strong><a href="https://quantumcoherence.ai/b/the-article-6-classification-handbook">The Article 6 Classification Handbook</a></strong></p><p><em>A practical, defensible methodology for EU AI Act compliance. Not legal advice. Not a compliance shortcut. 
A reasoning architecture for teams that must own their classification decisions.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h5><strong>Regulatory Disclaimer</strong></h5><h5>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) as of January 2026. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.</h5><h5>Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.</h5><h5>Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.</h5>]]></content:encoded></item><item><title><![CDATA[The 100-Day Mandate]]></title><description><![CDATA[Forget August. 
The battle for AI compliance is won or lost in Q1 2026]]></description><link>https://www.zerodaydawn.com/p/the-100-day-mandate</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/the-100-day-mandate</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 29 Dec 2025 06:01:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!t2w7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!t2w7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!t2w7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!t2w7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3282453,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/182642701?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t2w7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!t2w7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F379cd791-8f1b-4ada-af27-70d718ab388a_1920x1080.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Executive Summary</strong></h2><p>The August 2026 deadline dominates AI Act planning conversations. It shouldn&#8217;t.</p><p>August is when classification documentation must be complete and systems registered. It is not when the work happens &#8212; it is when the work must already be finished.</p><p>The real window is Q1 2026. The first 100 days of the year will determine which organizations reach August with defensible positions and which arrive in crisis mode, compressing months of governance work into weeks.</p><p>This is not a planning quarter. 
It is an execution quarter. Organizations that treat January through March as preparation time will discover in April that preparation time ended in December.</p><p>Three decisions must be made. Three mistakes will derail them. And by April, the gap between organizations that executed and those that waited will be visible &#8212; and difficult to close.</p><div><hr></div><h2><strong>The January Question</strong></h2><p>One question determines whether Q1 produces momentum or paralysis:</p><p><em><strong>Who has authority to make binding classification decisions?</strong></em></p><p>Not who is monitoring the regulation. Not who sits on the AI working group. Who signs the technical file. Who decides that System X is high-risk and System Y qualifies for <a href="https://www.zerodaydawn.com/p/inside-the-eu-ai-act-your-classification">exemption</a>. Who can look a regulator in the eye and <strong>defend the reasoning</strong>.</p><p>Classification sits at the intersection of legal interpretation, technical architecture, and business context. No single function owns all three. <strong>Legal</strong> understands the regulation but cannot assess system architecture. 
<strong>Engineering</strong> understands capabilities but may not grasp regulatory triggers. <strong>Business</strong> defines intended purpose but may not recognize when that purpose crosses a classification threshold.</p><p>Without explicit authority, decisions stall in the gap between functions. Each waits for the other. Meetings produce discussion, not determination. Documentation remains unsigned.</p><p>Organizations that resolved this governance question in December will execute in January. Organizations still debating authority structures will lose weeks &#8212; weeks they cannot afford.</p><p>The first act of Q1 is not classification. It is designating who classifies.</p>
      <p>
          <a href="https://www.zerodaydawn.com/p/the-100-day-mandate">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Who Owns Your AI Risk?]]></title><description><![CDATA[Why AI compliance fails at the boardroom level]]></description><link>https://www.zerodaydawn.com/p/who-owns-your-ai-risk</link><guid isPermaLink="false">https://www.zerodaydawn.com/p/who-owns-your-ai-risk</guid><dc:creator><![CDATA[Violeta Klein, CISSP, AIGP]]></dc:creator><pubDate>Mon, 22 Dec 2025 06:00:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CBi1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CBi1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CBi1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!CBi1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!CBi1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!CBi1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CBi1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3297013,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.zerodaydawn.com/i/182175426?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CBi1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!CBi1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!CBi1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!CBi1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1824a35-2c6e-44c8-bcd0-46786d5d6869_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Executive Summary</strong></h2><p>The EU AI Act is now law. 
By August 2026, every AI system placed on the EU market must have a defensible classification behind it &#8212; documented, justified, and traceable to a named decision-maker. By August 2027, high-risk systems must be fully compliant with risk management, data governance, and technical documentation requirements.</p><p>These deadlines remain anchored despite ongoing <a href="https://www.zerodaydawn.com/p/the-digital-omnibus-trap-eu-ai-act">simplification proposals</a>. Political and enforcement timelines may shift &#8212; classification obligations have not.</p><p>Board-level conversations about AI focus on adoption and competitive advantage. Almost none focus on the question that determines regulatory exposure: who in this organization has authority to classify our AI systems &#8212; and have they done so?</p><p>This is not a technical compliance detail. It is a governance failure with direct financial consequences. Penalties reach &#8364;15 million or 3% of global turnover, and multiple systems mean multiple potential violations.</p><p>Absence of documentation is itself a compliance failure. Absence of ownership is worse &#8212; it means the failure cannot be corrected because no one has authority to sign the paper.</p><p><strong>Five questions determine whether your organization is prepared.</strong></p><div><hr></div><h2><strong>The Board&#8217;s Blind Spot</strong></h2><p>Boards have spent two years hearing about AI&#8217;s transformative potential. They have approved investments, endorsed pilot programmes, and reviewed presentations on use cases from customer service automation to predictive analytics.</p><p>What boards have not received is a clear answer to a simpler question: which of these systems fall under EU AI Act obligations, and <em><strong>who</strong></em> made that determination?</p><p>This gap exists because AI compliance has been treated as a legal or IT problem. It is neither. The EU AI Act is a governance regulation that happens to apply to technology. Its core demand is organizational clarity &#8212; about what systems you operate, what purposes they serve, what risks they create, and <strong>who owns those determinations</strong>.</p><p>The regulation does not distinguish between organizations that deliberately misclassified systems and those that never classified them at all. The question regulators will ask is simple: who made this classification decision, and where is the documentation?</p><p>If no one can answer, the failure is not technical. It is structural.</p><div><hr></div><h2><strong>Question One: Do We Know What We Have?</strong></h2><p>The foundational question is deceptively simple: do we know which AI systems we are placing on the EU market?</p><p>For many organizations, the honest answer is no.</p><p>AI capabilities have proliferated through procurement, not just development. Customer relationship platforms embed predictive scoring. 
HR tools use algorithmic screening. Marketing systems deploy recommendation engines. Finance departments rely on fraud detection models. Each of these may constitute an AI system under the EU AI Act&#8217;s broad definition &#8212; and each requires classification.</p><p><strong>The Procurement Trap:</strong> Many boards assume AI Act obligations apply only to organizations that build AI systems. This is incorrect. If your organization procures an AI-enabled tool and places it on the EU market &#8212; under your brand, integrated into your service, offered to your customers &#8212; classification responsibility transfers to <strong>you</strong>. You become the deployer. In some configurations, you become the provider. The penalties attach to your balance sheet, not your vendor&#8217;s.</p><p>A board that cannot answer &#8220;how many AI systems do we operate &#8212; including procured tools?&#8221; cannot answer &#8220;how many require high-risk compliance?&#8221; <strong>The first ownership gap is not knowing what you own.</strong></p><div><hr></div><h2><strong>Question Two: Who Holds the Pen?</strong></h2><p>Classification under Article 6 is not a checkbox exercise. It is an interpretive judgment requiring understanding of both system architecture and regulatory logic.</p><p>The determination turns on <strong>intended purpose</strong> &#8212; not what a system can do, but what it is meant to do within your operational context. It requires assessment of whether outputs <strong>materially influence</strong> decisions affecting natural persons. A system that &#8220;only recommends&#8221; can still trigger high-risk classification if those recommendations shape hiring decisions, credit assessments, or access to services. 
The question is not whether a human clicks the final button &#8212; it is <strong>whether the system&#8217;s output constrains or directs the human&#8217;s judgment</strong>.</p><p><strong>The Governance Gap:</strong> Legal counsel interprets the regulation but cannot assess system architecture. Technical leads understand capabilities but may not grasp regulatory implications. Business owners define intended purpose but may not recognize when that purpose triggers classification thresholds. No single function can make this determination alone.</p><p><strong>The question for boards</strong>: who has explicit authority to make binding Article 6 determinations? Not &#8220;who is monitoring the regulation.&#8221; Not &#8220;who sits on the working group.&#8221; Who signs the technical file? Who decides that System X is high-risk and System Y qualifies for exemption?</p><p>If that authority has not been formally designated, classification decisions are not being made. They are being deferred. And <strong>deferral has a deadline</strong>.</p><div><hr></div><h2><strong>Question Three: What If We Are Wrong?</strong></h2><p>Classification errors run in two directions, and the costs are not equal.</p><p><strong>Over-classification wastes resources</strong>. Systems incorrectly designated as high-risk trigger compliance obligations costing hundreds of thousands of euros &#8212; risk management frameworks, technical documentation, conformity assessments, quality management systems. For organizations with large AI portfolios, unnecessary high-risk designations consume budgets that should be allocated elsewhere. But over-classification is defensible. It demonstrates caution.</p><p><strong>Under-classification creates liability</strong>. A system classified as exempt that is later determined to be high-risk exposes the organization to penalties, enforcement action, and potential market withdrawal. 
The organization will have operated the system without required safeguards, without mandated documentation, and without human oversight obligations. Under-classification is cheaper &#8212; until enforcement arrives.</p><p><strong>The Board Question:</strong> What is our risk appetite for classification error? Have we defined whether we lean toward caution or efficiency? Has that appetite been communicated to whoever holds classification authority? Do we have any mechanism to detect misclassification before regulators do?</p><p><strong>Ownership means accountability for error</strong>. If no one owns the classification decision, no one owns the consequences of getting it wrong &#8212; until a market surveillance authority assigns ownership for you.</p><div><hr></div><h2><strong>Question Four: Are We Planning Against the Right Deadline?</strong></h2><p>The EU AI Act timeline has been extensively discussed, mostly around the wrong date.</p><p>August 2027 is when full high-risk compliance is required &#8212; risk management systems operational, technical documentation complete, conformity assessments conducted. That date dominates planning conversations.</p><p>August 2026 is when classification obligations crystallize. Every AI system placed on the EU market must have a documented classification determination. <strong>Under current law</strong>, high-risk systems must be registered in the EU database before deployment &#8212; though the <a href="https://www.zerodaydawn.com/p/the-digital-omnibus-trap-eu-ai-act">Digital Omnibus</a> proposes removing registration for exempt systems while maintaining the documentation requirement.</p><p><strong>The Mechanical Truth:</strong> Classification precedes compliance. An organization cannot implement high-risk requirements for systems it has not yet classified. 
August 2027 compliance is impossible without August 2026 classification.</p><p>The <a href="https://www.zerodaydawn.com/p/the-digital-omnibus-trap-eu-ai-act">Digital Omnibus</a> may adjust enforcement mechanics and grace periods. It has not moved the classification anchor. When you place a system on the EU market, the obligation to have a defensible classification behind it crystallizes &#8212; regardless of what simplification measures pass.</p><p>Organizations planning to &#8220;start compliance work in 2026&#8221; are planning to fail. Classification assessment across a complex AI portfolio requires months. Framework design requires months more. Implementation, testing, and validation consume whatever remains.</p><p>Eight months separate today from the August 2026 deadline. That is not a runway for deliberation. It is a runway for execution &#8212; but only if the decision about <strong>who owns classification</strong> has already been made.</p><div><hr></div><h2><strong>Question Five: Fill the Empty Chair</strong></h2><p>Strategic questions matter only if they translate into operational decisions. The board&#8217;s role is not to conduct classification assessments &#8212; it is to ensure someone has explicit authority to do so.</p><p>Classification authority requires <strong>four elements</strong>:</p><p><strong>Mandate:</strong> Formal authorization to make binding Article 6 determinations on behalf of the organization.</p><p><strong>Composition:</strong> Cross-functional representation &#8212; legal, technical, business, and compliance perspectives integrated into a single decision-making body.</p><p><strong>Methodology:</strong> Documented process for assessment, including how intended purpose is determined, how material influence is evaluated, and how edge cases are escalated. 
This is where structured frameworks &#8212; such as the one I&#8217;ve published in <em><a href="https://quantumcoherence.ai/b/the-article-6-classification-handbook">The Article 6 Classification Handbook</a></em> &#8212; translate governance decisions into repeatable, defensible methodology.</p><p><strong>Accountability:</strong> Named individuals whose signatures appear on classification documentation and who can defend those determinations under regulatory scrutiny.</p><p>This does not require external consultants. It does not require new technology. It requires a governance decision that has been deferred because the deadline felt distant and the regulation felt uncertain.</p><div><hr></div><h2><strong>The Governance Imperative</strong></h2><p>The EU AI Act does not ask whether your AI systems are technically sophisticated. It asks whether your organization can demonstrate it made <strong>deliberate, documented decisions</strong> about what those systems are and how they should be classified.</p><p>That demonstration requires ownership &#8212; of inventory, of methodology, of risk appetite, of timeline, and of authority.</p><p>The regulatory picture continues to evolve. Just this week, <a href="https://ec.europa.eu/newsroom/sante/items/915785/en">DG SANTE</a> proposed <a href="https://health.ec.europa.eu/document/download/25e7ea7c-cab3-40cf-86d9-d11f5e7744d8_en?filename=md_com_2025-1023_act_en.pdf">amendments</a> that would shift medical devices to a different compliance pathway under the AI Act. 
As regulatory analyst Laura Caroli has noted, if adopted, <a href="https://substack.com/home/post/p-182158685">this could set precedent</a> for other sectors to seek similar carve-outs. Organizations that build classification capability now will be positioned to adapt. Those waiting for final clarity may find that clarity keeps receding.</p><p>Boards that treat AI compliance as someone else&#8217;s problem will discover it became their problem the moment no one could answer the question that regulators will certainly ask:</p><p><em><strong>Who made this classification decision, and where is the documentation?</strong></em></p><p>August 2026 is coming. Fill the empty chair.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.zerodaydawn.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.zerodaydawn.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h5><strong>Regulatory Disclaimer</strong></h5><h5>This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689). It does not constitute legal advice, regulatory interpretation, or compliance certification. Classification determinations depend on specific system architectures, organizational context, and deployment environments. Organizations should consult qualified legal counsel before making classification decisions.</h5>]]></content:encoded></item></channel></rss>