Zero-Day Dawn

Deploy First, Comply Never

A field guide to everything AI agent builders get wrong about the EU AI Act

Violeta Klein, CISSP, CEFA
Mar 02, 2026

Executive Summary

This article is for every team that has built, shipped, or deployed an AI agent without asking whether the EU AI Act applies to them. It does. This is the field guide to the fourteen regulatory traps waiting between your deployment and your first enforcement action.

You built an agent. You shipped it. Users in the EU are using it. You are now operating inside a regulatory framework you probably haven’t read — and the obligations it imposes are already binding.

The EU AI Act entered into force in August 2024. Its bans on prohibited AI practices have been enforceable since February 2025. General-purpose AI obligations landed in August 2025. Broad transparency rules and most high-risk obligations become enforceable in August 2026, and the remainder (chiefly high-risk systems embedded in regulated products) follows in August 2027.

What follows is every assumption AI agent builders are making that will not survive enforcement. Each one is a trap. Each trap has a regulatory consequence. None of them require you to be a large company, a European company, or a company that intended to operate in the EU.


1. The Extraterritorial Trigger

What you assume: You’re not based in the EU, so the EU AI Act doesn’t apply to you.

What actually happens: The regulation follows output, not headquarters. If your AI agent produces a recommendation, a classification, a score, or a decision that is used by a natural person inside the EU, you are in scope. It does not matter where your servers are. It does not matter where your company is incorporated. It does not matter whether you intended your agent to reach the EU market.

A recruiter in Paris uses your agent to screen candidates. A bank in Amsterdam uses it to flag transaction risk. A university in Milan uses it to evaluate student submissions. You are now a provider or deployer under the EU AI Act — and the obligations that come with that status are enforceable against you.

What it costs you: Penalties for non-compliance with high-risk or transparency obligations reach €15 million or 3% of global annual turnover, whichever is higher. Supplying misleading information to regulators carries fines of up to €7.5 million or 1% of global annual turnover, again whichever is higher. These are not theoretical. They are statutory.
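As a back-of-the-envelope sketch of that arithmetic (the turnover figure is invented, and this is a simplification, not legal advice; Article 99 actually gives SMEs and start-ups the lower of the two amounts rather than the higher):

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float,
                 sme: bool = False) -> float:
    """Statutory ceiling: the higher of a fixed amount or a percentage of
    total worldwide annual turnover (the lower of the two for SMEs)."""
    fixed, proportional = fixed_cap_eur, pct * turnover_eur
    return min(fixed, proportional) if sme else max(fixed, proportional)

# High-risk / transparency violations: EUR 15M or 3% of turnover
print(fine_ceiling(2_000_000_000, 15_000_000, 0.03))  # 60,000,000.0

# Misleading information to regulators: EUR 7.5M or 1%
print(fine_ceiling(2_000_000_000, 7_500_000, 0.01))   # 20,000,000.0
```

The “whichever is higher” rule is the part that bites: at €2 billion in turnover, the ceiling is €60 million, not €15 million.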


2. Classification Creep

What you assume: You built a productivity tool. An assistant. A workflow optimizer. Not a high-risk AI system.

What actually happens: The EU AI Act does not classify systems by what you call them. It classifies them by their legally defined intended purpose. Your agent screens job applicants — that is an employment decision system under Annex III. Your agent evaluates the creditworthiness of individual consumers — that is a financial access system under Annex III. Your agent monitors employee performance or allocates tasks — employment domain, Annex III. Your agent influences a student’s educational progression — education domain, Annex III.

You did not design a high-risk system. But you deployed one. The gap between the generic tool you built and the high-risk task it now performs is where enforcement lives.

The sneakiest triggers are the ones builders never anticipate. If an internal research agent is repurposed to access HR data and generate recommendations affecting hiring decisions, its intended purpose has legally changed. It has entered an Annex III employment domain. Whoever made that change is now legally the provider of a high-risk AI system.
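Purely as a back-of-the-envelope illustration (the domain list below is a hypothetical simplification of Annex III, not the legal test), classification is a function of what the system is used for, not what you named it:

```python
# Hypothetical simplification of a few Annex III domains that commonly
# catch general-purpose agents. The real test is legal, not lexical.
ANNEX_III_DOMAINS = {
    "employment",  # screening applicants, allocating tasks, monitoring performance
    "education",   # evaluating students, steering educational progression
    "credit",      # creditworthiness of natural persons
    "insurance",   # risk assessment and pricing in life and health insurance
}

def is_high_risk(intended_purpose_domain: str) -> bool:
    """Illustrative check: classification follows the system's intended
    purpose, not its product name or marketing copy."""
    return intended_purpose_domain in ANNEX_III_DOMAINS

# The "internal research agent" repurposed to rank job applicants:
print(is_high_risk("employment"))  # True: high-risk, every obligation attaches
```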

What it costs you: Every downstream obligation — risk management, conformity assessment, documentation, human oversight, logging — is triggered by classification. Get classification wrong and everything you build on top of it is wasted. Get it right and you know what you owe. Skip it and a regulator will do the classification for you.


3. The Liability Shift

What you assume: You are the provider. Your enterprise customers are deployers. They handle human oversight and logging on their end. Clean separation.

What actually happens: If your customer modifies the intended purpose of your agent — deploys it in a domain you did not anticipate, connects it to data sources you did not design for, or changes how it interacts with end users — they may have just triggered a legal conversion. A deployer who makes a substantial modification to an AI system, or who changes its intended purpose such that it becomes high-risk, assumes the full obligations of a provider. That means conformity assessment, technical documentation, risk management, and post-market monitoring — all of it shifts to the deployer.

But here’s the trap nobody discusses: when your customer becomes the provider through modification, the original provider (you) must legally cooperate. You are required to make the necessary information available and to provide technical access and assistance so the new provider can meet their obligations. The only way out? You must explicitly specify in your terms that the system is not to be changed into a high-risk AI system. If you didn’t write that in, you owe them your documentation, limited only by strict trade secret protections.
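One defensive pattern worth sketching (an illustration, not established practice; the manifest format and field names are invented) is to ship the intended-purpose restriction as a machine-readable artifact next to the contract language, so both sides can audit what was agreed:

```python
# Hypothetical "intended purpose" manifest shipped with an agent;
# field names are invented for illustration, not a standard.
INTENDED_PURPOSE = {
    "system": "acme-research-agent",
    "permitted_domains": {"internal research", "document summarization"},
    # The escape hatch: only effective if the same restriction
    # appears in the contract terms themselves.
    "high_risk_modification_permitted": False,
}

def check_deployment(declared_domain: str) -> None:
    """Illustrative deploy-time guard: refuse configurations outside the
    purpose the provider actually assessed and documented."""
    if declared_domain not in INTENDED_PURPOSE["permitted_domains"]:
        raise ValueError(
            f"{declared_domain!r} is outside the documented intended purpose; "
            "deploying anyway may make the deployer the provider of a "
            "high-risk AI system."
        )

check_deployment("internal research")        # passes silently
try:
    check_deployment("candidate screening")  # employment domain
except ValueError as err:
    print(err)
```

A runtime guard has no legal force on its own, but it makes the contractual restriction observable and loggable.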

What it costs you: You lose control of how your system is classified. Your customer’s deployment decision creates obligations for both of you. And if you didn’t explicitly forbid high-risk modification in your contracts, enforcement will find two parties pointing at each other with nobody holding the compliance obligations.


4. The Open-Source Illusion

What you assume: You built your agent on an open-source model. Open-source means lighter regulatory requirements. You’re covered.

What actually happens: The EU AI Act offers limited transparency exemptions for open-source general-purpose AI models. Those exemptions apply to the model layer. The moment you integrate that model into an AI system that qualifies as high-risk under Annex III — because it screens applicants, evaluates creditworthiness, assesses insurance risk, or influences educational outcomes — every exemption vanishes. The system-level obligations apply in full.

Open-source is a licensing model. It is not a regulatory shield. The regulation does not care how your model was licensed. It cares what the system built on top of it does to people.
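A sketch of the layering (same hypothetical domain list as above; illustrative, not a legal test):

```python
# Illustrative two-layer view; a simplification, not a legal test.
ANNEX_III_DOMAINS = {"employment", "education", "credit", "insurance"}

def model_layer_exempt(open_source: bool, systemic_risk: bool) -> bool:
    """The Act's limited GPAI documentation exemptions apply at the model
    layer, and only to open-source models without systemic risk."""
    return open_source and not systemic_risk

def system_is_high_risk(purpose_domain: str) -> bool:
    """The system layer never inherits the model layer's exemption:
    classification attaches to what the system does to people."""
    return purpose_domain in ANNEX_III_DOMAINS

# Open-weights model wrapped into an applicant-screening agent:
print(model_layer_exempt(open_source=True, systemic_risk=False))  # True
print(system_is_high_risk("employment"))  # True: full system obligations
```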

What it costs you: Every builder using open-source models who assumed lighter obligations now has the same compliance burden as a proprietary system deployed in the same domain. The model’s license changed nothing about the system’s classification.


The first four traps are the ones that catch builders before they even know they’re playing. What follows are the ten operational traps that determine whether your deployed agent survives its first regulatory inquiry — or becomes the case study other builders learn from.
