The 2025 AI Act Reckoning
The year the regulation became real - and most organizations still can’t answer the question that matters
Executive Summary
2025 was the year the EU AI Act stopped being theoretical. Prohibited practices took effect in February. GPAI obligations activated in August. The Digital Omnibus landed in November. In December, the EU Fundamental Rights Agency highlighted persistent weaknesses in how high-risk AI classification is being applied in practice.
And yet: most organizations still cannot answer the one question that determines their entire compliance trajectory - which of our systems are high-risk under Annex III?
That is the reckoning. Not regulatory. Organizational.
The regulation moved. The market largely didn’t. And for organizations that spent 2025 waiting for clarity, 2026 begins in catch-up mode - with compressed timelines, constrained external capacity, and compounding costs.
This article is not a call to panic. December is not action season. But it is a moment to be honest about what happened this year, what didn’t, and what the January scramble will cost organizations that delayed classification work.
The gap between regulatory reality and organizational readiness is now visible. 2026 will determine who closes it.
What Happened in 2025
The EU AI Act’s implementation timeline advanced faster than many leadership teams anticipated.
February 2, 2025: Prohibited practices became applicable. Organizations deploying social scoring systems, manipulative AI, or certain forms of real-time biometric identification in public spaces entered immediate enforcement exposure. The regulation became operative in law, not merely in guidance.
August 2, 2025: GPAI model obligations activated. Providers of general-purpose AI models became subject to documentation, transparency, and systemic risk obligations. The upstream supply chain moved inside the regulatory perimeter.
November 19, 2025: The European Commission published the Digital Omnibus proposal. Expectations of material simplification proved misplaced. The proposal focused on procedural adjustments - particularly for SMEs - without altering the underlying classification logic of the Act. Organizations waiting for the Omnibus to reduce their obligations discovered it did not.
December 4, 2025: The Fundamental Rights Agency released its report on high-risk AI systems. The findings highlighted recurring stress points at Article 6: scope narrowing through restrictive AI definitions, expansive use of the procedural-task filter, and routine failure to apply the profiling override that triggers automatic high-risk designation.
Meanwhile, the broader regulatory ecosystem advanced unevenly. Draft standards progressed, though full adoption is now expected in 2027 - later than originally anticipated. The AI cybersecurity standards landscape resolved into three distinct tiers - guidance, certifiable baselines, and presumption of conformity - with only prEN 18282 positioned for eventual citation in the Official Journal. Member State authority designations lagged behind the August 2025 deadline.
The regulation moved forward. The clarity organizations said they needed didn’t arrive in the form they expected.
What Didn’t Happen
Most organizations did not build classification capability.
The question that should now be answerable - which of our AI systems fall under Annex III, and why? - remains unanswered across a large share of SMEs and mid-market organizations.
The symptoms are consistent:
No AI inventory exists.
Systems are dispersed across business units, embedded in vendor tools, or deployed without centralized visibility. Leadership cannot reliably enumerate what is in use, let alone classify it.
Intended purpose is undefined.
Internal documentation relies on vague, promotional language - “supports decision-making,” “improves efficiency” - rather than operational descriptions that enable legal and technical analysis.
Classification logic is missing.
Teams may hold opinions about whether a system is high-risk, but lack a documented reasoning chain connecting that view to Article 6 criteria and Annex III categories.
Ownership is unassigned.
No individual or body holds formal authority to make binding classification determinations. Decisions stall in the gap between legal, engineering, and business functions.
The FRA report confirmed what practitioners already observe: even experts struggle to classify the same system consistently when intended purpose is vague and system boundaries are implicit. Organizations without structured methodology are not making defensible decisions. They are making guesses.
2025 was the year to build the upstream logic that makes everything else possible. For organizations that did not, every downstream compliance workstream - QMS design, technical documentation, conformity assessment planning - remains unanchored.
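To make that upstream logic concrete: a minimal inventory entry pairs a system identifier with an operational intended-purpose description. The sketch below is illustrative only - the field names are mine, not a template prescribed by the Act - but it shows the difference between "supports decision-making" and a description a lawyer or engineer can actually analyze.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative fields only)."""
    system_id: str                  # internal identifier, e.g. "HR-SCREEN-01"
    owner: str                      # business unit accountable for the system
    provider_or_deployer: str       # the organization's role under the AI Act for this system
    intended_purpose: str           # operational description, not marketing language
    affected_persons: list[str] = field(default_factory=list)   # e.g. ["job applicants"]
    decisions_influenced: str = ""  # concrete downstream effect, e.g. "shortlisting for interview"
    embedded_in_vendor_tool: bool = False

# Example: a vague purpose ("supports decision-making") rewritten operationally.
record = AISystemRecord(
    system_id="HR-SCREEN-01",
    owner="Human Resources",
    provider_or_deployer="deployer",
    intended_purpose="Ranks incoming CVs against job requirements and flags candidates for interview",
    affected_persons=["job applicants"],
    decisions_influenced="Determines which applicants proceed to human review",
    embedded_in_vendor_tool=True,
)
```

An inventory written at this level of specificity is what makes the classification step possible at all; the vague version cannot be mapped to Annex III by anyone.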
Why Article 6 Is the Bottleneck (Not a Detail)
Article 6 is not an administrative step. It is the load-bearing point of the AI Act.
Classification determines whether obligations apply at all, which obligation set applies, and whether conformity assessment is required. Every downstream requirement - risk management, data governance, human oversight, technical documentation, post-market monitoring - depends on this determination.
When classification is wrong, everything built on top of it is unstable. No management system, standard, or audit can compensate for an incorrect scope decision.
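To illustrate why the determination is load-bearing, here is a rough sketch of the Annex III branch of the Article 6 decision flow, assuming intended purpose and Annex III mapping have already been established. It is deliberately simplified, the field names are illustrative, and it is no substitute for legal analysis - but it shows how a single input, such as whether the system profiles natural persons, flips the entire downstream obligation set.

```python
from dataclasses import dataclass

@dataclass
class ClassificationInput:
    """Facts needed for the Annex III branch of Article 6 (illustrative, not exhaustive)."""
    annex_iii_category: str | None       # e.g. "employment", or None if no Annex III match
    performs_profiling: bool             # profiling of natural persons
    narrow_procedural_task: bool         # Art. 6(3)(a)-style filter
    improves_prior_human_activity: bool  # Art. 6(3)(b)
    detects_patterns_only: bool          # Art. 6(3)(c)
    preparatory_task_only: bool          # Art. 6(3)(d)

def classify_annex_iii(c: ClassificationInput) -> str:
    """Rough decision flow for the Annex III branch of Article 6; not legal advice."""
    if c.annex_iii_category is None:
        return "outside Annex III (assess the Annex I / safety-component branch separately)"
    if c.performs_profiling:
        # Profiling override: the Art. 6(3) exemptions do not apply.
        return "high-risk (profiling override)"
    if any([c.narrow_procedural_task, c.improves_prior_human_activity,
            c.detects_patterns_only, c.preparatory_task_only]):
        return "potentially exempt under Art. 6(3) - document the reasoning and keep it reviewable"
    return "high-risk (Annex III, no exemption applies)"
```

Every output of that flow selects a different downstream workload, which is why an error here cannot be repaired later by better documentation.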
Why the January Scramble Is Already Priced In
The cost of delay is not abstract. It compounds through three predictable mechanisms.
1. External capacity constraints.
Following the Digital Omnibus publication, demand for AI Act advisory support increased sharply. Organizations entering the market in Q1 2026 will compete for limited specialist capacity, often on less favorable commercial terms and with longer lead times.
2. Timeline compression.
The August 2, 2026 milestone is not when compliance work concludes; it is when classification documentation must be complete and systems registered. Organizations beginning classification in early 2026 face a compressed sequence: classification, followed immediately by framework design and implementation, with minimal buffer for correction.
Organizations that completed classification in 2025 enter 2026 with a structural advantage: defined scope, realistic budgets, and executable sequencing.
3. Organizational learning curves.
Classification is not a procurement exercise. It is a reasoning capability.
Teams encountering Article 6 methodology for the first time will predictably misclassify models instead of systems, overlook downstream decision effects, overextend the procedural-task filter, or miss profiling overrides. These errors often surface late - during documentation review or regulatory inquiry - when correction is costly.
Organizations that built classification capability in 2025 have already encountered and resolved these failure modes. That institutional learning cannot be accelerated on demand.
The One Decision That Matters Before Q1
This is not a call to launch a compliance program in December. Bandwidth is limited. Teams are exhausted.
But one decision can be made now, and it determines whether January begins with momentum or paralysis:
Designate who has authority to classify AI systems under Article 6.
Classification sits at the intersection of legal interpretation, technical architecture, and business context. Without explicit authority, decisions stall as functions wait on one another.
Organizations that move fastest in 2026 will be those that resolved this governance question in 2025 - by appointing a single accountable owner or a cross-functional body with binding decision rights.
That decision costs nothing. And it removes the single largest bottleneck to Q1 execution.
The Year Ahead
2026 will separate organizations into two categories.
The first built classification capability in 2025. They have AI inventories, documented intended purposes, and reasoning chains connecting systems to Article 6 and Annex III. They know which systems are high-risk, which are exempt, and why.
These organizations will execute compliance programs with realistic timelines, predictable costs, and defensible documentation.
The second category waited - for the Omnibus, for guidelines, for certainty that never arrived in the form they expected. They will begin 2026 without classification logic, competing for constrained capacity, compressing already tight timelines, and absorbing avoidable risk.
The reckoning is not regulatory. Enforcement will come, but not immediately. The reckoning is organizational: the gap between what the regulation requires and what an organization can actually demonstrate.
2025 made that gap visible. 2026 will determine who closes it.
Building the Capability
Classification is not a label. It is a reasoning architecture.
Organizations that intend to own classification internally require documented frameworks for intended purpose decomposition, Annex III mapping, exemption logic, and defensible decision records.
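As a hedged illustration of what a defensible decision record can capture - the structure and field names below are mine, not drawn from the handbook or the Act - the minimum is a reasoning chain a reviewer can reconstruct:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationDecision:
    """Minimal structure for a defensible Article 6 decision record (illustrative)."""
    system_id: str
    intended_purpose: str    # operational description used as the basis for the decision
    annex_iii_mapping: str   # matched category, or "none", with the reasoning for the match
    exemption_assessment: str  # which Art. 6(3) filters were considered and why they do or don't apply
    profiling_check: str     # explicit statement on whether the system profiles natural persons
    outcome: str             # "high-risk", "not high-risk", or "out of scope"
    decided_by: str          # the owner or body holding classification authority
    decided_on: date
    review_trigger: str      # what change in purpose or deployment reopens the decision
```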
The methodology I use to build that architecture is documented in The Article 6 Classification Handbook. It provides structured decision frameworks and documentation logic for teams that must make and defend classification decisions before August 2026.
Not legal advice. Not a compliance shortcut. A methodology for organizations that must own their reasoning.
The handbook is available here.
Regulatory Disclaimer
This article provides educational analysis of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and related developments as of December 2025. Nothing in this article constitutes legal advice, regulatory interpretation, or compliance certification.
Organizations should consult qualified legal counsel specializing in EU AI Act compliance before making classification determinations or deployment decisions.
Quantum Coherence LLC does not provide legal advice or regulatory compliance determinations.