Who Owns Your AI Risk?
Why AI compliance fails at the boardroom level
Executive Summary
The EU AI Act is now law. By August 2026, every AI system placed on the EU market must have a defensible classification behind it — documented, justified, and traceable to a named decision-maker. By August 2027, high-risk systems must be fully compliant with risk management, data governance, and technical documentation requirements.
These deadlines remain anchored despite ongoing simplification proposals. Political and enforcement timelines may shift — classification obligations have not.
Board-level conversations about AI focus on adoption and competitive advantage. Almost none focus on the question that determines regulatory exposure: who in this organization has authority to classify our AI systems — and have they done so?
This is not a technical compliance detail. It is a governance failure with direct financial consequences. Penalties reach €15 million or 3% of global turnover, whichever is higher, and multiple systems mean multiple potential violations.
Absence of documentation is itself a compliance failure. Absence of ownership is worse — it means the failure cannot be corrected because no one has authority to sign the paper.
Five questions determine whether your organization is prepared.
The Board’s Blind Spot
Boards have spent two years hearing about AI’s transformative potential. They have approved investments, endorsed pilot programmes, and reviewed presentations on use cases from customer service automation to predictive analytics.
What boards have not received is a clear answer to a simpler question: which of these systems fall under EU AI Act obligations, and who made that determination?
This gap exists because AI compliance has been treated as a legal or IT problem. It is neither. The EU AI Act is a governance regulation that happens to apply to technology. Its core demand is organizational clarity — about what systems you operate, what purposes they serve, what risks they create, and who owns those determinations.
The regulation does not distinguish between organizations that deliberately misclassified systems and those that never classified them at all. The question regulators will ask is simple: who made this classification decision, and where is the documentation?
If no one can answer, the failure is not technical. It is structural.
Question One: Do We Know What We Have?
The foundational question is deceptively simple: do we know which AI systems we are placing on the EU market?
For many organizations, the honest answer is no.
AI capabilities have proliferated through procurement, not just development. Customer relationship platforms embed predictive scoring. HR tools use algorithmic screening. Marketing systems deploy recommendation engines. Finance departments rely on fraud detection models. Each of these may constitute an AI system under the EU AI Act’s broad definition — and each requires classification.
The Procurement Trap: Many boards assume AI Act obligations apply only to organizations that build AI systems. This is incorrect. If your organization procures an AI-enabled tool and uses it in its own operations, you become the deployer, with oversight obligations of your own. If you go further and place that tool on the EU market under your own name or brand, integrated into your service and offered to your customers, you become the provider, with the full compliance burden that entails. Either way, classification responsibility sits with you, and the penalties attach to your balance sheet, not your vendor’s.
A board that cannot answer “how many AI systems do we operate — including procured tools?” cannot answer “how many require high-risk compliance?” The first ownership gap is not knowing what you own.
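As an illustration only, the sketch below shows one shape an AI system inventory entry might take. The field names (system_name, source, classification_owner, and so on) and the vendor shown are assumptions for the example, not anything prescribed by the Act; the point is simply that procured tools sit in the same register as built ones, and that every entry carries a classification status and a named owner.

```python
from dataclasses import dataclass
from enum import Enum


class Source(Enum):
    BUILT_IN_HOUSE = "built in-house"
    PROCURED = "procured vendor tool"
    PROCURED_REBRANDED = "procured, offered under our own brand"


@dataclass
class AISystemRecord:
    """One entry in an illustrative AI system inventory; field names are assumptions."""
    system_name: str
    business_owner: str              # who can state the intended purpose
    source: Source                   # built, procured, or procured and rebranded
    vendor: str | None               # who supplied it, if procured
    intended_purpose: str            # what it is meant to do in our operational context
    affects_natural_persons: bool    # do outputs influence decisions about people?
    classification_status: str = "unclassified"  # e.g. "unclassified", "high-risk", "exempt"
    classification_owner: str = "UNASSIGNED"     # who holds the pen on this determination


inventory = [
    AISystemRecord(
        system_name="CV screening module (HR suite)",
        business_owner="Head of Talent Acquisition",
        source=Source.PROCURED,
        vendor="ExampleVendor Ltd",              # hypothetical vendor
        intended_purpose="Rank inbound applications for recruiter review",
        affects_natural_persons=True,
    ),
]

# The first ownership gap, made visible: systems with no determination and no named owner.
unclassified = [r for r in inventory if r.classification_status == "unclassified"]
unowned = [r for r in inventory if r.classification_owner == "UNASSIGNED"]
print(f"{len(unclassified)} systems await classification; {len(unowned)} have no named owner.")
```

A register like this does not answer the classification question, but it makes the two gaps visible: systems with no determination, and determinations with no owner.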
Question Two: Who Holds the Pen?
Classification under Article 6 is not a checkbox exercise. It is an interpretive judgment requiring understanding of both system architecture and regulatory logic.
The determination turns on intended purpose — not what a system can do, but what it is meant to do within your operational context. It requires assessment of whether outputs materially influence decisions affecting natural persons. A system that “only recommends” can still trigger high-risk classification if those recommendations shape hiring decisions, credit assessments, or access to services. The question is not whether a human clicks the final button — it is whether the system’s output constrains or directs the human’s judgment.
The Governance Gap: Legal counsel interprets the regulation but cannot assess system architecture. Technical leads understand capabilities but may not grasp regulatory implications. Business owners define intended purpose but may not recognize when that purpose triggers classification thresholds. No single function can make this determination alone.
The question for boards: who has explicit authority to make binding Article 6 determinations? Not “who is monitoring the regulation.” Not “who sits on the working group.” Who signs the technical file? Who decides that System X is high-risk and System Y qualifies for exemption?
If that authority has not been formally designated, classification decisions are not being made. They are being deferred. And deferral has a deadline.
Question Three: What If We Are Wrong?
Classification errors run in two directions, and the costs are not equal.
Over-classification wastes resources. Systems incorrectly designated as high-risk trigger compliance obligations costing hundreds of thousands of euros — risk management frameworks, technical documentation, conformity assessments, quality management systems. For organizations with large AI portfolios, unnecessary high-risk designations consume budgets that should be allocated elsewhere. But over-classification is defensible. It demonstrates caution.
Under-classification creates liability. A system classified as exempt that is later determined to be high-risk exposes the organization to penalties, enforcement action, and potential market withdrawal. The organization will have operated the system without required safeguards, without mandated documentation, and without human oversight obligations. Under-classification is cheaper — until enforcement arrives.
The Board Question: What is our risk appetite for classification error? Have we defined whether we lean toward caution or efficiency? Has that appetite been communicated to whoever holds classification authority? Do we have any mechanism to detect misclassification before regulators do?
Ownership means accountability for error. If no one owns the classification decision, no one owns the consequences of getting it wrong — until a market surveillance authority assigns ownership for you.
Question Four: Are We Planning Against the Right Deadline?
The EU AI Act timeline has been extensively discussed, mostly around the wrong date.
August 2027 is when full high-risk compliance is required — risk management systems operational, technical documentation complete, conformity assessments conducted. That date dominates planning conversations.
August 2026 is when classification obligations crystallize. Every AI system placed on the EU market must have a documented classification determination. Under current law, high-risk systems must be registered in the EU database before deployment, and providers who judge an Annex III system exempt must register that assessment as well — though the Digital Omnibus proposes removing registration for exempt systems while maintaining the documentation requirement.
The Mechanical Truth: Classification precedes compliance. An organization cannot implement high-risk requirements for systems it has not yet classified. August 2027 compliance is impossible without August 2026 classification.
The Digital Omnibus may adjust enforcement mechanics and grace periods. It has not moved the classification anchor. When you place a system on the EU market, the obligation to have a defensible classification behind it crystallizes — regardless of what simplification measures pass.
Organizations planning to “start compliance work in 2026” are planning to fail. Classification assessment across a complex AI portfolio requires months. Framework design requires months more. Implementation, testing, and validation consume whatever remains.
Eight months separate today from the August 2026 deadline. That is not a runway for deliberation. It is a runway for execution — but only if the decision about who owns classification has already been made.
Question Five: Fill the Empty Chair
Strategic questions matter only if they translate into operational decisions. The board’s role is not to conduct classification assessments — it is to ensure someone has explicit authority to do so.
Classification authority requires four elements:
Mandate: Formal authorization to make binding Article 6 determinations on behalf of the organization.
Composition: Cross-functional representation — legal, technical, business, and compliance perspectives integrated into a single decision-making body.
Methodology: Documented process for assessment, including how intended purpose is determined, how material influence is evaluated, and how edge cases are escalated. This is where structured frameworks — such as the one I’ve published in The Article 6 Classification Handbook — translate governance decisions into repeatable, defensible methodology.
Accountability: Named individuals whose signatures appear on classification documentation and who can defend those determinations under regulatory scrutiny.
This does not require external consultants. It does not require new technology. It requires a governance decision that has been deferred because the deadline felt distant and the regulation felt uncertain.
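To make those four elements concrete, here is a minimal, illustrative sketch of what a single classification decision record might capture. None of it is mandated by the Act or drawn from the handbook; the names (ClassificationDecision, methodology_ref, signed_by, and the body and signatory shown) are assumptions for the example. The point is that mandate, methodology, and accountability reduce to fields a named person must be able to fill in and defend.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ClassificationDecision:
    """Illustrative Article 6 decision record; field names are assumptions, not prescribed."""
    system_name: str
    intended_purpose: str        # the purpose assessed, in the organization's operational context
    material_influence: str      # reasoning on whether outputs shape decisions about people
    determination: str           # e.g. "high-risk (Annex III)" or "exempt under Article 6(3)"
    methodology_ref: str         # which documented assessment process was applied
    decided_by: str              # the body holding the formal mandate
    signed_by: str               # the named individual accountable for the determination
    decision_date: date
    review_date: date            # when the determination is revisited


record = ClassificationDecision(
    system_name="CV screening module (HR suite)",
    intended_purpose="Rank inbound applications for recruiter review",
    material_influence="Rankings direct which candidates recruiters see first; "
                       "the output constrains human judgment even though a human decides.",
    determination="high-risk (Annex III, employment)",
    methodology_ref="Internal classification methodology v1.0",  # hypothetical reference
    decided_by="AI Classification Board",                        # hypothetical body name
    signed_by="Jane Doe, Chief Risk Officer",                    # hypothetical signatory
    decision_date=date(2026, 3, 1),
    review_date=date(2027, 3, 1),
)

# The regulator's question maps to two fields: who made the decision, and where it is documented.
print(f"{record.system_name}: {record.determination}, signed by {record.signed_by}.")
```

If the signed_by and methodology_ref fields cannot be filled in today, the organization does not yet have classification authority; it has a working group.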
The Governance Imperative
The EU AI Act does not ask whether your AI systems are technically sophisticated. It asks whether your organization can demonstrate it made deliberate, documented decisions about what those systems are and how they should be classified.
That demonstration requires ownership — of inventory, of methodology, of risk appetite, of timeline, and of authority.
The regulatory picture continues to evolve. Just this week, DG SANTE proposed amendments that would shift medical devices to a different compliance pathway under the AI Act. As regulatory analyst Laura Caroli has noted, if adopted, this could set a precedent for other sectors to seek similar carve-outs. Organizations that build classification capability now will be positioned to adapt. Those waiting for final clarity may find that clarity keeps receding.
Boards that treat AI compliance as someone else’s problem will discover it became their problem the moment no one could answer the question that regulators will certainly ask:
Who made this classification decision, and where is the documentation?
August 2026 is coming. Fill the empty chair.


