The EU AI Act: Overview and Scope
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation. It entered into force in August 2024, with phased application through 2027. Any organization selling into the EU, processing data of EU residents, or operating in the EU needs to comply. This is not optional and not just for EU companies: it applies globally.
Scope: The Act covers "AI systems": machine-based systems that infer from their inputs how to generate outputs such as predictions, recommendations, or decisions. Chatbots, recommendation engines, hiring algorithms, medical AI, autonomous vehicles, credit scoring: all covered. It has limited exceptions (military and defence uses, law enforcement in specific cases, AI developed solely for scientific research).
Applicability: The Act applies to: (1) EU providers deploying AI in the EU. (2) Providers outside the EU deploying AI to EU users. (3) Importers/distributors of AI systems. (4) Users of high-risk AI systems. If you affect EU residents, you're covered.
EU AI Act Timeline: Key Enforcement Dates
| Date | Requirement | Who Must Comply |
|---|---|---|
| August 2024 | Act enters into force | All AI providers |
| February 2025 | Prohibited AI practices banned | All AI providers |
| August 2025 | GPAI model obligations effective (transparency, copyright policy) | Providers of general-purpose AI models (GPT-4, Claude, Gemini, etc.) |
| August 2026 | High-risk AI requirements effective (risk management, eval, certification) | Providers of high-risk AI systems |
| August 2027 | High-risk rules extend to AI embedded in products regulated under existing EU product legislation (Annex I); GPAI models placed on the market before August 2025 must comply | Providers of regulated products with AI components; providers of pre-existing GPAI models |
Current status (Feb 2026): Prohibited practices enforcement is active. GPAI obligations just took effect (August 2025). High-risk requirements will be enforced starting August 2026 (6 months away). Prepare now.
Risk Classification Deep Dive: Understanding the Four Tiers
Tier 1: Prohibited (Unacceptable Risk)
Examples: Social scoring systems, emotion recognition in workplaces and schools, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), subliminal manipulation techniques, exploitative AI that manipulates children or vulnerable people.
Requirement: Don't deploy. Period. No exceptions (except specified law enforcement cases). If you're providing these, you've been violating the law since February 2025.
Tier 2: High-Risk Systems
Defined as: AI systems that could cause significant harm to health, safety, or fundamental rights (privacy, non-discrimination, due process). Annex III of the Act lists high-risk use cases across eight broad areas. Key ones:
Biometric systems: Facial recognition, fingerprint recognition. High-risk because they affect privacy and identity.
Safety-critical systems: Autonomous vehicles, medical devices, robots. High-risk because failures cause physical harm.
Employment and education: Hiring algorithms, resume screening, exam proctoring. High-risk because they affect livelihoods and access.
Credit and benefits: Loan decisions, credit scoring, welfare eligibility. High-risk because they affect financial inclusion.
Justice and law enforcement: Criminal risk assessment, facial recognition for law enforcement, predictive policing. High-risk because they affect liberty.
Critical infrastructure: Power grids, water systems, healthcare. High-risk because failures affect public safety.
Requirement: Comply with Articles 9-15 (detailed below). This includes comprehensive risk management, data governance, technical documentation, evaluation, testing, logging, transparency, human oversight, accuracy/robustness/cybersecurity requirements.
Tier 3: Limited-Risk (Transparency)
Examples: Chatbots, emotion recognition and biometric categorisation systems not otherwise classified as high-risk, and AI that generates or manipulates content (deepfakes).
Requirement: Transparency. Users must know they're interacting with AI, and AI-generated or manipulated content must be labeled as such. This is lighter than the high-risk regime but still mandatory.
Tier 4: Minimal Risk (Voluntary)
Examples: Spam filters, autocomplete, game AIs, narrowly scoped systems with minimal harm potential.
Requirement: None beyond existing law. Codes of conduct and best practices are encouraged but voluntary. Note that if a system drifts into another category (e.g., a spam filter repurposed to screen employees), the corresponding requirements apply.
High-Risk AI System Requirements: Articles 9-15 (August 2026 Deadline)
Article 9: Risk Management System
Requirement: Establish a documented risk management process. You must: (1) Identify risks: What can go wrong? Bias, safety failures, privacy breaches, manipulation, cybersecurity vulnerabilities. (2) Assess likelihood and severity: How likely is each risk? How bad are consequences? (3) Implement controls: For each risk, what do you do to prevent it? (4) Test mitigation: Does your control work? (5) Monitor and iterate: Does the risk still exist? How do you monitor it?
Eval role: Risk identification and testing are primarily eval activities. Your risk management system maps to your eval program.
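The five-step process above can be sketched as a lightweight risk register. This is a minimal illustration, not a structure mandated by the Act: the `Risk` fields, the 1-5 likelihood/severity scales, and the score threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str                  # step 1: identified risk (e.g. "demographic bias")
    likelihood: int            # step 2: 1 (rare) .. 5 (near-certain)
    severity: int              # step 2: 1 (negligible) .. 5 (critical)
    control: str               # step 3: mitigation in place
    test_passed: bool = False  # step 4: did the mitigation test pass?
    monitor: str = ""          # step 5: how the risk is watched in production

    @property
    def score(self) -> int:
        """Simple likelihood x severity score used for prioritisation."""
        return self.likelihood * self.severity

def open_items(register: list[Risk], threshold: int = 9) -> list[Risk]:
    """Risks above the score threshold whose mitigation test has not passed."""
    return [r for r in register if r.score >= threshold and not r.test_passed]

register = [
    Risk("demographic bias in scoring", 4, 4, "reweighted training data",
         test_passed=True, monitor="weekly fairness eval"),
    Risk("prompt injection", 3, 4, "input filtering",
         test_passed=False, monitor="red-team suite in CI"),
]
print([r.name for r in open_items(register)])  # → ['prompt injection']
```

The register doubles as documentation: each row records identification, assessment, control, test status, and monitoring, which is exactly the evidence an auditor asks for.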
Article 10: Data Governance
Requirement: Data used to train, validate, or test AI must be: (1) High quality: Accurate, complete, representative. (2) Documented: Where did it come from? Any known issues? (3) Filtered: No prohibited data (e.g., personal data without consent). (4) Diverse: Represents all relevant groups (gender, age, ethnicity, etc.). This prevents bias. (5) Preserved: Kept for audit trail and testing.
Eval role: Evaluate data quality. Test for bias across demographic groups. Audit data provenance.
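One concrete bias test in a data-governance audit is a disparate-impact check across demographic groups. The four-fifths (80%) ratio used below is a common screening heuristic from employment-law practice, not a threshold set by the Act, and the data is synthetic.

```python
from collections import defaultdict

def selection_rates(records):
    """records: (group, selected) pairs -> selection rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Illustrative synthetic data: group A selected 8/10, group B 4/10.
data = [("A", True)] * 8 + [("A", False)] * 2 + \
       [("B", True)] * 4 + [("B", False)] * 6
ratio = disparate_impact(data)
print(f"{ratio:.2f}", "flag" if ratio < 0.8 else "ok")  # → 0.50 flag
```

The same check runs per attribute (gender, age band, etc.) and the per-group rates go straight into the data-governance documentation.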
Article 11: Technical Documentation
Requirement: Create detailed documentation covering: (1) System design and architecture. (2) How it processes data and makes decisions. (3) Intended use and foreseeable misuse. (4) Known limitations and failure modes. (5) Risk management approach and mitigation measures. (6) Performance data (accuracy, fairness, robustness). (7) Evaluation and testing results. (8) Audit logs.
What to document: A high-risk AI technical documentation package is typically 50-150 pages. It includes: system description, data governance plan, risk assessment, eval protocol, eval results, limitations, training procedure, deployment instructions, monitoring plan.
Article 12: Logging and Monitoring
Requirement: Systems must automatically log: (1) How the system is used. (2) What decisions it made. (3) What data it processed. (4) Any errors or anomalies. Logs must be: (a) Retrievable for audit. (b) Retained for an appropriate period: at least six months under Article 19, longer where other EU or national law requires. (c) Protected from tampering.
Why: Regulators can audit logs to check compliance. Users can request logs about them.
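Tamper protection can be approximated by hash-chaining log entries, so that editing any past record breaks verification. This is a minimal in-memory sketch; field names are illustrative, and a real system would persist entries to append-only storage.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only decision log where each entry embeds the previous hash."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def record(self, inputs: dict, decision: str):
        entry = {"ts": time.time(), "inputs": inputs,
                 "decision": decision, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._prev = digest

    def verify(self) -> bool:
        """Recompute the whole chain; False means a record was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record({"applicant": "a-123"}, "approve")
log.record({"applicant": "a-124"}, "refer-to-human")
print(log.verify())                   # True: chain intact
log.entries[0]["decision"] = "deny"   # simulate tampering
print(log.verify())                   # False: tampering detected
```

Because each hash covers the previous one, an auditor only needs the final hash (anchored somewhere external) to detect rewrites anywhere in the history.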
Article 13: Transparency
Requirement: Users of the AI system must be told: (1) That they're interacting with AI. (2) What the AI is making decisions about. (3) How to contest decisions. (4) Key system characteristics and limitations. For high-risk systems, this is mandatory. For limited-risk systems, lighter transparency duties apply under Article 50.
Article 14: Human Oversight
Requirement: Humans must be in the loop. Specifically: (1) Human review: Important AI decisions must be reviewed/approved by a human before execution. (2) Meaningful override: Humans must be able to easily override/reject AI recommendations. (3) Skilled operators: People operating the AI must understand its limitations and be trained. (4) Proportional intervention: The riskier the decision, the more human review required.
Example: AI that recommends hiring candidates is high-risk. Requirement: each hiring recommendation is reviewed by a human who can reject it. The human must understand that the AI is biased toward certain backgrounds and can make mistakes.
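The oversight rules above can be expressed as a small routing gate: high-risk or low-confidence recommendations go to a human, and the reviewer's decision always wins. The tiers, confidence threshold, and routing policy below are assumptions for illustration.

```python
def route(recommendation: str, confidence: float, risk_tier: str) -> str:
    """Decide whether an AI recommendation executes or goes to a human."""
    if risk_tier == "high":
        return "human_review"        # high-risk decisions: always reviewed
    if confidence < 0.9:
        return "human_review"        # uncertain recommendation: escalate
    return "auto_execute"

def apply_review(recommendation: str, reviewer_decision):
    """A reviewer's decision always overrides the AI recommendation."""
    return reviewer_decision if reviewer_decision is not None else recommendation

print(route("hire", 0.95, "high"))      # hiring is high-risk -> human_review
print(route("show_ad", 0.95, "minimal"))  # low stakes, confident -> auto_execute
print(apply_review("hire", "reject"))     # human override wins -> reject
```

The key design property for Article 14 is that the override path is structural: no code path lets a high-risk recommendation execute without a human decision attached.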
Article 15: Accuracy, Robustness, Cybersecurity
Requirement: Your AI must be: (1) Accurate: High accuracy relative to the task. What's "high" depends on use case. (2) Robust: Works reliably even when conditions change (adversarial inputs, distribution shift, etc.). (3) Cybersecure: Resistant to attacks that could compromise decision-making or privacy.
How demonstrated: Through evaluation. You must show through testing that your system meets these requirements. This is where most of your eval work lives—proving Articles 9-15 through systematic evaluation.
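A robustness check of this kind can be as simple as comparing accuracy on clean inputs against perturbed copies of the same inputs. The toy threshold classifier, noise model, and the idea of an acceptance budget below are assumptions for illustration, not figures from the Act.

```python
import random

def evaluate(model, dataset):
    """Fraction of (x, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def perturb(x, noise=0.3, rng=random.Random(0)):
    """Add bounded noise to simulate distribution shift."""
    return x + rng.uniform(-noise, noise)

def robustness_gap(model, dataset):
    """Accuracy on clean data, on shifted data, and the gap between them."""
    clean = evaluate(model, dataset)
    shifted = evaluate(model, [(perturb(x), y) for x, y in dataset])
    return clean, shifted, clean - shifted

model = lambda x: x > 0.5                        # toy threshold classifier
data = [(i / 10, i / 10 > 0.5) for i in range(11)]
clean, shifted, gap = robustness_gap(model, data)
print(f"clean={clean:.2f} shifted={shifted:.2f} gap={gap:.2f}")
```

In practice the perturbation suite would include adversarial inputs and realistic domain shifts, and the documented pass criterion is the maximum gap you accept for the use case.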
GPAI Model Obligations: Article 53 (Effective August 2025)
GPAI = General-Purpose AI. These are large models (like GPT-4, Claude, Gemini) that can be adapted for many uses. Providers must:
1. Technical Documentation
Publish (or make available to regulators) documentation on: (1) System design and training process. (2) Capabilities and limitations. (3) Known risks (hallucination, bias, jailbreaking vulnerability). (4) Test results on standard benchmarks.
2. Copyright Compliance
Document what copyrighted data was used in training. Provide tools for copyright holders to audit. This is new—reflects EU concern about AI training on copyrighted content without permission.
3. Summaries of Evaluation Results
Publish summaries of: (1) How the model was evaluated (benchmark names, methodologies). (2) Evaluation results (scores on standard benchmarks). (3) Known failure modes (where the model performs poorly). This must be published before the model is released or available to downstream providers.
4. Cooperation with Regulators
If regulators request information about your GPAI model, you must provide it. This enables enforcement.
Practical impact: If you're providing a large language model, you must publish evaluation results. You can't release a model without documenting how well it works (or doesn't). This drives transparency and prevents models being deployed without anyone knowing their limitations.
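A published evaluation summary can double as a machine-readable artifact that downstream providers validate before integrating a model. The schema, field names, model id, and all scores below are placeholders, not a format prescribed by Article 53.

```python
# Hypothetical GPAI evaluation summary; every value here is a placeholder.
summary = {
    "model": "example-model-v1",
    "evaluations": [
        {"benchmark": "MMLU", "metric": "accuracy", "score": 0.71},
        {"benchmark": "TruthfulQA", "metric": "truthful_rate", "score": 0.58},
    ],
    "known_failure_modes": [
        "hallucinates citations under long-context prompts",
        "degraded accuracy on low-resource languages",
    ],
    "methodology": "5-shot, temperature 0, public test splits",
}

def validate(s: dict) -> list[str]:
    """Flag missing fields a downstream provider would need before release."""
    required = ["model", "evaluations", "known_failure_modes", "methodology"]
    return [k for k in required if not s.get(k)]

print(validate(summary))  # → [] when the summary is complete
```

Gating the release pipeline on a check like `validate` is one way to enforce "no publication, no release" internally.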
Notified Body and Conformity Assessment: Third-Party Audits
For certain high-risk AI systems (notably remote biometric identification), third-party conformity assessment is required; most other high-risk systems may be self-assessed under internal control, though providers can still seek certification. A "notified body" is an organization (Bureau Veritas, SGS, etc.) designated by an EU member state to audit AI systems for conformity with the Act.
What a Notified Body Does
(1) Reviews your risk management system. (2) Audits your technical documentation. (3) Validates your evaluation methodology and results. (4) Tests your system or reviews your testing. (5) Issues a certificate of conformity if compliant; the system can then carry the CE marking. (6) Can conduct follow-up audits.
Cost and Timeline
Cost: typically €50K-500K depending on system complexity. For a complex autonomous vehicle system, closer to €500K. For a medium-complexity hiring algorithm, €100K-200K. For a simple loan decision system, €50K-100K.
Timeline: 3-6 months from start to certification (if you're well-prepared).
Strategy
Some companies seek certification (builds credibility), others don't (adds cost). The Act doesn't require certification for most high-risk systems—it requires compliance, not third-party audit. But regulators view certification favorably. If you're concerned about enforcement risk, get certified. If you're confident in your compliance, self-declaration is enough (but more risky).
Mapping Your Eval Program to EU AI Act: Where Your Work Fits
Article 9 (Risk Management) ← Risk Assessment + Testing
Your eval program identifies risks (bias, safety, privacy, security). Your testing validates that risk controls work. The risk management system = your eval framework.
Article 10 (Data Governance) ← Data Quality Eval
You evaluate data quality, bias, diversity. Documentation of data (source, quality, demographics) is your eval report.
Article 11 (Technical Documentation) ← Your Eval Reports
Your eval results (accuracy, fairness, robustness, failure modes) go directly into technical documentation.
Article 12 (Logging/Monitoring) ← Production Eval
Your system logs for auditing. You continuously monitor eval metrics in production. This serves both compliance (regulatory audit) and safety (catch drift).
Article 13 (Transparency) ← Model Card / Eval Summary
You publish summaries of what your model does and doesn't do well. Your eval findings become transparency statements for users.
Article 14 (Human Oversight) ← Your Decision on Human-in-the-Loop
Your eval design includes checking whether human oversight is needed. You define review thresholds based on risk.
Article 15 (Accuracy/Robustness/Security) ← Your Eval Metrics
You measure accuracy, robustness (adversarial testing, domain shift), and cybersecurity vulnerabilities. These become compliance evidence.
Penalties and Enforcement: What Happens If You Violate
Prohibited AI (Tier 1) Violations
Fine: up to €35 million OR 7% of global annual revenue, whichever is higher. No exceptions. If you deploy prohibited AI, you've been breaking the law since February 2025. Regulators take this seriously.
High-Risk AI (Tier 2) Violations
Fine: up to €15 million OR 3% of global annual revenue, whichever is higher. This applies to violating Articles 9-15. Examples: (1) No risk management system. (2) Poor quality data. (3) No eval or insufficient eval. (4) No human oversight. (5) Not meeting accuracy/robustness requirements. (6) No logging/monitoring. (7) Lack of transparency.
GPAI Violations (Article 53)
Fine: €15 million OR 3% of global revenue. Example: released a large language model without publishing evaluation results.
Transparency/Compliance Violations (Other)
Fine: up to €15 million OR 3% of global annual revenue, whichever is higher. Example: didn't tell users they were interacting with AI. Supplying incorrect or misleading information to authorities carries up to €7.5 million OR 1%.
Enforcement
Who enforces? National market surveillance authorities in each EU member state; enforcement intensity is expected to vary across member states. These are government bodies empowered to: (1) Investigate complaints. (2) Audit companies. (3) Issue fines. (4) Order systems to be withdrawn from the market. (5) Refer cases for prosecution in extreme cases.
Additionally, the European Commission's AI Office oversees general-purpose AI model providers and can fine them directly.
Practical Compliance Strategy: Step-by-Step
Step 1: Audit Your AI Systems (Now)
(1) List all AI systems your company provides or operates. (2) For each, determine risk category: prohibited? high-risk? limited-risk? minimal-risk? (3) Verify: are you currently violating prohibitions? (If yes, fix immediately.)
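Step 1 can start as a spreadsheet, but a first-pass triage helper makes the inventory repeatable. The keyword lists and routing logic below are illustrative screening assumptions only; borderline systems need legal review against the Act's actual categories.

```python
# Illustrative screening lists; not the Act's legal definitions.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"hiring", "credit", "biometrics", "education",
                     "law enforcement", "critical infrastructure", "medical"}

def triage(use: str, domain: str, interacts_with_users: bool) -> str:
    """First-pass risk-tier guess for one AI system in the inventory."""
    if use in PROHIBITED_USES:
        return "prohibited"          # stop deployment immediately
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"           # Articles 9-15 compliance plan needed
    if interacts_with_users:
        return "limited-risk"        # at minimum, disclose AI interaction
    return "minimal-risk"

inventory = [
    ("resume screening", "hiring", False),
    ("support chatbot", "customer service", True),
    ("spam filter", "email", False),
]
print([triage(u, d, i) for u, d, i in inventory])
# → ['high-risk', 'limited-risk', 'minimal-risk']
```

The output is a worklist: every "high-risk" entry feeds into Step 2, every "prohibited" entry is an immediate stop-ship.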
Step 2: For High-Risk Systems, Build Compliance Plan (Now)
Timeline to the August 2026 deadline (from Feb 2026): 6 months. For each high-risk system: (1) Design risk management system (map to eval program). (2) Audit data governance. (3) Create technical documentation. (4) Design eval protocol. (5) Run evaluation and document results. (6) Implement logging/monitoring. (7) Define human oversight. (8) Decide on certification (notified body audit).
Resource estimate: A single high-risk system compliance project = 6-12 people × 6-12 months = 36-144 person-months (3-12 person-years). This is significant. Start now.
Step 3: For GPAI Models, Publish Eval Results (By August 2025)
If you're a provider of a frontier model, you're already under GPAI obligations (as of August 2025). Publish: (1) Technical documentation. (2) Evaluation results on standard benchmarks. (3) Known limitations. (4) Copyright training data disclosure.
Step 4: For Limited-Risk Systems, Implement Transparency (Now)
Even limited-risk systems need transparency. Ensure users know: (1) They're interacting with AI. (2) What the AI is doing (brief description). (3) How to appeal/override if relevant.
Step 5: Ongoing Monitoring (Always)
Even after compliance is achieved, maintain: (1) Audit logs. (2) Production monitoring. (3) Regular eval of system performance. (4) Incident response if system fails. (5) Updates to technical documentation if model changes.
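Ongoing monitoring (items 2-3 above) can be sketched as a rolling-window accuracy tracker that alerts when production performance drifts below the level documented at compliance time. The baseline, window size, and tolerance below are assumed values.

```python
from collections import deque

class DriftMonitor:
    """Alert when rolling accuracy falls below the documented baseline."""

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline            # accuracy documented at compliance time
        self.tolerance = tolerance          # allowed degradation before alerting
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> bool:
        """Record one labelled outcome; return True if a drift alert fires."""
        self.outcomes.append(correct)
        rate = sum(self.outcomes) / len(self.outcomes)
        # Require a minimum sample before alerting to avoid noisy cold starts.
        return len(self.outcomes) >= 20 and rate < self.baseline - self.tolerance

monitor = DriftMonitor(baseline=0.90)
alerts = [monitor.record(i % 10 != 0) for i in range(50)]   # ~90% accurate
print(any(alerts))   # steady performance: no alert (False)
alerts = [monitor.record(i % 2 == 0) for i in range(50)]    # drops toward ~50%
print(any(alerts))   # degraded performance: alert fires (True)
```

An alert would trigger the incident-response path (item 4) and, if the degradation persists, an update to the technical documentation (item 5).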
EU AI Act Alignment with NIST and ISO Standards
The good news: the EU AI Act requirements align with NIST AI Risk Management Framework and ISO 42001. If you're following those standards, EU AI Act compliance is partly done.
| EU AI Act | NIST AI RMF | ISO 42001 |
|---|---|---|
| Article 9: Risk Management | MAP, MEASURE, MANAGE functions | Risk Management System |
| Article 10: Data Governance | MAP (data quality assessment) | Data Governance requirements |
| Article 15: Accuracy/Robustness | MEASURE (performance metrics) | Performance requirements |
| Article 12: Logging/Monitoring | MANAGE (ongoing monitoring) | Monitoring and logging |
| Article 13: Transparency | Transparency practices | Transparency documentation |
Strategic recommendation: If you're starting compliance from scratch, implement the NIST AI RMF. It's a framework rather than a certifiable standard, but it aligns with both the EU AI Act and ISO 42001, which makes it easier to comply with multiple regimes simultaneously.
Compliance Deadline Checklist
By February 2025 (passed): Stop deploying prohibited AI systems. Audit and report any violations. This deadline has passed; if you're still in violation, fix immediately.
By August 2025 (passed): GPAI model providers must publish evaluation results and technical documentation. If you provide large models and haven't done this, get it done now.
By August 2026 (6 months away): High-risk AI systems must fully comply with Articles 9-15. This is the big deadline. If you have high-risk systems, your compliance plan should be well underway now (Feb 2026). You have 6 months to finish.
By August 2027: Transparency requirements for general-purpose AI. Start planning now.
EU AI Act Compliance and Eval Summary
- Scope: Applies globally to anyone deploying AI that affects EU residents
- Tiers: Prohibited (banned), High-risk (heavily regulated), Limited-risk (transparency required), Minimal-risk (voluntary)
- Timeline: Prohibitions effective Feb 2025. GPAI effective Aug 2025. High-risk effective Aug 2026
- High-risk requirements: Articles 9-15 cover risk management, data, documentation, logging, transparency, oversight, accuracy/robustness/security
- Eval role: Your eval program proves compliance with Articles 9-15. Risk assessment, testing, monitoring are core compliance activities
- GPAI obligations: Document and publish eval results. No hiding model limitations
- Enforcement: Fines up to €35M or 7% of global revenue. National regulators plus the European Commission's AI Office
- Notified bodies: Third-party auditors can certify compliance (optional but recommended). Cost €50K-500K
- Alignment: EU AI Act maps to NIST AI RMF and ISO 42001. Use these frameworks for simultaneous compliance
- Practical compliance cost: €200K-500K + 6-12 months per high-risk system for full compliance
Ready to Ensure EU AI Act Compliance?
Audit your systems now. Map high-risk systems. Build your compliance plan. The August 2026 deadline for high-risk AI is 6 months away. Start today.