
EU AI Act Compliance Checklist 2026: The Complete Guide

March 25, 2026 · 12 min read

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. With the full enforcement deadline of August 2, 2026 approaching rapidly, every organization that develops, deploys, imports, or distributes AI systems affecting people in the EU needs a clear compliance roadmap.

This checklist distills the entire regulation into actionable steps organized by timeline. Whether you are a startup with a single chatbot or an enterprise running dozens of AI models, use this guide to systematically close compliance gaps before it is too late.

Key Enforcement Dates

Feb 2, 2025: Prohibited AI practices ban takes effect
Aug 2, 2025: GPAI model obligations apply; governance rules active
Aug 2, 2026: Full enforcement of high-risk AI system requirements, transparency duties, and penalties
Aug 2, 2027: Obligations apply for high-risk AI in products regulated under Annex I
View full interactive timeline →

Phase 1: AI Inventory and Mapping (Do This First)

You cannot comply with what you do not know exists. The foundation of EU AI Act compliance is a complete inventory of every AI system your organization touches.

  • List every AI system your organization develops, deploys, imports, or distributes. Include third-party APIs, SaaS tools with AI features, and embedded AI components.
  • Document intended purposes for each system: what decisions it supports, who it affects, what data it processes.
  • Identify your role for each system: are you a provider (developer), deployer (user), importer, or distributor? Each role has different obligations.
  • Map data flows: what data enters the system, where outputs go, who has access, and whether personal data is involved (triggering GDPR overlap).
  • Document third-party AI providers and confirm their compliance status. Request compliance declarations from vendors.
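As a concrete starting point, here is a minimal sketch of what one inventory record could look like, assuming a simple in-house Python registry; the field names are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"        # you developed the system
    DEPLOYER = "deployer"        # you use it under your authority
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str           # what decisions it supports, who it affects
    role: Role                      # your role under the Act for this system
    processes_personal_data: bool   # True triggers GDPR overlap
    third_party_vendor: str | None = None
    data_inputs: list[str] = field(default_factory=list)
    output_consumers: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="support-chatbot",
        intended_purpose="Answers customer FAQs; no automated decisions",
        role=Role.DEPLOYER,
        processes_personal_data=True,
        third_party_vendor="Acme LLM API",
        data_inputs=["customer messages"],
        output_consumers=["customers", "support agents"],
    ),
]
```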

Phase 2: Risk Classification

The AI Act assigns obligations based on risk level. Misclassifying a system can lead either to an unnecessary compliance burden or, worse, to regulatory penalties for non-compliance.

  • Screen for prohibited practices (Art. 5): social scoring, exploiting vulnerabilities, subliminal manipulation, untargeted facial recognition scraping, emotion recognition in workplaces/schools, real-time remote biometric identification in public spaces.
  • Evaluate against Annex III high-risk categories: biometrics, critical infrastructure, education, employment, essential services (credit, insurance, public benefits), law enforcement, migration, and justice. Any system falling into one of these categories requires full high-risk compliance.
  • Identify limited-risk transparency obligations (Art. 50): chatbots, AI-generated content, emotion recognition, and biometric categorization systems require user-facing disclosures.
  • Document classification rationale for every system. Regulators may challenge your classification, so keep a clear audit trail.
Use our Risk Assessment tool to classify your AI systems →
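To make the tiering concrete, here is a minimal Python sketch of the screening order described above. The category tags are abbreviated placeholders, and real classification requires legal review, not a lookup table.

```python
PROHIBITED_PRACTICES = {        # Art. 5 (abbreviated)
    "social_scoring", "subliminal_manipulation",
    "workplace_emotion_recognition", "untargeted_face_scraping",
}
HIGH_RISK_CATEGORIES = {        # Annex III (abbreviated)
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
TRANSPARENCY_TRIGGERS = {       # Art. 50
    "chatbot", "ai_generated_content", "emotion_recognition",
    "biometric_categorisation",
}

def classify(tags: set[str]) -> str:
    """Return the highest applicable risk tier for a system's tags."""
    if tags & PROHIBITED_PRACTICES:
        return "prohibited"      # must not be placed on the EU market
    if tags & HIGH_RISK_CATEGORIES:
        return "high-risk"       # full Art. 8-15 compliance required
    if tags & TRANSPARENCY_TRIGGERS:
        return "limited-risk"    # Art. 50 disclosure duties
    return "minimal-risk"

print(classify({"chatbot", "employment"}))  # -> high-risk
```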

Phase 3: High-Risk System Compliance (Art. 8-15)

If any AI system is classified as high-risk, these are the mandatory requirements. Each one must be documented and demonstrable.

  • Risk Management System (Art. 9): implement a continuous, iterative process covering risk identification, analysis, estimation, evaluation, and mitigation throughout the entire AI lifecycle.
  • Data Governance (Art. 10): ensure training, validation, and testing datasets meet quality criteria. Document data provenance, preparation methods, labeling processes, and known biases or gaps.
  • Technical Documentation (Art. 11): prepare comprehensive documentation covering system design, architecture, algorithms, training processes, performance metrics, known limitations, and intended use.
  • Record-Keeping and Logging (Art. 12): implement automatic logging of system operations enabling traceability. Define log retention periods appropriate to the system's purpose and risk level. A minimal logging sketch follows this list.
  • Transparency and Information (Art. 13): provide clear instructions for deployers covering capabilities, limitations, accuracy levels, foreseeable misuse, and maintenance requirements.
  • Human Oversight (Art. 14): design systems enabling effective human oversight. Humans must be able to understand outputs, detect anomalies, decide not to use the system, and intervene or stop operation.
  • Accuracy, Robustness, and Cybersecurity (Art. 15): declare accuracy levels, implement resilience against errors and adversarial attacks, and protect against cybersecurity threats relevant to the system.
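Here is the logging sketch referenced under Art. 12 above, assuming a Python-based system and using only the standard library; the field names and the one-record-per-decision design are illustrative, not prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# JSON-formatted event logs keep records machine-readable and traceable.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_decisions.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str, model_version: str) -> None:
    """Append one traceable record per automated decision (Art. 12 sketch)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
    }))

log_decision("credit-scorer-v2", "application/8431", "declined", "2.3.1")
```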

Phase 4: Conformity Assessment and Registration

  • Determine assessment type (Art. 43): most high-risk systems qualify for internal conformity assessment, but biometric systems (Annex III, point 1) require third-party assessment by a notified body unless harmonised standards are applied in full.
  • Conduct conformity assessment: verify all Art. 8-15 requirements are met. Document findings, test results, and any corrective actions taken.
  • Prepare EU Declaration of Conformity (Art. 47): draft and sign the formal declaration for each high-risk AI system.
  • Affix CE marking (Art. 48): apply the marking visibly before placing the system on the market.
  • Register in the EU database (Art. 49): submit registration for high-risk AI systems before deployment. Providers must register their systems; deployers that are public authorities or bodies acting on their behalf have registration duties as well.

Phase 5: Transparency and User-Facing Obligations

Even minimal-risk and limited-risk AI systems may have transparency duties under Article 50.

  • AI interaction disclosure: inform users when they interact with an AI system (chatbots, virtual assistants) unless it is obvious from context.
  • AI-generated content labeling: mark synthetic text, images, audio, and video as AI-generated using machine-readable metadata where technically feasible; a minimal labeling sketch follows below.
  • Deepfake disclosure: clearly label deepfakes (AI-manipulated or generated content depicting real persons or events).
  • Update website and product documentation: add clear AI usage disclosures to your website, app, and user-facing materials.
Scan your website for transparency compliance →
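As one illustration of machine-readable labeling, the sketch below embeds an AI-generated marker in a PNG's metadata. It assumes the Pillow library is installed, and the key names are made up for illustration; production labeling should follow an established provenance standard such as C2PA where supported.

```python
# Assumes Pillow (pip install Pillow). Key names are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_png_as_ai_generated(src: str, dst: str, generator: str) -> None:
    """Copy a PNG, adding text chunks that flag it as AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)

Image.new("RGB", (64, 64)).save("output.png")  # stand-in for a generated image
label_png_as_ai_generated("output.png", "output_labeled.png", "in-house diffusion model")
```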

Phase 6: GPAI Model Obligations (If Applicable)

If you develop or deploy general-purpose AI models (LLMs, foundation models, multi-modal models), additional obligations apply from August 2, 2025 onward.

  • Technical documentation (Art. 53): maintain model cards covering architecture, training methodology, data sources, evaluation results, and known limitations.
  • Copyright compliance (Art. 53): put in place a policy to comply with EU copyright law, including honoring text-and-data-mining opt-outs asserted by rights holders, and publish a sufficiently detailed summary of the training content.
  • Systemic risk models (Art. 55): if your GPAI model has systemic risk (trained with >10^25 FLOPs or designated by the AI Office), conduct adversarial testing, implement incident reporting, and ensure adequate cybersecurity.
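To gauge whether a model approaches the systemic-risk threshold, the common 6 × parameters × training-tokens rule of thumb for dense transformer training gives a first-pass estimate. This heuristic is an assumption, not the AI Office's methodology; treat the result as a rough screen only.

```python
# Rough training-compute check against the 10^25 FLOP threshold.
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(params: float, training_tokens: float) -> float:
    """Common 6ND approximation for dense transformer training compute."""
    return 6 * params * training_tokens

flops = estimated_training_flops(params=70e9, training_tokens=15e12)
print(f"{flops:.2e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 70B params x 15T tokens -> 6.30e+24 FLOPs, below the 1e25 threshold
```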

Phase 7: Governance and Ongoing Monitoring

Compliance is not a one-time project. The AI Act requires continuous monitoring, incident response, and organizational governance.

  • Post-market monitoring (Art. 72): implement proportionate monitoring systems to collect and analyze data on AI performance throughout its lifecycle.
  • Serious incident reporting (Art. 73): establish procedures to report serious incidents to market surveillance authorities without undue delay, and no later than 15 days after becoming aware. A simple deadline sketch follows this list.
  • AI literacy training (Art. 4): ensure staff involved in AI development and deployment have sufficient AI literacy and training appropriate to their role and the context of use.
  • Fundamental Rights Impact Assessment (Art. 27): deployers of high-risk AI must conduct and document an assessment of impact on fundamental rights before putting the system into use.
  • Regular compliance audits: schedule periodic reviews to ensure continued compliance as your AI systems evolve and regulations are updated.
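For the Art. 73 window referenced above, a trivial Python sketch. Note that this simplifies the rule: 15 days is the outer bound, and some incident types (for example, those involving critical infrastructure or a person's death) carry shorter deadlines.

```python
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)  # outer bound under Art. 73

def reporting_deadline(became_aware: date) -> date:
    """Latest permissible reporting date for a serious incident."""
    return became_aware + REPORTING_WINDOW

print(reporting_deadline(date(2026, 9, 1)))  # 2026-09-16
```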

Penalties for Non-Compliance

The AI Act introduces significant penalties to ensure compliance. The fines are structured in three tiers based on the severity of the violation:

Prohibited AI practices (Art. 5): up to €35M or 7% of global annual turnover, whichever is higher
Violations of high-risk system requirements, GPAI obligations, or notified body duties: up to €15M or 3% of global annual turnover, whichever is higher
Supplying incorrect information to authorities: up to €7.5M or 1% of global annual turnover, whichever is higher

For SMEs and startups, each fine is instead capped at the lower of the fixed amount and the percentage of turnover. The AI Act also provides for proportionality in enforcement.
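A worked example of the SME cap, using the tier-1 figures above; the function and the turnover figure are illustrative.

```python
def tier1_fine_cap(turnover_eur: float, is_sme: bool) -> float:
    """Tier-1 cap: higher of EUR 35M / 7% of turnover, lower of the two for SMEs."""
    fixed, pct = 35_000_000, 0.07 * turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

print(f"{tier1_fine_cap(100e6, is_sme=True):,.0f}")   # 7,000,000
print(f"{tier1_fine_cap(100e6, is_sme=False):,.0f}")  # 35,000,000
```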

Start Your Compliance Journey Today

Do not wait until the August 2026 deadline. CompliPilot scans your website and AI systems for EU AI Act and GDPR compliance gaps, giving you a prioritized action plan with specific fix recommendations.