EU AI Act Compliance Checklist 2026
The EU AI Act's main obligations become applicable in August 2026, and organizations using artificial intelligence need a clear roadmap to compliance. This checklist breaks down every key requirement into actionable steps, organized by priority and timeline.
Whether you are a startup deploying a chatbot or an enterprise running multiple high-risk AI systems, this guide will help you systematically address each compliance requirement before the deadline.
Phase 1: AI System Inventory (Start Now)
Before you can comply, you need to know exactly what AI systems your organization uses. This is the foundational step that everything else builds upon.
- Catalog all AI systems: List every AI component your organization develops, deploys, distributes, or imports. This includes third-party AI services, APIs, and embedded AI features.
- Document AI purposes: For each system, document its intended purpose, the data it processes, who it affects, and what decisions it makes or assists with.
- Identify AI providers: If you use third-party AI, document the provider, their compliance status, and any contractual obligations.
- Map data flows: Understand what data enters each AI system, how it is processed, where results go, and who has access.
Phase 2: Risk Classification
The EU AI Act uses a risk-based approach. Classifying your systems correctly determines your compliance obligations.
- Check prohibited uses: Verify that none of your AI systems perform prohibited functions: social scoring, unauthorized biometric identification, manipulation, or exploitation of vulnerable groups.
- Evaluate against Annex III: The EU AI Act's Annex III lists specific high-risk use cases across sectors including critical infrastructure, education, employment, essential services, law enforcement, and migration. Map each of your AI systems against this list.
- Assess transparency requirements: For AI that interacts directly with users (chatbots, recommendation systems) or generates content (deepfakes, AI-generated text/images), identify transparency obligations.
- Document classification decisions: Record your reasoning for each classification. Regulators may ask why you classified a system at a particular risk level.
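The triage logic above can be sketched as a simple decision function. This is a first-pass screening aid only, not legal advice: the category sets below are simplified placeholders for the Act's actual prohibited-practices list and Annex III, and real classification needs legal review.

```python
# Simplified placeholders for the Act's lists; consult the actual legal text.
PROHIBITED_PRACTICES = {"social scoring", "manipulation",
                        "exploitation of vulnerable groups"}
ANNEX_III_AREAS = {"critical infrastructure", "education", "employment",
                   "essential services", "law enforcement", "migration"}

def triage(use_cases: set[str],
           interacts_with_users: bool,
           generates_content: bool) -> str:
    """First-pass risk triage for one AI system (illustrative, not legal advice)."""
    if use_cases & PROHIBITED_PRACTICES:
        return "prohibited"
    if use_cases & ANNEX_III_AREAS:
        return "high-risk"
    if interacts_with_users or generates_content:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

print(triage({"employment"}, True, False))  # prints: high-risk
```

Whatever tool you use, record the inputs to each decision alongside the outcome, since regulators may ask for the reasoning.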
Phase 3: High-Risk System Requirements
If any of your AI systems are classified as high-risk, you must implement the following before August 2026:
- Risk management system: Implement a continuous, iterative process to identify, analyze, estimate, and evaluate risks throughout the AI system's lifecycle. This must include testing, mitigation measures, and residual risk assessment.
- Data governance: Ensure training, validation, and testing datasets meet quality criteria. Document data collection processes, data preparation, labeling, and any data gaps or shortcomings.
- Technical documentation: Maintain comprehensive documentation covering the system's purpose, design specifications, development methodology, testing procedures, performance metrics, and known limitations.
- Record-keeping: Implement automatic logging capabilities that allow traceability of the AI system's operation. Logs must be retained for a period appropriate to the system's purpose.
- Transparency to users: Provide clear, adequate information to deployers, including the system's capabilities, limitations, and instructions for use.
- Human oversight: Design systems to enable effective human oversight. This means humans must be able to understand the system's capabilities, monitor operation, and intervene or override when necessary.
- Accuracy, robustness, cybersecurity: Ensure appropriate levels of accuracy, robustness against errors, and protection against cybersecurity threats.
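The record-keeping requirement above boils down to structured, timestamped, traceable event logs. Here is a minimal sketch using Python's standard `logging` and `json` modules; the field names (`system_id`, `input_ref`, `reviewing_operator`) are illustrative assumptions, since the Act requires traceability rather than any particular schema.

```python
import json
import logging
import sys
from datetime import datetime, timezone

# Illustrative audit logger for a high-risk AI system's decisions.
logger = logging.getLogger("ai_audit")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> str:
    """Emit one traceable audit record as a JSON line and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,          # a reference to the input, not the data itself
        "output": output,
        "reviewing_operator": operator,  # supports the human-oversight requirement
    }
    line = json.dumps(record)
    logger.info(line)
    return line

log_decision("cv-screener-01", "application-4821", "shortlisted", "j.doe")
```

Logging a reference to the input rather than the input itself keeps the audit trail useful without duplicating personal data, which matters for GDPR minimization as well.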
Phase 4: Conformity Assessment
High-risk AI systems must undergo a conformity assessment before being placed on the market or put into service.
- Self-assessment or third-party audit: Depending on the specific use case, you may need to perform an internal conformity assessment or engage a notified body for external audit.
- CE marking: After passing conformity assessment, high-risk AI systems must carry CE marking indicating compliance.
- EU database registration: Register high-risk AI systems in the EU-wide database before deployment.
- Declaration of conformity: Prepare and maintain a written EU declaration of conformity for each high-risk AI system.
Phase 5: Transparency and Disclosure
Even if your AI is not high-risk, transparency obligations may apply:
- AI interaction disclosure: If users interact with an AI system (chatbot, virtual assistant), they must be informed that they are interacting with AI, not a human.
- AI-generated content labeling: Content generated by AI (text, images, audio, video) must be labeled as AI-generated. This includes deepfakes.
- Emotion recognition disclosure: If your AI performs emotion recognition, subjects must be informed.
- Website disclosures: Update your website to clearly communicate your use of AI, what data it processes, and how users can get more information.
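Two of the obligations above, interaction disclosure and content labeling, are straightforward to wire into an application. A minimal sketch, with disclosure wording and field names that are illustrative assumptions rather than language the Act prescribes:

```python
# Illustrative wording; adapt to your product and jurisdiction.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_session(greeting: str) -> list[str]:
    """Start a chatbot session with the AI-interaction disclosure shown first."""
    return [AI_DISCLOSURE, greeting]

def label_generated(content: str) -> dict:
    """Attach a machine-readable AI-generated label to a piece of content."""
    return {"content": content, "ai_generated": True}

messages = open_session("Hi! How can I help you today?")
post = label_generated("Draft product description written by our model.")
```

The key design point is that the disclosure is emitted by the session constructor itself, so no code path can start a conversation without it.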
Phase 6: GPAI Model Requirements
If you develop or deploy general-purpose AI models (foundation models, large language models):
- Technical documentation: Maintain documentation on model architecture, training data, training process, and evaluation results.
- Copyright compliance: Comply with EU copyright law, including maintaining summaries of copyrighted training data.
- Systemic risk assessment: For GPAI models with systemic risk (generally those trained with over 10^25 FLOPs), conduct model evaluations, adversarial testing, and maintain incident reporting processes.
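The systemic-risk trigger above is a numeric threshold, so it can be checked mechanically. A sketch, using the 10^25 FLOP training-compute figure from the text; the function name and the sample figures are hypothetical, and the threshold can be updated by regulators, so treat it as a configurable value:

```python
# Training-compute threshold for the systemic-risk regime (subject to change).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def needs_systemic_risk_obligations(training_flops: float) -> bool:
    """Flag a GPAI model for evaluations, adversarial testing, and incident
    reporting when its training compute meets the systemic-risk threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(needs_systemic_risk_obligations(3e25))  # prints: True
print(needs_systemic_risk_obligations(8e23))  # prints: False
```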
Phase 7: Ongoing Compliance
Compliance is not a one-time exercise. Plan for continuous monitoring:
- Post-market monitoring: Implement systems to monitor AI performance in production, collect feedback, and identify emerging risks.
- Incident reporting: Establish procedures to report serious incidents to regulatory authorities within required timeframes.
- Regular auditing: Schedule periodic compliance reviews and scans to catch drift or new requirements.
- Staff training: Train relevant personnel on EU AI Act requirements, their responsibilities, and compliance procedures.
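For incident reporting in particular, missing a deadline is an easy failure mode, so it is worth computing the reporting date the moment an incident is logged. A minimal sketch; the reporting window is a parameter because the required timeframe varies by incident type under the Act, so look up the applicable number of days rather than hard-coding one:

```python
from datetime import date, timedelta

def reporting_deadline(incident_date: date, window_days: int) -> date:
    """Return the last date by which a serious incident must be reported.
    window_days depends on the incident type; check the Act's current text."""
    return incident_date + timedelta(days=window_days)

# Hypothetical incident with a 15-day window
print(reporting_deadline(date(2026, 9, 1), 15))  # prints: 2026-09-16
```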
Automate Your Compliance Checks
CompliPilot scans your website against EU AI Act and GDPR requirements, identifying gaps and providing specific fix recommendations. Start with a free scan to see where you stand.
Run Free Compliance Scan