AI Risk Assessment: Classify Your AI Systems

The EU AI Act takes a risk-based approach to regulation. Understanding where your AI systems fall within its risk tiers is the essential first step toward compliance. Below you will find each risk category with examples and the corresponding obligations.

Unacceptable Risk

These AI systems are completely prohibited under the EU AI Act.

Examples:

  • Social scoring systems by governments
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • AI that uses manipulative or subliminal techniques to materially distort a person's behavior
  • AI that exploits the vulnerabilities of specific groups (e.g., children, older people, persons with disabilities)
  • Emotion recognition in workplaces and educational institutions (with exceptions)

Compliance obligation: Prohibited. These systems may not be placed on the market or used in the EU and must be discontinued immediately.

High Risk

These AI systems require extensive compliance measures and ongoing monitoring.

Examples:

  • AI in critical infrastructure (energy, transport, water)
  • AI used for educational access and vocational training
  • AI for employment decisions (hiring, promotion, termination)
  • AI for access to essential services (credit scoring, insurance)
  • AI in law enforcement (predictive policing, evidence assessment)
  • AI for migration and border control
  • AI for administration of justice

Compliance obligation: Conformity assessment, risk management system, data governance, technical documentation, human oversight, EU database registration.

Limited Risk

These AI systems have transparency obligations — users must know they are interacting with AI.

Examples:

  • Chatbots and virtual assistants
  • Deepfake generators
  • AI-generated text, images, or audio
  • Emotion recognition systems (where permitted)
  • Biometric categorization systems

Compliance obligation: Transparency disclosure. Users must be informed that they are interacting with AI or that content is AI-generated.
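One way to satisfy the disclosure obligation for a chatbot is to attach a clear notice before the conversation begins. The sketch below is illustrative only: the wording, the first-turn placement, and the `generate_reply` helper are assumptions, not text or a mechanism mandated by the EU AI Act.

```python
# Minimal sketch of an AI disclosure for a chatbot (illustrative assumptions).

AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_reply(user_message: str) -> str:
    # Placeholder for the actual model call; echoes for demonstration.
    return f"(model reply to: {user_message})"

def respond(user_message: str, is_first_turn: bool) -> str:
    """Return the bot reply, prefixing the AI disclosure on the first turn."""
    reply = generate_reply(user_message)
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

In practice the disclosure could equally be a persistent UI label; the point is that the user is informed before or at the start of the interaction.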

Minimal Risk

Most AI systems fall here. No additional requirements beyond existing laws.

Examples:

  • Spam filters
  • AI in video games
  • Inventory management systems
  • AI-powered search engines
  • Grammar and spell checkers

Compliance obligation: No additional requirements. Encouraged to adopt voluntary codes of conduct.

How to Assess Your AI Risk Level

Follow these steps to classify your AI systems:

  1. List all AI systems: Document every AI component in your organization, including third-party AI services.
  2. Check against prohibited uses: Verify none of your systems fall under the unacceptable risk category.
  3. Map to Annex III: The EU AI Act Annex III lists specific high-risk use cases. Check if your AI matches any.
  4. Evaluate transparency needs: For AI that interacts with users or generates content, ensure disclosure mechanisms.
  5. Document your assessment: Keep records of your classification decisions and reasoning.
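The steps above amount to an ordered screening: check for prohibited uses first, then high-risk (Annex III) matches, then transparency triggers, and default to minimal risk. The sketch below shows that flow; the use-case tags and example systems are hypothetical simplifications, and a real assessment must be made against the actual text of the Act and Annex III.

```python
# Sketch of the assessment steps as an ordered classifier.
# Tags are illustrative stand-ins for the Act's actual criteria.
from dataclasses import dataclass

PROHIBITED_USES = {"social_scoring", "realtime_biometric_id", "behavior_manipulation"}
ANNEX_III_USES = {"hiring", "credit_scoring", "critical_infrastructure", "law_enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "content_generation"}

@dataclass
class AISystem:
    name: str
    uses: set  # tags describing what the system does

def classify(system: AISystem) -> str:
    """Apply the screening steps in order; the first match determines the tier."""
    if system.uses & PROHIBITED_USES:
        return "unacceptable"
    if system.uses & ANNEX_III_USES:
        return "high"
    if system.uses & TRANSPARENCY_USES:
        return "limited"
    return "minimal"

# Step 1: inventory every AI system, then record each classification (step 5).
inventory = [
    AISystem("resume screener", {"hiring"}),
    AISystem("support chatbot", {"chatbot"}),
    AISystem("spam filter", {"spam_filtering"}),
]
assessment = {s.name: classify(s) for s in inventory}
```

Keeping the resulting `assessment` record alongside the reasoning for each tag satisfies the documentation step.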

General-Purpose AI (GPAI) Models

The EU AI Act also regulates general-purpose AI models (like large language models). Providers of GPAI models must maintain technical documentation, comply with EU copyright law, and publish a summary of training data used. GPAI models with systemic risk face additional obligations including model evaluations, adversarial testing, and incident reporting.
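The GPAI obligations above split into a base set for all providers and an additional set for models with systemic risk. A minimal sketch of that split, paraphrasing the obligations listed in this article rather than quoting the Act:

```python
# Checklist sketch of GPAI provider obligations (paraphrased, not Act text).

GPAI_BASE_OBLIGATIONS = [
    "maintain technical documentation",
    "comply with EU copyright law",
    "publish a summary of training data",
]

GPAI_SYSTEMIC_RISK_EXTRAS = [
    "model evaluations",
    "adversarial testing",
    "incident reporting",
]

def gpai_obligations(systemic_risk: bool) -> list:
    """Return the applicable obligations for a GPAI model provider."""
    obligations = list(GPAI_BASE_OBLIGATIONS)
    if systemic_risk:
        obligations += GPAI_SYSTEMIC_RISK_EXTRAS
    return obligations
```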

Automated Risk Scanning

CompliPilot can automatically scan your website to detect AI usage patterns and check compliance against EU AI Act and GDPR requirements.
