AI Compliance Glossary

Key terms and definitions from the EU AI Act and GDPR explained in plain language. Bookmark this page as your go-to reference for AI compliance terminology.

A

AI Act (EU Artificial Intelligence Act)

The EU regulation (Regulation (EU) 2024/1689) establishing harmonized rules on artificial intelligence. It introduces a risk-based classification system for AI systems and sets mandatory requirements for providers and deployers. The Act entered into force on 1 August 2024, with obligations phasing in between February 2025 and August 2027.

Regulation (EU) 2024/1689

AI Deployer

Any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Deployers have specific obligations under the AI Act including transparency, human oversight, and monitoring duties.

AI Act, Art. 3(4)

AI Provider

A natural or legal person, public authority, agency, or other body that develops an AI system or general-purpose AI model and places it on the market or puts it into service under its own name or trademark. Providers bear the primary compliance obligations under the AI Act, including conformity assessment and technical documentation.

AI Act, Art. 3(3)

AI System

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

AI Act, Art. 3(1)

C

CE Marking

The conformity marking that must be affixed to high-risk AI systems before they can be placed on the EU market. It indicates that the AI system has undergone the required conformity assessment and meets all applicable requirements of the AI Act. The CE marking must be visible, legible, and indelible.

AI Act, Art. 48

Conformity Assessment

The process of verifying whether a high-risk AI system meets the requirements set out in the AI Act. Depending on the use case, it may be conducted internally by the provider or by an independent notified body, and it must be completed before the AI system is placed on the market or put into service.

AI Act, Art. 43

D

Data Controller

The natural or legal person, public authority, agency, or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data. The controller bears primary responsibility for GDPR compliance, including lawful basis, transparency, and data subject rights.

GDPR, Art. 4(7)

Data Processor

A natural or legal person, public authority, agency, or other body which processes personal data on behalf of the controller. Processors must act only on the controller's instructions and maintain appropriate security measures. A Data Processing Agreement (DPA) must be in place between controller and processor.

GDPR, Art. 4(8) & Art. 28

DPIA (Data Protection Impact Assessment)

A systematic assessment of the potential impact of data processing operations on the protection of personal data. Required under GDPR when processing is likely to result in a high risk to individuals' rights and freedoms, particularly when using new technologies, profiling, or large-scale processing of sensitive data.

GDPR, Art. 35
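The trigger list above can be expressed as a simple screening check. This is a hedged sketch only: the flag names are illustrative simplifications of the GDPR Art. 35(3) criteria, not an authoritative legal test.

```python
def dpia_required(automated_profiling_with_legal_effects: bool,
                  large_scale_special_category_data: bool,
                  large_scale_public_monitoring: bool) -> bool:
    """Simplified DPIA screening: any single Art. 35(3)-style
    trigger is enough to require a DPIA."""
    return any([automated_profiling_with_legal_effects,
                large_scale_special_category_data,
                large_scale_public_monitoring])
```

In practice the assessment is a documented legal analysis, not a boolean check; this sketch only mirrors the "any one trigger suffices" structure of the rule.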

DPO (Data Protection Officer)

An independent expert appointed by an organization to oversee GDPR compliance. Mandatory for public authorities, organizations conducting large-scale systematic monitoring, and those processing special categories of data at scale. The DPO must be involved in all issues relating to data protection and report directly to the highest management level.

GDPR, Art. 37-39

F

Fundamental Rights Impact Assessment (FRIA)

An assessment that deployers of high-risk AI systems must conduct before putting such systems into use. It evaluates the potential impact on fundamental rights, including the right to non-discrimination, privacy, protection of personal data, and effective remedy. The FRIA must be submitted to the relevant market surveillance authority.

AI Act, Art. 27

G

GDPR (General Data Protection Regulation)

The EU regulation (Regulation (EU) 2016/679) on data protection and privacy. It applies to all organizations processing personal data of individuals in the EU/EEA, regardless of the organization's location. GDPR establishes principles for data processing, grants individual rights, and imposes penalties of up to 4% of annual global turnover or 20 million EUR, whichever is higher.

Regulation (EU) 2016/679
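The fine ceiling described above is the higher of the two figures, so it can be computed as a simple maximum. A minimal sketch (the function name is illustrative, not from the regulation):

```python
def gdpr_fine_cap_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound of an Art. 83(5)-tier administrative fine:
    the higher of EUR 20 million or 4% of total worldwide
    annual turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)
```

For a company with EUR 1 billion turnover the cap is EUR 40 million (4% exceeds the 20 million floor); below EUR 500 million turnover, the 20 million floor applies.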

General-Purpose AI (GPAI) Model

An AI model, including where trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks. GPAI models have specific transparency and documentation obligations under the AI Act, with additional requirements for models posing systemic risks.

AI Act, Art. 3(63) & Art. 51-56

H

High-Risk AI System

An AI system that poses significant risks to health, safety, or fundamental rights. Defined in AI Act Annex III, high-risk categories include: biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and administration of justice. These systems face the most stringent compliance requirements.

AI Act, Art. 6 & Annex III

Human Oversight

The requirement that high-risk AI systems be designed and developed so that they can be effectively overseen by natural persons during their period of use. This includes the ability to fully understand the system's capabilities, correctly interpret outputs, decide not to use the system, and intervene or interrupt the system's operation.

AI Act, Art. 14

N

Notified Body

An independent organization designated by an EU member state to carry out third-party conformity assessments for high-risk AI systems. Notified bodies must meet strict criteria for competence, independence, and impartiality. They are required for certain categories of high-risk AI systems where self-assessment is not sufficient.

AI Act, Art. 28-39

P

Prohibited AI Practices

AI practices that are banned outright under the AI Act because they pose an unacceptable risk to fundamental rights. They include: social scoring, subliminal manipulation techniques, AI systems that exploit vulnerabilities of specific groups, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), and emotion recognition in workplaces and educational institutions.

AI Act, Art. 5

R

Regulatory Sandbox

A controlled framework set up by a competent authority that offers providers and prospective providers of AI systems the possibility to develop, train, validate, and test their AI systems under regulatory oversight. Sandboxes are designed to foster innovation while ensuring compliance. EU member states must establish at least one AI regulatory sandbox by August 2026.

AI Act, Art. 57-62

Risk Classification

The AI Act's framework for categorizing AI systems into four risk levels: unacceptable risk (banned), high risk (strict requirements), limited risk (transparency obligations), and minimal risk (no specific obligations). The classification determines which compliance requirements apply to a given AI system.

AI Act, Art. 5-7 & Annexes I-III
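The four-tier structure above can be sketched as a lookup from use case to risk level. This is only an illustration of the tiering logic: the keyword buckets are simplified examples, and real classification requires legal analysis of the concrete use case against the Act's annexes.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high risk (Art. 6 & Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"

# Illustrative buckets only -- not an exhaustive legal mapping.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"biometric identification", "employment screening",
                  "law enforcement"}
LIMITED_RISK_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskLevel:
    """Map a use case to its AI Act risk tier, checking the most
    restrictive tiers first; anything unmatched is minimal risk."""
    if use_case in PROHIBITED_USES:
        return RiskLevel.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskLevel.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

The check order matters: tiers are evaluated from most to least restrictive, mirroring how the Act's prohibitions take precedence over the high-risk and transparency regimes.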

Right to Explanation

Under GDPR, individuals have the right not to be subject to decisions based solely on automated processing (including profiling) that produce legal effects or similarly significant effects. When such processing occurs, data subjects have the right to obtain meaningful information about the logic involved, as well as the significance and envisaged consequences.

GDPR, Art. 22 & Recital 71

T

Transparency Obligation

The requirement under the AI Act that certain AI systems disclose their artificial nature to users. This applies to AI systems that interact with natural persons (such as chatbots), generate synthetic content (deepfakes, AI-generated text or images), or perform emotion recognition or biometric categorization. Users must be informed clearly and in a timely manner.

AI Act, Art. 50

Check Your Compliance Status

Understanding the terminology is the first step. Run a free scan to see how your website measures up against EU AI Act and GDPR requirements.
