
AI Act High-Risk AI Systems: Full Obligations List

Complete guide to AI Act high-risk AI systems obligations: Annex III categories, six core requirements, conformity assessment, CE marking, and GDPR interactions.

The EU AI Act places its heaviest regulatory burden on high-risk AI systems – those most likely to affect fundamental rights, safety, and democratic processes. Roughly 15% of all AI systems deployed in the EU fall into this category according to the European Commission’s impact assessment, and by 2 August 2026 every one of them must satisfy a detailed set of mandatory obligations or face penalties of up to EUR 15 million or 3% of global annual turnover.

This article breaks down exactly which systems qualify as high-risk under Annex III, the six core obligations each must meet, the conformity assessment and CE marking process, and how these requirements interact with GDPR.

For an overview of the full regulation, see our EU AI Act compliance guide. For help determining whether your system qualifies as high-risk in the first place, consult our AI Act risk classification guide.

What Qualifies as a High-Risk AI System?

Under Regulation (EU) 2024/1689, an AI system is classified as high-risk through two pathways. First, if it is a safety component of a product (or is itself a product) covered by the EU harmonisation legislation listed in Annex I and required to undergo a third-party conformity assessment under that legislation – this includes medical devices, machinery, toys, lifts, radio equipment, civil aviation, motor vehicles, and rail systems. Second, and more frequently relevant for software-based AI, if it falls within one of the eight domain areas listed in Annex III.

Article 6(3) introduces an important exception: a system listed in Annex III is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights – for instance because it only performs a narrow procedural or preparatory task. The exception never applies where the system performs profiling of natural persons, and the provider must document its assessment and register the system in the EU database before placing it on the market. In practice, most systems matching an Annex III category will not satisfy this exception.
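To make the two-pathway test concrete, the decision logic can be sketched in a few lines of Python. This is an illustrative simplification, not legal advice: the type name, the fields, and the condensation of the Article 6(3) assessment into booleans are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemProfile:
    is_annex_i_safety_component: bool  # pathway 1: Annex I product safety component
    annex_iii_category: Optional[str]  # pathway 2: e.g. "employment", or None
    performs_profiling: bool           # profiling of natural persons
    poses_significant_risk: bool       # outcome of the documented Art. 6(3) assessment

def is_high_risk(system: AISystemProfile) -> bool:
    """Simplified sketch of the Article 6 classification logic."""
    if system.is_annex_i_safety_component:
        return True                        # pathway 1: always high-risk
    if system.annex_iii_category is None:
        return False                       # neither pathway applies
    if system.performs_profiling:
        return True                        # the exception never applies to profiling
    return system.poses_significant_risk   # otherwise the Art. 6(3) exception may apply
```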

Which Categories Does Annex III Cover?

Annex III enumerates eight areas of use. Within each, specific system types are designated as high-risk. The European AI Office estimates that between 6,000 and 8,000 high-risk AI systems are already operating across Member States. Here are the categories.

Biometric identification and categorisation

Remote biometric identification systems (both real-time and post, subject to law enforcement conditions), emotion recognition systems, and biometric categorisation systems that infer sensitive attributes such as race, political opinions, or trade union membership. Note that emotion inference in workplaces and educational institutions is prohibited outright under Article 5 (except for medical or safety reasons), and real-time remote biometric identification for law enforcement has its own separate regime under the same article.

Critical infrastructure management

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. A 2024 report by ENISA found that 38% of critical infrastructure operators in the EU were already using AI for operational monitoring, making this one of the largest affected sectors.

Education and vocational training

Systems that determine access to educational institutions, evaluate learning outcomes, assess the appropriate level of education an individual should receive, or monitor and detect prohibited behaviour during tests. With over 72 million students enrolled across EU educational institutions as of 2025, the scale of affected systems is significant.

Employment and workforce management

AI used for recruitment (filtering applications, evaluating candidates), making decisions on promotion or termination, allocating tasks based on individual behaviour or personal traits, and monitoring or evaluating the performance of workers. According to a 2025 Eurobarometer survey, 29% of large EU employers had deployed at least one AI system in HR processes.

Access to essential services

Systems that evaluate credit scores or creditworthiness of natural persons, assess risk and pricing in life and health insurance, evaluate eligibility for public assistance benefits and services, and dispatch or prioritise emergency first response services.

Law enforcement

AI systems used to assess the risk of a natural person offending or reoffending, as polygraphs or similar tools during interrogation, to evaluate the reliability of evidence, and for profiling during detection, investigation, or prosecution of criminal offences.

Migration, asylum, and border control

Systems used as polygraphs or similar tools, to assess security risks posed by individuals, to assist in the examination of applications for asylum, visa, and residence permits, and for the detection, recognition, or identification of individuals in the context of migration.

Administration of justice and democratic processes

AI systems used to assist judicial authorities in researching and interpreting facts and the law and in applying the law to concrete facts. This extends to systems that may influence the outcome of elections or referendums, though not systems used for purely administrative tasks.

What Are the Six Core Obligations for High-Risk Systems?

Every high-risk AI system must comply with six mandatory requirements set out in Articles 9 to 14, complemented by the accuracy, robustness, and cybersecurity requirements of Article 15 covered at the end of this section. These obligations apply to providers (developers) before the system is placed on the market, and several create ongoing duties throughout the system’s lifecycle.

Obligation 1: Risk management system (Article 9)

Providers must establish, implement, document, and maintain a risk management system that runs throughout the entire lifecycle of the high-risk AI system. This is not a one-off assessment. Article 9 requires identification and analysis of known and foreseeable risks, estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse, and adoption of appropriate risk management measures.

The risk management system must include residual risk evaluation after mitigation measures are applied. Testing must be performed before market placement and, where relevant, throughout the system’s lifetime. The European Commission estimates that a compliant risk management system for a medium-complexity high-risk AI system requires between 120 and 300 person-hours to establish initially.
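In practice, many providers operationalise Article 9 as a living risk register. The sketch below shows one minimal shape such an entry might take; every field name and the three-level scale are internal conventions assumed for illustration, not terms from the regulation.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    description: str                   # known or foreseeable risk
    affected_interest: str             # health, safety, or a fundamental right
    from_foreseeable_misuse: bool      # identified under reasonably foreseeable misuse?
    inherent_level: Level              # estimated before mitigation
    mitigations: list = field(default_factory=list)  # adopted risk management measures
    residual_level: Level = Level.HIGH # re-evaluated after mitigation
    last_reviewed: date = field(default_factory=date.today)  # lifecycle duty, not one-off
```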

This obligation mirrors the concept behind a Data Protection Impact Assessment (DPIA) under GDPR, but is broader in scope – it covers all risks to health, safety, and fundamental rights, not only data protection risks.

Obligation 2: Data governance (Article 10)

Training, validation, and testing datasets must meet specific quality criteria. Article 10 requires that datasets be relevant, sufficiently representative, and as free of errors as possible. Providers must examine datasets for possible biases, particularly where outputs affect natural persons, and take measures to address identified gaps or shortcomings.

Where personal data is involved, Article 10(5) permits processing of special categories of personal data (as defined in GDPR Article 9) strictly to the extent necessary for bias monitoring and detection, subject to appropriate safeguards. This is one of the most operationally complex provisions for organisations that process EU personal data in their training pipelines.
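A basic bias examination under Article 10 can start with something as simple as comparing outcome rates across groups. The following sketch assumes a flat record format with illustrative "group" and "selected" fields; real pipelines would use richer fairness metrics, but the structure of the check is the same.

```python
from collections import defaultdict

def selection_rates(records: list) -> dict:
    """Per-group positive-outcome rates for a first-pass bias examination.

    Each record is assumed to carry a 'group' label (e.g. a protected
    attribute processed under the Art. 10(5) safeguard regime) and a
    binary 'selected' outcome. Field names are illustrative.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["selected"])
    return {g: positives[g] / totals[g] for g in totals}

# A gap between groups beyond an internally defined threshold would be
# documented as a shortcoming and addressed under Article 10.
data = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 0},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]
print(selection_rates(data))  # {'A': 0.5, 'B': 0.0}
```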

Obligation 3: Technical documentation (Article 11)

Before a high-risk AI system is placed on the market, the provider must draw up technical documentation demonstrating compliance with all the requirements of Chapter III, Section 2. Annex IV specifies what this documentation must contain: a general description of the system, detailed information on its development methodology, design specifications and architecture, data requirements, and the risk management measures adopted.

The documentation must be kept up to date throughout the system’s lifecycle and must be available to national competent authorities upon request.

Obligation 4: Record-keeping and logging (Article 12)

High-risk AI systems must be designed and developed with capabilities enabling the automatic recording of events (logs) throughout the system’s operational lifetime. These logs must be adequate to enable post-hoc monitoring, tracing of the system’s operation, and identification of risk situations. For remote biometric identification systems, Article 12 additionally prescribes minimum content: logs must record the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identity of the natural persons involved in verifying the results.
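As one way to operationalise these duties, a provider might emit structured, append-only records such as the sketch below. The function and field names are illustrative assumptions; only the four content items listed above come from the regulation.

```python
import json
from datetime import datetime, timezone

def log_biometric_match(reference_db: str, input_ref: str, verifier_ids: list) -> str:
    """Emit one Article 12-style log record for a remote biometric
    identification system. Field names are illustrative, not prescribed."""
    record = {
        "use_started_at": datetime.now(timezone.utc).isoformat(),  # period of use
        "reference_database": reference_db,  # DB the input data was checked against
        "matched_input": input_ref,          # pointer to the input that produced a match
        "verified_by": verifier_ids,         # persons who verified the result
    }
    return json.dumps(record)  # ship to append-only, retention-managed storage
```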

Obligation 5: Transparency and information to deployers (Article 13)

High-risk AI systems must be designed to operate with sufficient transparency to enable deployers (the organisations using the system) to interpret the system’s output and use it appropriately. Instructions for use must accompany the system, written in clear and intelligible language. They must include the provider’s identity; the system’s characteristics, capabilities, and limitations; its intended purpose; the level of accuracy, robustness, and cybersecurity against which it has been tested; known or foreseeable circumstances that may lead to risks; and the human oversight measures in place.

A 2025 Stanford HAI audit of 48 commercial AI systems marketed in the EU found that only 31% met the transparency disclosure requirements set out in Article 13 at the time of assessment.

Obligation 6: Human oversight (Article 14)

High-risk AI systems must be designed to allow effective oversight by natural persons during the period of use. Human oversight measures must aim to prevent or minimise risks to health, safety, or fundamental rights. The individuals assigned to oversight must be able to fully understand the system’s capacities and limitations, properly monitor its operation, decide not to use the system, disregard, override, or reverse its output, and intervene in or interrupt its operation.
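One common design pattern for satisfying these conditions is a decision gate in which the system output is never more than a suggestion to a human reviewer. The sketch below assumes a single confidence score and an illustrative threshold; both are design choices, not figures from the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedDecision:
    outcome: str                       # the human reviewer's final decision
    system_suggestion: Optional[str]   # None when the suggestion was withheld

def overseen_decision(model_output: str, confidence: float,
                      human_choice: str, threshold: float = 0.90) -> ReviewedDecision:
    """Oversight-gate sketch: the human decision is always final, and the
    system output is only ever a suggestion. Below the (illustrative)
    confidence threshold the suggestion is withheld to limit automation bias."""
    suggestion = model_output if confidence >= threshold else None
    return ReviewedDecision(outcome=human_choice, system_suggestion=suggestion)
```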

Article 14 effectively prohibits full automation without meaningful human control for any high-risk use case.

Accuracy, robustness, and cybersecurity (Article 15)

In addition to the six obligation areas above, Article 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Accuracy levels and relevant metrics must be declared in the instructions for use. The system must be resilient against errors, faults, or inconsistencies, and against attempts by unauthorised third parties to exploit vulnerabilities (including AI-specific attacks such as data poisoning, adversarial examples, and model extraction).
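Robustness claims need evidence, and a simple starting point is checking prediction stability under small input perturbations. The sketch below is a crude smoke test under assumed names and parameters, not a substitute for the benchmarking an Article 15 declaration would rest on.

```python
import random

def perturbation_stability(predict, inputs: list, noise: float = 0.01,
                           trials: int = 20) -> float:
    """Fraction of inputs whose prediction is unchanged under small random
    perturbations. `predict` maps a list of floats to a label; the noise
    scale and trial count are illustrative choices."""
    stable = 0
    for x in inputs:
        baseline = predict(x)
        if all(
            predict([v + random.uniform(-noise, noise) for v in x]) == baseline
            for _ in range(trials)
        ):
            stable += 1
    return stable / len(inputs)
```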

How Does the Conformity Assessment Work?

Before placing a high-risk AI system on the EU market, the provider must undergo a conformity assessment to demonstrate compliance with all the requirements of Chapter III, Section 2. The type of assessment depends on the system’s domain.

For most Annex III categories, the provider may carry out an internal conformity assessment based on Annex VI. This means the provider self-certifies compliance by verifying its quality management system and technical documentation against the requirements. A notified body is not required for these systems.

However, for biometric systems listed in Annex III point 1, the conformity assessment must generally involve a notified body (a third-party assessment body designated by a Member State); internal assessment is only available where the provider has fully applied the relevant harmonised standards or common specifications. For systems covered under Annex I harmonisation legislation – such as medical devices or machinery components – the conformity assessment follows the procedures already established under that sector-specific legislation, with the AI Act requirements integrated as additional elements.

After a successful conformity assessment, the provider issues an EU Declaration of Conformity (Article 47) and affixes the CE marking (Article 48) to the system or its packaging and documentation.

What Is Required for EU Database Registration?

Article 49 requires providers and, where applicable, deployers that are public authorities to register high-risk AI systems in the EU database established under Article 71 before they are placed on the market or put into service. The database is publicly accessible and managed by the European Commission. Registration data includes the provider’s name and contact details, a summary of the system’s intended purpose, its risk classification, conformity assessment status, and the Member States in which the system is available.

As of early 2026, the EU AI database contained over 4,200 registered entries, though the Commission has acknowledged that registration rates remain below projections for several Annex III categories.

What Does Post-Market Monitoring Require?

Article 72 obliges providers to establish a post-market monitoring system proportionate to the nature of the AI system and its risk level. This system must actively and systematically collect, document, and analyse data on the system’s performance throughout its lifetime, drawing on information provided by deployers and any other source.

When a serious incident occurs or a system presents a risk that is not in conformity, the provider must report to the market surveillance authority of the relevant Member State. For systems that continue to learn after deployment, the post-market monitoring obligation is particularly demanding, as changes in model behaviour must be tracked and assessed against the original conformity baseline.
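A minimal post-market monitoring signal is a comparison of live performance against the level declared at conformity assessment. In the sketch below, the tolerance value is an internal policy choice assumed for illustration; the regulation does not prescribe one.

```python
def drift_alert(baseline_accuracy: float, window_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag when live performance falls below the conformity baseline.

    `baseline_accuracy` is the level declared at conformity assessment;
    `tolerance` is an internal threshold, not a figure from the regulation.
    """
    return window_accuracy < baseline_accuracy - tolerance

# e.g. declared 0.92 at assessment, observing 0.84 over the last month:
assert drift_alert(0.92, 0.84)  # triggers investigation and possible reporting
```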

How Do High-Risk AI Obligations Interact With GDPR?

The intersection between AI Act obligations and GDPR is one of the most operationally demanding areas for providers of high-risk AI systems. Several obligations create direct GDPR dependencies.

Personal data in training sets

Article 10’s data governance requirements mean that providers processing personal data in training, validation, or testing datasets must comply with GDPR simultaneously. This includes establishing a lawful basis for processing under Article 6 GDPR, conducting a DPIA under Article 35 where the processing is likely to result in a high risk to individuals, and ensuring data minimisation – only processing personal data that is adequate, relevant, and limited to what is necessary.

Article 10(5) of the AI Act creates a narrow legal basis for processing special category data for bias detection, but it does not override GDPR requirements. Organisations must still identify appropriate safeguards, which in practice means pseudonymisation, access controls, and strict purpose limitation.

DPIA requirements for high-risk AI

Article 26(9) of the AI Act explicitly requires deployers of high-risk AI systems to use the information provided under Article 13 (transparency obligations) to carry out a DPIA under GDPR Article 35 where applicable. This creates a direct regulatory link: the provider’s transparency documentation feeds into the deployer’s GDPR impact assessment.

For a comprehensive understanding of how these two regulations overlap, see our detailed analysis of AI Act vs GDPR. Tools like Legiscope can help organisations manage the GDPR compliance workflows that run in parallel with AI Act obligations, particularly around DPIA documentation, record-keeping, and data processing inventories.

Record-keeping alignment

Article 12’s logging requirements must be reconciled with GDPR data retention principles. Logs that capture personal data are themselves subject to purpose limitation and storage limitation under GDPR Articles 5(1)(b) and 5(1)(e). Organisations must define retention periods for AI system logs that satisfy both the AI Act’s traceability requirements and GDPR’s data minimisation obligations. Our GDPR compliance checklist covers these retention considerations in detail.
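Article 19 of the AI Act sets a floor for provider-held logs of at least six months, while GDPR storage limitation pushes in the other direction toward defined deletion points. A retention rule reconciling the two might look like the sketch below, where the twelve-month ceiling is an assumed internal policy, not a figure from either regulation.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_FLOOR = timedelta(days=183)    # at least six months (AI Act Article 19)
RETENTION_CEILING = timedelta(days=365)  # illustrative internal cap under GDPR Art. 5(1)(e)

def should_delete(log_created_at: datetime, now: Optional[datetime] = None) -> bool:
    """Delete once the GDPR-motivated ceiling is exceeded; because the ceiling
    is set above the AI Act floor, both constraints hold by construction."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at > RETENTION_CEILING

assert RETENTION_CEILING >= RETENTION_FLOOR  # policy sanity check
```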

Frequently Asked Questions

When do high-risk AI obligations take effect?

The main obligations for high-risk AI systems under Annex III take effect on 2 August 2026. Systems that are safety components of products covered by Annex I harmonisation legislation have until 2 August 2027 to comply with the full requirements.

Can a provider self-certify compliance for a high-risk system?

For most Annex III categories, yes. The provider conducts an internal conformity assessment under Annex VI. The exception is biometric systems (Annex III, point 1), which require a notified body unless the provider has fully applied the relevant harmonised standards or common specifications.

What penalties apply for non-compliance with high-risk obligations?

Non-compliance with high-risk system obligations carries fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher. For SMEs and startups, the regulation provides that the lower of the two amounts applies.

Does every high-risk system need a DPIA?

Not automatically. A DPIA is required under GDPR Article 35 when the processing carried out by the AI system is likely to result in a high risk to the rights and freedoms of natural persons. However, given the nature of high-risk AI categories under Annex III, the overlap is substantial – most deployments involving personal data will trigger the DPIA threshold.

Is CE marking mandatory for high-risk AI systems?

Yes. After completing the conformity assessment, the provider must affix the CE marking to the high-risk AI system before placing it on the EU market. The marking indicates conformity with the AI Act requirements and is a prerequisite for lawful market placement.

How does the EU database for high-risk AI work?

Providers must register their high-risk AI system in a publicly accessible EU database managed by the European Commission before placing it on the market. Deployers in the public sector must also register. The database contains the provider’s details, the system’s intended purpose, conformity assessment status, and the Member States where the system is or will be available.


What obligations apply to high-risk AI systems under the AI Act?

Articles 9 to 15 set out the core requirements: a risk management system; data governance for training, validation, and testing datasets; technical documentation; automatic logging (record-keeping); transparency to deployers; human oversight mechanisms; and accuracy, robustness, and cybersecurity measures. Providers must also register the system in the EU AI database.

What is a conformity assessment for high-risk AI?

Before placing a high-risk AI system on the market, providers must complete a conformity assessment (Article 43) to verify that the system meets all the requirements of Chapter III, Section 2. For most Annex III systems, self-assessment is permitted with full documentation. Certain biometric AI systems require third-party assessment by a notified body.

What documentation is required for high-risk AI systems?

Article 11 and Annex IV require: general description; design methodology; monitoring, testing, and validation; standards applied; risk management documentation; data governance practices; human oversight measures; and intended purpose limitations. This documentation must be maintained throughout the system lifecycle.

What are the penalties for non-compliance with high-risk AI Act obligations?

Fines of up to €35M or 7% of global annual turnover apply to prohibited AI practices; up to €15M or 3% for non-compliance with high-risk system obligations and most other requirements; and up to €7.5M or 1% for supplying incorrect information to authorities. Penalties apply to providers and, in certain cases, deployers.

Automate your GDPR compliance

Save 340+ hours per year on compliance work. Legiscope provides AI-powered GDPR management trusted by compliance professionals.

Discover Legiscope
Written by
Dr. Thiébaut Devergranne
Founder of Legiscope and GDPR expert

Doctor of Law from Université Panthéon-Assas (Paris II), with 23 years of experience in digital law and GDPR compliance. Former adviser to the French Prime Minister's administration on GDPR implementation. Thiébaut is the founder of Legiscope, an AI-powered GDPR compliance automation platform.