High-risk AI systems under the EU AI Act - a sector-by-sector guide
Last update: June 7, 2025
Introduction
In June 2024, Regulation (EU) 2024/1689 - better known as the EU Artificial Intelligence Act - was formally adopted. The law introduces a differentiated, risk-based framework with the dual aim of stimulating innovative applications of artificial intelligence while protecting citizens from harmful effects of AI use. AI applications labeled as high risk form the 'red zone' in this framework: they are not prohibited but are subject to stricter requirements.
The final text automatically places eight groups of use cases, listed in Annex III, in the high-risk category, provided they do not fall under an explicit exception. This blog discusses each of these sectors, clarifies why they are considered more critical than other applications by legislators, and describes the concrete compliance steps providers and users must now take. Additionally, it covers the timeline - the rules apply in full from August 2, 2026 - and the interaction with other EU legislation.
Annex III at a glance
Annex III contains an exhaustive list of eight domains in which AI systems potentially have a major impact on safety or fundamental rights. These are:
- Biometrics & emotion analysis
- Critical infrastructure
- Education and training
- Work & HR processes
- Essential (public and private) services
- Law enforcement
- Migration, asylum & border management
- Justice & democratic processes
Anyone deploying an AI system in one of these domains that falls under the described use cases cannot simply shed the high-risk label: the legislator has already made the proportionality assessment, and only the narrow exceptions of Article 6(3) offer a way out.
1 Biometrics and emotion recognition
What does the Act say?
The very first point in Annex III reveals what Brussels is most concerned about: AI systems that identify people by their biometric features, sort them into categories, or attempt to read their emotions. Examples include remote biometric identification on the street, systems that infer age, gender, or ethnicity in a shopping mall, and camera analyses that purportedly detect anger.
Why high risk?
Biometric models directly impact the fundamental right to privacy and can lead to discrimination. Moreover, errors are made 'at scale': once live, the system potentially scans thousands of people per minute.
Compliance tips
- Check legal basis – The AI Act requires that deployment is "permitted under Union or national law."
- Data governance – Collect representative datasets, document bias tests, and record data provenance (a minimal test sketch follows this list).
- Conduct FRIA – Article 27 requires deployers such as public bodies and private operators providing public services to document a fundamental rights impact assessment before first use.
- Transparency obligation – Users must know they are being scanned; 'ghost use' is prohibited.
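How such a documented bias test can look in practice is shown below: a minimal sketch that compares false match and false non-match rates across demographic groups. The column names, input file, and the 0.8 decision threshold are illustrative assumptions, not requirements from the Act.

```python
# Minimal sketch of a subgroup error-rate check for a biometric matcher.
# Assumes one row per verification attempt with a match score, the ground
# truth (same person or not), and a self-reported demographic group.
# Column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd

THRESHOLD = 0.8  # illustrative decision threshold

def error_rates_per_group(df: pd.DataFrame) -> pd.DataFrame:
    """False match rate (FMR) and false non-match rate (FNMR) per group."""
    df = df.copy()
    df["predicted_match"] = df["score"] >= THRESHOLD
    rows = []
    for group, sub in df.groupby("demographic_group"):
        impostors = sub[~sub["is_same_person"]]
        genuines = sub[sub["is_same_person"]]
        rows.append({
            "group": group,
            "n": len(sub),
            "FMR": impostors["predicted_match"].mean() if len(impostors) else float("nan"),
            "FNMR": (~genuines["predicted_match"]).mean() if len(genuines) else float("nan"),
        })
    return pd.DataFrame(rows)

if __name__ == "__main__":
    attempts = pd.read_csv("verification_attempts.csv")
    print(error_rates_per_group(attempts).to_string(index=False))
```

Large gaps between groups are exactly the kind of finding that belongs in the technical documentation and, where relevant, in the FRIA.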
Practical example
In various EU member states, including the Netherlands, police pilots with live facial recognition have been temporarily suspended following criticism from civil rights organizations. It illustrates how quickly biometrics shifts from lab to street and why the EU wants such tests to fall under clear governance.
2 Critical infrastructure (energy, traffic, digital networks)
What does the Act say?
AI functioning as a safety component in the operation of water, gas, heat, or electricity networks, in road traffic management, or in other critical digital infrastructure is high risk.
Why high risk?
An erroneous classification model in an electricity grid can lead to blackouts; a wrong prediction in traffic management can cause accidents. The societal dependence on always-on services justifies additional safeguards.
Compliance tips
- Dual conformity assessment – Product regulations (e.g., the Machinery Regulation) and the AI Act both apply.
- Fail-safe by design – Article 15 requires robustness and resilience: the system must degrade gracefully and fail safely (a sketch of this pattern follows this list).
- Continuous monitoring – Post-market surveillance must enable rapid incident reporting.
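What "fail safely" can mean at code level is sketched below: a wrapper that falls back to a conservative default whenever the model errors out or reports low confidence. The model interface, threshold, and fallback policy are illustrative assumptions, not prescriptions from the Act or from any vendor.

```python
# Minimal sketch of a fail-safe wrapper around an AI safety component,
# e.g. a model that proposes setpoints for grid or traffic control.
# The interface, confidence threshold, and safe default are illustrative.
import logging
from typing import Protocol

logger = logging.getLogger("safety_component")

class SetpointModel(Protocol):
    def predict(self, sensor_data: dict) -> tuple[float, float]:
        """Returns (setpoint, confidence between 0 and 1)."""
        ...

SAFE_DEFAULT = 0.0     # conservative setpoint used when degrading
MIN_CONFIDENCE = 0.9   # below this, ignore the model's proposal

def controlled_setpoint(model: SetpointModel, sensor_data: dict) -> float:
    """Use the model's proposal when it is trustworthy; degrade gracefully otherwise."""
    try:
        setpoint, confidence = model.predict(sensor_data)
    except Exception:
        logger.exception("Model failure; falling back to safe default")
        return SAFE_DEFAULT
    if confidence < MIN_CONFIDENCE:
        logger.warning("Low confidence (%.2f); falling back to safe default", confidence)
        return SAFE_DEFAULT
    return setpoint
```

Every fallback event is logged, which in turn feeds the post-market surveillance mentioned above.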
Practical example
In 2025, Siemens tested a generative AI module within its predictive maintenance platform to predict turbine failures days in advance. This reduced unplanned downtime in three wind farms by 25 percent.
3 Education and vocational training
What does the Act say?
AI that influences admission to, progress in, or outcomes of educational institutions - think of automatic proctoring, adaptive testing platforms, or study guidance algorithms - falls under the high-risk rules.
Why high risk?
Access to education determines later opportunities in the labor market. Bias in a selection algorithm can structurally disadvantage groups; incorrectly detecting 'fraud' can cause reputational damage.
Compliance tips
- Human in the loop – Article 14 requires that the system is designed for effective human oversight, so outcomes can be meaningfully reviewed by a person before they become final (a routing sketch follows this list).
- User participation – Involve teachers and students in the risk assessment; they provide practical feedback.
- Open benchmarks – Consider publicly available audit benchmarks to measure bias in scoring and proctoring models.
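Below is a minimal sketch of how the human-in-the-loop requirement can be operationalised: adverse model outputs are never finalised automatically but are queued for a reviewer. The data structure and field names are illustrative assumptions, not an interface defined by the Act.

```python
# Minimal sketch of human-in-the-loop routing for an educational AI system,
# e.g. a proctoring or admission-scoring tool. Fields are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewCase:
    student_id: str
    model_output: str                  # e.g. "possible fraud" or a ranking score
    confidence: float
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_decision: str | None = None  # filled in by the reviewer, never by the model

review_queue: list[ReviewCase] = []

def route_adverse_outcome(student_id: str, model_output: str, confidence: float) -> ReviewCase:
    """Queue the case so a teacher or examiner takes the final decision."""
    case = ReviewCase(student_id, model_output, confidence)
    review_queue.append(case)
    return case

def record_human_decision(case: ReviewCase, decision: str) -> None:
    """The reviewer's decision, not the model's output, becomes the final outcome."""
    case.human_decision = decision
```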
Practical example
In 2023, a Dutch student sued her university for discrimination by online proctoring software. The Netherlands Institute for Human Rights determined that the software might violate the principle of equality and called on educational institutions to implement stricter audits.
4 Work, HR management & access to self-employment
What does the Act say?
Algorithms that filter applicants, determine promotions, monitor productivity, or schedule shifts belong to this category.
Why high risk?
A black-box scoring model can make or break a person's career. The asymmetry between employer (data) and employee (limited insight) increases the power imbalance.
Compliance tips
- Explainability – Candidates and employees must be able to understand how a score or ranking came about; document the model logic and the main factors behind each outcome.
- Audit trail – Maintain log files (Article 19) so you can demonstrate afterwards how a decision was reached (a logging sketch follows this list).
- Stakeholder consultation – Company and works council proactively discuss AI policy.
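The audit-trail tip can be made concrete with a few lines of code: one append-only, timestamped record per automated decision, including the factors that drove it. The JSON-lines format, field names, and the idea of storing per-feature contributions are illustrative assumptions, not a format prescribed by Article 19.

```python
# Minimal sketch of an audit trail for an HR screening model: one
# append-only, timestamped record per automated decision.
# File name, fields, and the feature-contribution source are illustrative.
import json
from datetime import datetime, timezone

AUDIT_LOG = "hr_screening_audit.jsonl"

def log_screening_decision(candidate_id: str, model_version: str, score: float,
                           top_features: dict[str, float], outcome: str) -> None:
    """Append one record so the decision can be reconstructed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,
        "score": score,
        "top_features": top_features,  # e.g. contributions from an explainability tool
        "outcome": outcome,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage (hypothetical values):
# log_screening_decision("cand-042", "screening-model-1.3", 0.72,
#                        {"years_experience": 0.4, "skill_match": 0.3},
#                        "invited_to_interview")
```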
Practical example
In 2024, the French data protection authority imposed a multi-million euro fine on Amazon because warehouse employees were monitored by productivity algorithms without adequate transparency about how their targets were set. The case underscores that algorithmic transparency towards workers is already being enforced under existing law, even before the AI Act's high-risk rules apply.
5 Essential public and private services
What does the Act say?
When an AI system decides on access to healthcare, social security, credit, or insurance, or classifies emergency calls, it becomes high risk.
Why high risk?
Wrongfully denied healthcare or micro-loans can directly lead to harm and increase social inequality.
Compliance tips
- Dataset representativity – Credit and insurance models must explicitly test for proxy discrimination (a minimal test sketch follows this list).
- Monitoring emergency calls – For dispatch algorithms, additional obligations regarding robustness and accuracy apply.
- Link GDPR-DPIA – Bundle the FRIA with a Data Protection Impact Assessment.
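A proxy-discrimination test does not have to be complicated to be useful. The sketch below compares approval rates across groups and flags features that correlate strongly with a protected attribute; the column names, input file, and thresholds are illustrative assumptions, not legal limits.

```python
# Minimal sketch of a proxy-discrimination check for a credit model.
# Column names, the correlation threshold, and the input file are illustrative.
import pandas as pd

def relative_approval_rates(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate per group, relative to the best-scoring group."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()   # ratios far below 1.0 warrant investigation

def proxy_candidates(df: pd.DataFrame, protected_col: str,
                     feature_cols: list[str], threshold: float = 0.4) -> list[str]:
    """Numeric features whose correlation with the protected attribute exceeds the threshold."""
    protected_codes = df[protected_col].astype("category").cat.codes
    return [
        col for col in feature_cols
        if abs(protected_codes.corr(df[col])) > threshold
    ]

if __name__ == "__main__":
    decisions = pd.read_csv("credit_decisions.csv")
    print(relative_approval_rates(decisions, "demographic_group", "approved"))
    print(proxy_candidates(decisions, "demographic_group",
                           ["postal_code_income", "years_at_address"]))
```

Such a check is a starting point, not a verdict: flagged features still require substantive assessment and documentation.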
Practical example
In 2025, the Spanish bank BBVA published a bias stress test for Spanish-language language models as part of its credit assessment process.
6 Law enforcement
What does the Act say?
Police and prosecution tools that assess recidivism risk, support evidence evaluation, or proactively predict crime fall under the strictest regime.
Why high risk?
False positives have far-reaching consequences: arrests, detention, or discriminatory surveillance tactics.
Compliance tips
- Legal basis – Only deploy if explicitly permitted under national law.
- Validated performance – Article 15 requires appropriate, documented levels of accuracy and robustness, backed by validated test data.
- Legal remedies – Citizens must have effective appeal procedures.
Practical example
In France, the gendarmerie paused the predictive policing project PAVED in 2025 following criticism about a lack of transparency.
7 Migration, asylum, and border management
What does the Act say?
AI that automates risk analysis of travelers, admission of asylum seekers, or detection of falsified documents becomes high risk.
Why high risk?
Incorrect risk scores can lead to unjustified border denials or detention.
Compliance tips
- Diversity in test data – Train and validate on datasets that reflect the global diversity of travelers.
- Ex-ante authorization – Some applications require approval from a supervisory authority.
- Cross-border governance – Collaborate with Frontex and the AI Office.
Practical example
The EU research 'iBorderCtrl' into AI lie detectors came under fire again at the European Court in 2024 due to inadequate transparency.
8 Justice and democratic processes
What does the Act say?
Tools that advise judges on jurisprudence or attempt to influence voters become high risk.
Why high risk?
The independence of the judiciary and the integrity of elections are at the core of the rule of law.
Compliance tips
- Transparency in court – AI-supported analyses must be verifiable.
- Political advertising library – Publish advertising data.
- Impact on pluralism – FRIA must map media effects.
Practical example
Estonia added mandatory human validation to its 'AI judge' in 2024 after lawyers pointed out deficiencies in the appeal option.
Horizontal obligations for all high-risk systems
- Risk management system – Article 9 describes an iterative risk management loop.
- Data and data governance – Quality and origin of data must be transparent (Article 10).
- Technical documentation – Annex IV lists mandatory components.
- Registration & CE marking – Register the system in the central EU database and affix CE marking before placing it on the market.
- Fundamental rights impact assessment – Mandatory for public-sector deployers, private operators providing public services, and certain credit and insurance use cases (Article 27).
- Incident reporting – Report serious incidents to the market surveillance authority, at the latest within 15 days of becoming aware of them (Article 73).
Timeline and transitional regime
The AI Act entered into force on August 1, 2024 and has phased application:
- February 2, 2025 – Prohibited practices and AI literacy obligation.
- August 2, 2025 – Rules for general-purpose AI models and governance.
- August 2, 2026 – Core obligations for high-risk AI.
- August 2, 2027 – Obligations for high-risk AI embedded in products already covered by EU product legislation (Annex I).
Practical steps for organizations
- Inventory all your AI applications (a minimal register sketch follows this list).
- Conduct a gap analysis.
- Assemble a multidisciplinary team.
- Develop FRIA and DPIA processes.
- Consider joining the voluntary AI Pact.
- Plan internal audits and external assessments in a timely manner.
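As a starting point for the first two steps, a minimal sketch of an internal AI register entry is shown below. The fields, labels, and example values are illustrative assumptions, not a prescribed registration format.

```python
# Minimal sketch of an internal AI system register supporting the inventory
# and gap analysis steps. Fields and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    name: str
    owner: str                       # accountable business owner
    purpose: str
    annex_iii_area: str | None       # e.g. "4. Work & HR processes", or None
    risk_class: str                  # "prohibited" | "high" | "limited" | "minimal"
    gaps: list[str] = field(default_factory=list)  # open compliance actions

inventory: list[AISystemEntry] = [
    AISystemEntry(
        name="CV screening tool",    # hypothetical example
        owner="HR",
        purpose="Rank incoming applications",
        annex_iii_area="4. Work & HR processes",
        risk_class="high",
        gaps=["No FRIA yet", "Log retention period not defined"],
    ),
]

high_risk_backlog = [e for e in inventory if e.risk_class == "high" and e.gaps]
print(f"{len(high_risk_backlog)} high-risk system(s) with open compliance gaps")
```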
Conclusion
The EU AI Act marks a turning point in European AI regulation. For the eight high-risk sectors from Annex III, this concretely means: more documentation, stricter audits, and a greater emphasis on fundamental rights. Organizations that now invest in explainability and human-centered governance win the trust of customers and regulators and create a sustainable competitive advantage.
Good luck on your compliance journey – and keep an eye on our blog for updates!