Responsible AI Platform

AI Risk Classification Decision Tree

Determine the risk classification of your AI system according to the EU AI Act

Follow this flowchart to determine which risk category your AI system falls under. Answer each question in order; the first category you reach applies.

Start

Is it an AI system according to the AI Act definition (Art. 3(1))?

An AI system is a machine-based system that operates with some degree of autonomy and infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions.

✗ No → Out of Scope: the AI Act does not apply to this system.
✓ Yes → continue to the next question.

Does the system fall under one of the prohibited practices (Art. 5)?

• Subliminal manipulation causing harm
• Exploitation of vulnerabilities (age, disability, social or economic situation)
• Social scoring of natural persons
• Real-time remote biometric identification in publicly accessible spaces (narrow exceptions apply)
• Emotion recognition in the workplace or in education
• Biometric categorization based on sensitive characteristics
• Untargeted scraping of facial images to build facial recognition databases

Yes → PROHIBITED: stop development and use immediately, and seek legal advice.
No → continue to the next question.

Does the system fall under one of the high-risk categories (Annex III)?

• Biometrics — identification, categorization
• Critical infrastructure — water, gas, electricity, traffic
• Education & vocational training — admission, assessment
• Employment — recruitment, selection, performance evaluation
• Essential services — access to social services, credit
• Law enforcement — risk assessment, detection, investigation
• Migration & border control — asylum, visa, border surveillance
• Administration of justice — assisting judicial authorities, alternative dispute resolution

Yes → HIGH-RISK: this AI system is classified as high-risk and must meet the following requirements:

• Risk management system
• Data governance
• Technical documentation
• Logging and traceability
• Transparency to users
• Human oversight
• Accuracy, robustness, cybersecurity
• Conformity assessment

No → continue to the next question.

Does the system interact directly with natural persons or generate content?

• Chatbots or virtual assistants
• Generation of synthetic content (text, image, audio)
• Emotion recognition (non-prohibited applications)
• Biometric categorization (non-prohibited applications)

Yes → LIMITED RISK: this AI system carries transparency obligations:

• Inform users that they are interacting with AI
• Clearly mark AI-generated content
• Inform persons about emotion recognition or biometric categorization

No → MINIMAL RISK: no specific obligations under the AI Act. Voluntary codes of conduct and best practices are recommended.
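As an illustrative sketch only (not legal automation), the decision flow above can be expressed as a small function. The boolean flags and names below are assumptions introduced for this example; they mirror the four questions in order, and the first match wins:

```python
from enum import Enum

class RiskClass(Enum):
    OUT_OF_SCOPE = "Out of scope"
    PROHIBITED = "Prohibited"
    HIGH_RISK = "High-risk"
    LIMITED_RISK = "Limited risk"
    MINIMAL_RISK = "Minimal risk"

def classify(is_ai_system: bool,
             prohibited_practice: bool,
             annex_iii_category: bool,
             interacts_or_generates: bool) -> RiskClass:
    """Walk the decision tree top to bottom; the first matching branch wins.

    Illustrative only: the real classification depends on the specific
    circumstances of the system (see the disclaimer below).
    """
    if not is_ai_system:
        return RiskClass.OUT_OF_SCOPE      # AI Act does not apply
    if prohibited_practice:
        return RiskClass.PROHIBITED        # Art. 5 practice
    if annex_iii_category:
        return RiskClass.HIGH_RISK         # Annex III category
    if interacts_or_generates:
        return RiskClass.LIMITED_RISK      # transparency obligations
    return RiskClass.MINIMAL_RISK          # no specific obligations

# Example: a recruitment screening tool (Annex III, Employment)
print(classify(True, False, True, True).value)  # → High-risk
```

Note that the checks are ordered: a system that is both an Annex III category and interacts with users is classified high-risk first, which matches the top-to-bottom flow of the chart.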

Legend

Prohibited - Stop immediately
High-risk - Extensive compliance required
Limited risk - Transparency obligations
Minimal risk - No specific obligations
Out of scope - AI Act not applicable

Disclaimer: This flowchart is a simplified representation. The final classification depends on specific circumstances. Consult the full AI Act text or an expert for binding advice.

Based on the EU AI Act (Regulation 2024/1689)

Version January 2025