Fundamental Rights Impact Assessment (FRIA)
Fundamental rights assessment for high-risk AI systems
Requirement
Under Article 27, certain deployers of high-risk AI systems must conduct a FRIA before putting the system into use: bodies governed by public law, private entities providing public services, and deployers of high-risk systems used for creditworthiness assessment or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and (c)).
1. General Information
2. Purpose and Context
3. Fundamental Rights Assessment
Assess the potential impact of the AI system on each fundamental right below.
Human dignity (Art. 1 EU Charter)
Can the system affect human dignity?
E.g. reducing people to data, discriminatory treatment, or violation of personal integrity.
Privacy & data protection (Art. 7-8)
Does the system process personal data? What risks exist?
E.g. unauthorized access, violations of purpose limitation, excessive retention periods, obstacles to exercising data subject rights.
Non-discrimination (Art. 21)
Can the system lead to discrimination?
E.g. based on race, gender, age, disability, religion, sexual orientation.
Equality before the law (Art. 20)
Are persons treated equally by the system?
E.g. consistent decision-making, no arbitrariness.
Effective legal remedy (Art. 47)
Can affected persons effectively challenge decisions made by or with the system?
E.g. possibility of human review, appeal and objection options.
Freedom of expression (Art. 11)
Does the system affect free expression?
E.g. content moderation, recommendation algorithms, censorship.
Access to services of general economic interest (Art. 36)
Can the system limit access to essential services?
E.g. healthcare, education, social security, financial services.
4. Vulnerable Groups
5. Risk Mitigation
6. Conclusion and Decision
Disclaimer: This template is a compliance tool and not legal advice. Consult a lawyer for specific situations.
Based on Article 27 EU AI Act (Regulation 2024/1689)