There is a moment when the abstraction of European legislation suddenly becomes very concrete. It happens when a civil servant asks: "But what does this algorithm actually do to the rights of our residents?" That question -- simple, direct, human -- is precisely the core of the Fundamental Rights Impact Assessment that the EU AI Act mandates.
Article 27 of the AI Act introduces a new instrument that goes beyond the familiar privacy assessment. It requires certain organisations to take fundamental rights seriously before deploying an AI system. Not as a theoretical exercise, but as a concrete, documented, and officially notified assessment of how an AI system affects people's lives.
This guide is the most comprehensive treatment of the FRIA available in English. We walk through the legal basis word by word, work out the practice step by step, and do not shy away from the difficult questions. Because those are precisely what fundamental rights are about.
The Legal Basis: Article 27 Word by Word
Article 27 of Regulation (EU) 2024/1689 -- the EU AI Act -- is titled "Fundamental Rights Impact Assessment for High-Risk AI Systems." That sounds straightforward, but the scope is substantial. Let us follow the text.
The first paragraph opens with the core obligation: prior to deploying a high-risk AI system as referred to in Article 6(2) -- the systems listed in Annex III -- specific categories of deployers must perform an assessment of the impact on fundamental rights that the use of such a system may produce. That is the essence. The fundamental rights assessment is a pre-deployment obligation. You may not wait until after going live.
The obligation has one explicit exception: AI systems deployed as safety components in the management of critical infrastructure -- road traffic, water, gas, heating, electricity -- fall outside the FRIA requirement. The same organisations may still be FRIA-obligated for other AI applications outside that specific safety context.
The six elements that Article 27(1) mandates form the backbone of every FRIA:
Element (a) requires a description of the deployer's processes in which the high-risk AI system will be used, in line with its intended purpose. You are not just describing what the system does, but how it fits into your organisational workflow.
Element (b) asks for the period of time and frequency of use. Is the system deployed continuously, once per application, or periodically for reassessments? That context determines the scope of the impact.
Element (c) concerns the categories of natural persons and groups likely to be affected by its use in the specific context. This goes beyond direct users -- indirect stakeholders count too.
Element (d) forms the analytical core: the specific risks of harm likely to have an impact on the identified persons and groups. This must take into account the information provided by the provider pursuant to Article 13 (the transparency obligations for providers).
Element (e) concerns human oversight: how are oversight measures implemented in accordance with the instructions for use? Who monitors, and with what authority?
Element (f) closes with measures to be taken if risks materialise: internal governance arrangements, complaint mechanisms, escalation procedures.
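For organisations that track their assessments in software, the six mandatory elements can be captured as a simple record so a draft can be checked for completeness before submission. A minimal sketch -- the field names are illustrative, not an official schema:

```python
# Hypothetical record of the six Article 27(1) elements. The field
# names are our own shorthand, not terms from the Regulation.
from dataclasses import dataclass, fields

@dataclass
class FriaElements:
    process_description: str   # (a) deployer processes using the system
    period_and_frequency: str  # (b) period of time and frequency of use
    affected_groups: str       # (c) persons and groups likely to be affected
    risks_of_harm: str         # (d) specific risks of harm (draws on Art. 13 info)
    human_oversight: str       # (e) oversight measures per instructions for use
    mitigation_measures: str   # (f) governance, complaints, escalation

def missing_elements(fria: FriaElements) -> list[str]:
    """Return the names of any elements left empty in a draft."""
    return [f.name for f in fields(fria) if not getattr(fria, f.name).strip()]
```

A draft with an empty element (d), for example, would surface `risks_of_harm` as missing before the report goes anywhere near the market surveillance authority.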
Paragraph 2 clarifies that the obligation applies to the first use. For similar cases, deployers may rely on previously conducted FRIAs or existing impact assessments carried out by the provider. But as soon as any element has changed, the FRIA must be updated.
Paragraph 3 establishes the notification requirement: once the assessment is complete, deployers must notify the market surveillance authority of its results, submitting the filled-out template from Article 27(5). Organisations exempt under Article 46(1) -- such as situations involving public security -- may be relieved of this obligation.
Paragraph 4 governs the interaction with the DPIA under the GDPR (Article 35) or Directive 2016/680. If a DPIA has already been conducted, the FRIA complements it. You do not start over, but you add the fundamental rights dimension that goes beyond privacy.
Paragraph 5 authorises the AI Office to develop a template in the form of a questionnaire, including through an automated tool, to facilitate compliance. At the time of writing, this template has not yet been published. In the meantime, you can use our FRIA template based on Article 27 to get started.
Who Must Perform a FRIA?
The FRIA obligation does not apply to everyone using AI. Article 27 targets three specific categories of deployers.
Category 1: Bodies governed by public law. All entities governed by public law -- governments, municipalities, executive bodies, independent administrative bodies. In the Dutch context these include municipalities, provinces, UWV (employment agency), the Tax Authority, IND (immigration service), DUO (education executive), and the police. Once they deploy a high-risk AI system from Annex III (with the exception of point 2), they are FRIA-obligated.
Category 2: Private entities providing public services. Recital 96 of the AI Act explains what "public services" means: tasks in the public interest in areas including education, healthcare, social services, housing, and the administration of justice. A private healthcare provider, a social housing association, a privately operated educational institution receiving public funding -- all fall into this category when deploying high-risk AI. Utilities providing essential services may also fall here, unless they use AI specifically as a safety component in that infrastructure.
Category 3: Deployers of specific financial AI systems. This is the category that surprises most organisations. Regardless of whether an organisation is public or private, the FRIA obligation applies to deployers of AI systems for (a) assessing the creditworthiness of natural persons or establishing their credit score (Annex III, point 5(b)), and (b) risk assessment and pricing in relation to natural persons for life and health insurance (Annex III, point 5(c)). Banks, financial institutions, and insurers using such systems are therefore always FRIA-obligated. An exception applies to AI used exclusively for detecting financial fraud.
A practical note: the FRIA obligation lies with the deployer, not the provider (developer). Those who build and sell AI do not need to perform a FRIA. Those who deploy AI in their own operations -- and fall within one of the three categories -- do.
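The scoping rules above can be summarised as a short decision routine. A hypothetical sketch -- the category logic and Annex III point labels follow the text, but this is an illustration, not legal advice:

```python
# Hypothetical sketch of the three-category deployer scoping described
# above. Point "2" is the critical-infrastructure safety exception;
# points 5(b)/5(c) are the always-obligated financial systems.
from typing import Optional

PUBLIC_SERVICE_AREAS = {"education", "healthcare", "social services",
                        "housing", "administration of justice"}
FINANCIAL_FRIA_POINTS = {"5(b)", "5(c)"}  # credit scoring; life/health insurance

def fria_required(is_public_body: bool,
                  public_service_area: Optional[str],
                  annex_iii_point: str,
                  fraud_detection_only: bool = False) -> bool:
    if annex_iii_point.startswith("2"):           # safety components in critical infrastructure
        return False
    if annex_iii_point in FINANCIAL_FRIA_POINTS:  # Category 3: public or private
        return not fraud_detection_only           # fraud-detection carve-out
    if is_public_body:                            # Category 1
        return True
    return public_service_area in PUBLIC_SERVICE_AREAS  # Category 2
```

Note how a private bank running a credit-scoring system (point 5(b)) is caught even though it is neither a public body nor a public service provider, while the same bank's fraud-detection AI is not.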
When Must the FRIA Be Completed?
The answer is clear: before first use. Article 27(1) opens with "prior to deploying." There is no room for interpreting that you can start first and assess later. The FRIA is a pre-deployment instrument.
The reason is logical. An assessment must inform, not justify. If you go live first and then reflect on fundamental rights risks, the conclusions are likely to be used to defend the status quo rather than improve it. That is precisely the problem Article 27 aims to prevent.
After first use, the FRIA lives on. Whenever any of the six elements from paragraph 1 has changed -- the process has been modified, the target group expanded, the system updated -- the FRIA must be updated. It is not a one-time exercise but a living document.
In terms of the broader AI Act timeline: the obligations for high-risk AI systems under Article 6(2) apply from 2 August 2026. This means FRIAs for systems already in use must be completed by that date. For new systems deployed after that date, the FRIA obligation applies at go-live.
Which Fundamental Rights Do You Assess?
The FRIA is broader than the DPIA precisely because it covers the full spectrum of the EU Charter of Fundamental Rights. Article 7 (right to privacy) and Article 8 (protection of personal data) are included, but that is only the beginning.
Human dignity (Article 1 of the Charter) is the foundation. AI systems that reduce people to a score, a risk category, or a profile touch this right. Consider algorithms that rank welfare recipients by fraud probability: the way that is communicated and the consequences attached to it directly affect the dignity of those involved.
Non-discrimination (Article 21) is particularly relevant in the AI context. Algorithms trained on historical data reproduce historical inequalities. Direct discrimination -- the system explicitly produces different outcomes based on race or gender -- is rare and usually addressed in training data governance. Indirect discrimination -- the system uses proxy variables that correlate with protected characteristics -- is far harder to detect and at least as problematic. Your FRIA must address this explicitly.
Equality before the law (Article 20) requires that comparable cases be treated equally. Algorithmic systems can introduce inconsistency if they are poorly calibrated or if the input data is unevenly distributed.
Privacy and data protection (Articles 7 and 8) overlap here with the DPIA. You do not need to document this twice -- the FRIA complements the DPIA, as confirmed by Article 27(4).
Freedom of expression and information (Article 11) is relevant when AI assesses, filters, or ranks content. Systems making moderation decisions or influencing access to information touch this right.
Freedom of assembly and association (Article 12) can be affected by surveillance AI or systems analysing patterns in communication.
Right to good administration (Article 41) is crucial in government AI. Every citizen has the right to a reasoned decision, access to documents concerning them, and fair treatment. When an algorithm supports or automatically makes a decision, there must be a mechanism for human explanation and appeal.
Access to the courts (Article 47) concerns whether those affected can challenge an AI-based decision. If a credit score causes a mortgage application to be rejected, the applicant has the right to explanation and the ability to appeal. Your FRIA must describe how this is arranged.
Rights of the child (Article 24) are relevant when the system is deployed in contexts involving minors -- education, youth care, child protection.
Right to property (Article 17) and the right to social security and social assistance (Article 34) can be affected by AI in benefits administration or property management.
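Teams that work with structured checklists sometimes encode the Charter articles above as review prompts for the analysis phase. A hypothetical starting point -- the prompts are illustrative shorthand for the discussion above, not an exhaustive or official list:

```python
# Hypothetical checklist mapping Charter article numbers to the rights
# discussed above and a one-line review prompt for each.
CHARTER_CHECKLIST = {
    1:  ("Human dignity", "Does the system reduce people to a score or profile?"),
    7:  ("Respect for private life", "Assessed jointly with the DPIA (Art. 27(4))."),
    8:  ("Protection of personal data", "Assessed jointly with the DPIA."),
    11: ("Freedom of expression and information", "Does it assess, filter, or rank content?"),
    12: ("Freedom of assembly and association", "Any analysis of communication patterns?"),
    17: ("Right to property", "Impact on property management decisions?"),
    20: ("Equality before the law", "Are comparable cases treated consistently?"),
    21: ("Non-discrimination", "Direct outcomes or proxy variables tied to protected traits?"),
    24: ("Rights of the child", "Are minors in scope (education, youth care)?"),
    34: ("Social security and social assistance", "Impact on benefits decisions?"),
    41: ("Right to good administration", "Reasoned decisions, access to file, human appeal?"),
    47: ("Access to the courts", "Can affected persons challenge the decision?"),
}
```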
For a deeper look at how public organisations assess these rights in practice, see our post on the FRIA in the boardroom of the public sector.
FRIA Versus DPIA, ALTAI, and IAMA: The Essentials
Many organisations already work with impact assessment instruments. How does the FRIA relate to them?
The DPIA (Data Protection Impact Assessment, Article 35 GDPR) focuses on risks to personal data and privacy. It is mandatory for processing activities with a high privacy risk. The FRIA is broader, covering the full spectrum of fundamental rights. Legally, the FRIA complements the DPIA; in practice, you can combine them in a single integrated document. For a detailed comparison with a practical decision tree, see our DPIA vs FRIA comparison post.
The ALTAI (Assessment List for Trustworthy AI) is a voluntary checklist from the European Commission based on the 2019 Ethics Guidelines for Trustworthy AI. It covers seven requirements including safety, transparency, non-discrimination, and privacy. ALTAI is not a legal requirement but can serve as a useful preparatory tool for a FRIA.
The IAMA (Impact Assessment Mensenrechten en Algoritmes, or Fundamental Rights and Algorithms Impact Assessment) is a Dutch instrument, developed by Utrecht University on behalf of the Ministry of the Interior and Kingdom Relations, for assessing algorithmic decision-making in government. It is fundamental rights-oriented and overlaps significantly with the FRIA. Organisations that already apply the IAMA have a head start in conducting their FRIA.
Step by Step: How to Conduct a FRIA
ECNL and the Danish Institute for Human Rights published a detailed guide in December 2025 that distinguishes five phases. We translate these here into a practical approach.
Phase 1: Preparation and context
Before beginning the fundamental rights analysis, you lay the groundwork. Identify the specific AI system and version you intend to deploy. Describe the intended use precisely as defined by the provider in the system documentation (required under Article 11). Assemble a multidisciplinary team: you need a lawyer who understands fundamental rights, a data scientist who understands how the system works, a policy advisor who knows the context, and -- crucially -- representation from or meaningful engagement with the groups affected by the system.
Phase 2: Context description (Article 27(1)(a)-(c))
Describe the process landscape: in which workflow is the AI system embedded? Who makes the final decisions, and what role does the system play? Are these supporting recommendations or automated decisions? Determine frequency and period of use. Map the groups involved -- who is directly affected, who indirectly? Are there vulnerable groups such as children, the elderly, people with disabilities, those with low educational attainment, migrants?
Phase 3: Fundamental rights analysis (Article 27(1)(d))
This is the heart of the FRIA. For each relevant right from the EU Charter, assess: what are the specific risks of harm? How likely are these risks, and how serious? You draw on the provider's information (Article 13 AI Act), but also on contextual knowledge, literature, and where possible consultation with affected groups.
Be concrete. "Non-discrimination risk present" is not an analysis. Write: "The system was trained on historical credit application data from 2010-2020. During that period, applications from certain postal codes were systematically rejected more often. The system may reproduce this pattern. We have asked the provider to provide a bias analysis and estimate the risk of indirect discrimination as medium."
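One way to enforce that level of concreteness is to give each Phase 3 finding a fixed structure in a risk register. A hypothetical sketch -- the scales and field names are our assumptions, not prescribed by Article 27:

```python
# Hypothetical risk-register entry for a Phase 3 finding. The three-level
# likelihood/severity scale is an assumption, not an Article 27 requirement.
from dataclasses import dataclass

LEVELS = ("low", "medium", "high")

@dataclass
class RiskEntry:
    charter_article: int  # e.g. 21 for non-discrimination
    description: str      # the concrete mechanism of harm
    evidence: str         # provider information, literature, consultation
    likelihood: str       # one of LEVELS
    severity: str         # one of LEVELS
    mitigation: str       # planned measure (feeds Phase 4)

    def __post_init__(self):
        if self.likelihood not in LEVELS or self.severity not in LEVELS:
            raise ValueError("likelihood/severity must be low, medium or high")
```

The credit-data example above would become an entry under Article 21 with the postal-code mechanism as `description`, the requested provider bias analysis as `evidence`, and `likelihood="medium"`.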
Phase 4: Human oversight and mitigation (Article 27(1)(e)-(f))
Describe how human oversight is organised. Which employee reviews the output, with what knowledge, and does that employee have authority to override the recommendation? Document also the mitigation measures per identified risk: technical (e.g. bias testing), organisational (e.g. training for employees working with the system), procedural (e.g. complaint mechanism).
Phase 5: Documentation and notification
Compile the FRIA report with all six elements of Article 27(1) explicitly addressed. Submit it to the market surveillance authority once the official AI Office template is available. Archive the FRIA internally and schedule a periodic review.
For a fully fillable template based on Article 27, see the FRIA template post. Want to get started right away? Use our interactive FRIA generator to build your FRIA report step by step, or download our FRIA template.
The Role of the DPO and Other Stakeholders
The Data Protection Officer (DPO) has a logical role in the FRIA process, but that role is broader than many organisations expect.
DPOs are already familiar with impact assessments through DPIA practice. They understand risk analysis, documentation requirements, and supervisory relationships. But the FRIA demands expertise beyond the privacy domain. Non-discrimination law, administrative law, access to justice -- these are areas where the average DPO needs additional competence.
In practice, three models are emerging. The first places the DPO as process owner who coordinates the FRIA but delegates the fundamental rights analysis to a multidisciplinary team. The second creates a separate AI Ethics Officer or AI Compliance Officer who leads the FRIA, with the DPO as advisor on the privacy dimension. The third -- most common in smaller organisations -- has the DPO conduct the full FRIA, which requires targeted upskilling in fundamental rights beyond privacy.
Beyond the DPO, there are additional stakeholders with roles to play. The AI system provider is required to provide information under Article 13 (transparency information and instructions for use) and Article 11 (full technical documentation). That information is the foundation for your fundamental rights analysis -- actively request it and document in writing what you received.
Works councils or employee representative bodies have co-determination rights when AI deployment affects working conditions or personnel evaluation. Engage them early, not as a formality but as a valuable source of perspective from the operational level.
Consider also consulting the groups affected by the system. Recital 96 of the AI Act recommends this as best practice. A municipality deploying an enforcement algorithm in vulnerable neighbourhoods would do well to involve residents -- or their representatives -- in the FRIA process.
Sector-Specific Considerations
The FRIA is in principle universal, but its content varies significantly by sector.
Government and municipalities deploy systems with direct administrative law consequences. Benefits algorithms, enforcement algorithms, systems for allocating care or housing -- the output touches fundamental social rights. Here the right to good administration (Article 41 Charter) and the right to judicial protection (Article 47) are particularly relevant. Dutch municipalities should also note the IAMA instrument recommended by the VNG (Association of Dutch Municipalities).
Financial sector organisations, as Category 3 deployers, always fall under the FRIA obligation for credit and insurance algorithms. Here non-discrimination is the primary fundamental rights risk: structured and semi-structured lending data contains historical inequalities that AI amplifies. Banks must be able to provide specific bias analyses -- ideally appended to the FRIA as an annex.
Healthcare touches medical decision-making, treatment choices, and access to care. Systems that triage, diagnose, or support treatment plans may fall under Annex III. Here human dignity, non-discrimination, and the right to healthcare (Article 35 Charter) are all relevant.
HR and recruitment is one of the most sensitive application areas. Annex III point 4 covers AI for recruitment, selection, promotion, and dismissal. Employers deploying such systems are FRIA-obligated if classified as public organisations or public service providers. Non-discrimination -- on grounds of gender, age, ethnicity -- is the dominant risk.
Health insurers using AI for risk profiling and premium calculation fall explicitly under Category 3 (Annex III point 5(c)). Here fundamental rights risks relate to unfair premium-setting for vulnerable or chronically ill policyholders.
The Relationship with Conformity Assessment and CE Marking
The FRIA does not stand alone -- it is part of a broader compliance ecosystem. For high-risk AI systems, the AI Act also requires conformity assessment (Article 43) and, for some systems, third-party audit. Providers must compile a technical file, test for accuracy and robustness, and establish a quality management system.
The FRIA is a deployer obligation that runs parallel to provider obligations. You as deployer cannot fulfil the provider's conformity assessment, and the provider's CE marking does not relieve you of the FRIA obligation. The two tracks are complementary: the provider demonstrates that the system was built safely; the deployer demonstrates that the system is deployed safely in the specific context.
A practical point: the technical documentation that providers must maintain under Article 11, and the usage logs under Articles 12 and 26, are your primary sources for the FRIA. Request these documents at the time of procurement or go-live -- it is your right as deployer.
Supervision and Enforcement: What if You Skip the FRIA?
The notification requirement of Article 27(3) implies active supervision. The market surveillance authority receives the completed templates and can verify whether the FRIA was adequately conducted. Different national authorities exercise oversight depending on sector.
The AI Act sets out sanctions in Article 99 for violations of obligations concerning high-risk AI systems. Fines can reach 15 million euros or 3 percent of worldwide annual turnover, whichever is higher, for organisations failing to meet high-risk AI obligations, and up to 35 million euros or 7 percent for violations of the Article 5 prohibitions. The FRIA obligation falls under deployer requirements -- the absence of a FRIA is a compliance risk.
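The "whichever is higher" mechanism for the deployer tier is simple arithmetic. An illustrative sketch (the reduced caps for SMEs under Article 99 are ignored here):

```python
# Illustrative arithmetic for the deployer-tier fine cap: the higher of
# EUR 15 million or 3% of worldwide annual turnover. SME adjustments
# under Article 99 are deliberately left out of this sketch.
def deployer_fine_cap(annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * annual_turnover_eur)
```

For a group with 2 billion euros of turnover, 3 percent (60 million) exceeds the 15 million floor; for a 100 million euro organisation, the 15 million figure governs.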
One notable analytical point from legal observers: the AI Act does not specify a separate sanction specifically for the absence of a FRIA. But a missing FRIA is evidence that an organisation has not adequately assessed its high-risk AI system, which can have broader enforcement consequences. Supervisory authorities can also enforce through administrative orders requiring the FRIA to be completed.
Beyond enforcement, there is another risk: reputational damage and liability. If an AI system causes harm to individuals and you cannot demonstrate that you conducted a FRIA, you stand on weak ground in legal proceedings. The FRIA is not only a compliance instrument -- it is also a risk management instrument.
Common Mistakes in Practice
Organisations beginning FRIA preparation now make several recognisable errors.
Treating the FRIA as a checkbox is the most fundamental mistake. A FRIA completed in two hours by a lawyer without consulting affected staff or communities may formally meet the minimum legal requirements but misses the point entirely. A FRIA must inform, not justify.
Starting too late is a practical mistake. The FRIA requires information from the provider, consultation with stakeholders, and thorough analysis. That takes weeks, not hours. Begin as soon as you are considering an AI system, not a week before go-live.
Narrowing the fundamental rights analysis to privacy is the most substantive mistake. Non-discrimination, access to justice, right to good administration: these are rights that quickly become relevant but are missed by privacy-focused teams.
Underestimating the provider's role is a contractual mistake. You need the provider's information for element (d) of the FRIA. Contractually establish that the provider delivers technical documentation and usage logs on time and in full, and that the provider bears liability if the information provided proves incorrect.
Failing to update the FRIA is a process mistake. Systems are updated, uses change, target groups shift. Build a periodic FRIA review into your standard AI governance processes.
Timeline: When Does Everything Need to Be in Place?
The EU AI Act follows a phased implementation timeline. Obligations for prohibited AI (Article 5) applied from 2 February 2025. Obligations for high-risk AI systems under Article 6(2) -- the category to which Article 27 applies -- apply from 2 August 2026.
In practical terms:
For systems already in use on 2 August 2026, the FRIA obligation applies as of that date. You therefore have until then to complete the FRIA and notify the market surveillance authority.
For systems deployed after 2 August 2026, the FRIA obligation applies at go-live. You conduct the FRIA as part of the implementation process.
Systems already in use before 2 August 2026 but modified after that date fall under the update obligation of Article 27(2): update the FRIA as soon as relevant elements have changed.
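The transition rules above reduce to a single rule of thumb: the FRIA is due at go-live or on 2 August 2026, whichever comes later. A minimal sketch of that logic:

```python
# Hypothetical sketch of the FRIA due-date rule described above:
# systems already in use must have a FRIA by 2 August 2026; systems
# deployed after that date need one at go-live.
from datetime import date

HIGH_RISK_APPLICATION_DATE = date(2026, 8, 2)

def fria_due_date(first_use: date) -> date:
    """Return the date by which the FRIA must be complete."""
    return max(first_use, HIGH_RISK_APPLICATION_DATE)
```

Note that this covers only the initial obligation; the Article 27(2) duty to update whenever a relevant element changes applies continuously after that date.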
If you have not yet begun FRIA preparation for high-risk AI systems currently active in your organisation, there is no time to lose. First, map all systems falling under Annex III. Then determine for which ones the FRIA obligation applies. Then launch a FRIA process per system.
From Compliance to Responsible AI Use
The FRIA is legally required, but the best organisations see it as more than that. They use the fundamental rights assessment as an occasion to ask fundamental questions about the AI systems they deploy: are we the right organisation to be doing this? Do we have sufficient capacity for meaningful human oversight? Are the rights of those affected genuinely protected, or are we engaged in compliance theatre?
Those questions are not always comfortable. But they are exactly the questions the legislator intended to trigger with Article 27. The FRIA is an instrument for reflection, not only for documentation.
Organisations that take the FRIA seriously -- multidisciplinarily, with consultation of affected groups, with concrete measures per identified risk -- build in the process the governance structures that are necessary in the long term for responsible AI use. That is the promise of the fundamental rights assessment: not just reduced risk of sanctions, but genuinely better treatment of the rights of the people you serve.