Responsible AI Platform

FRIA Template: Step-by-Step Guide to Article 27 AI Act


Imagine a municipality deploying an AI system to assess social benefit applications. Or a health insurer considering algorithms for risk profiling on life insurance policies. Before they flip the switch, the EU AI Act demands something fundamental: a rights impact assessment. Not as a box-ticking exercise, but as a serious analysis of what could go wrong for the people affected.

Article 27 of the AI Act introduces the Fundamental Rights Impact Assessment (FRIA). It is a new instrument designed specifically for AI systems, and it goes beyond the familiar DPIA from the GDPR. In this article, we walk through all five paragraphs of Article 27, explain who is affected, and provide a practical template you can start using today.

Who needs to conduct a FRIA?

Not every organisation using AI needs to perform a FRIA. Article 27 targets three specific categories of deployers of high-risk AI systems:

  1. Bodies governed by public law: government agencies, municipalities, executive authorities, and independent administrative bodies. Think tax authorities, employment agencies, or local councils using AI for enforcement.
  2. Private entities providing public services: healthcare providers, educational institutions, housing associations, social service providers. If your private organisation delivers services that affect the public interest, you fall under this category.
  3. Deployers of specific financial AI systems: organisations using AI for creditworthiness assessment, credit scoring, or risk assessment and pricing for life and health insurance (Annex III, point 5(b) and (c)). This category applies regardless of whether you are a public or private organisation.

Important: the obligation does not apply to AI systems used as safety components in the management of critical infrastructure, such as road traffic, water supply, gas, heating, or electricity (Annex III, point 2).

Article 27 paragraph by paragraph

Paragraph 1: The core of the FRIA

The first paragraph is the foundation. Before deploying a high-risk AI system, the organisations listed above must perform an assessment of the impact on fundamental rights. The assessment must consist of six elements:

(a) Process description: a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose.

(b) Period and frequency: a description of the time period and frequency with which the high-risk AI system is intended to be used.

(c) Affected persons and groups: the categories of natural persons and groups likely to be affected by its use in the specific context.

(d) Specific risks: the specific risks of harm likely to impact the persons or groups identified under (c), taking into account the information provided by the provider pursuant to Article 13.

(e) Human oversight: a description of the implementation of human oversight measures, according to the instructions for use.

(f) Measures when risks materialise: the measures to be taken if the risks actually occur, including arrangements for internal governance and complaint mechanisms.
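The six elements above can be sketched as a simple data structure. This is an illustrative sketch only: the field names are our own shorthand for Article 27(1)(a)–(f), and the official AI Office template may organise things differently.

```python
from dataclasses import dataclass

# Illustrative sketch: field names are our own shorthand for the six
# elements of Article 27(1); the official AI Office template may differ.
@dataclass
class FriaRecord:
    process_description: str    # (a) deployer processes using the system
    period_and_frequency: str   # (b) intended time period and frequency
    affected_groups: list       # (c) persons and groups likely affected
    specific_risks: list        # (d) risks of harm to those groups
    human_oversight: str        # (e) oversight measures per instructions for use
    mitigation_measures: str    # (f) measures if risks materialise

# Hypothetical example entry for the municipality scenario above
record = FriaRecord(
    process_description="AI-assisted triage of social benefit applications",
    period_and_frequency="Continuous use from first deployment, reviewed yearly",
    affected_groups=["benefit applicants", "household members"],
    specific_risks=["indirect discrimination via proxy variables"],
    human_oversight="A caseworker reviews every negative recommendation",
    mitigation_measures="Complaint desk, escalation path, quarterly bias audit",
)
print(record.process_description)
```

Capturing the FRIA as structured data rather than free text makes later completeness checks and updates easier to automate.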

Paragraph 2: First use and updates

The obligation applies to the first use of the AI system. In similar cases, you may rely on previously conducted FRIAs or existing impact assessments prepared by the provider. However, once you determine that any of the elements from paragraph 1 has changed or is no longer up to date, you must update the assessment.

In practice, this means a FRIA is not a one-off exercise. It is a living document that evolves alongside changes in usage, context, or the system itself.
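The update trigger in paragraph 2 can be sketched as a simple drift check: re-assess whenever any of the six documented elements no longer matches current practice. The element keys below are our own labels, not official terminology.

```python
# Sketch of the Article 27(2) update trigger: re-assess when any of the
# six documented elements has drifted. Element keys are our own labels.
ELEMENTS = ("process", "period_frequency", "affected_groups",
            "specific_risks", "oversight", "mitigation")

def needs_update(documented: dict, observed: dict) -> bool:
    """True when any Article 27(1) element differs from the last FRIA."""
    return any(documented.get(e) != observed.get(e) for e in ELEMENTS)

last_fria = {"process": "triage of applications",
             "affected_groups": ["applicants"]}
today = {"process": "triage of applications",
         "affected_groups": ["applicants", "minors"]}
print(needs_update(last_fria, today))  # → True: the affected group has widened
```

Running such a check at a fixed cadence (say, quarterly) is one way to operationalise the "living document" requirement.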

Paragraph 3: Notification to the market surveillance authority

After completing the FRIA, you must notify the market surveillance authority of the results. You do this by submitting the completed template (see paragraph 5) as part of the notification. In the cases referred to in Article 46 paragraph 1, deployers may be exempted from this notification obligation.

Paragraph 4: Overlap with the DPIA

This paragraph is particularly relevant for organisations already conducting a Data Protection Impact Assessment (DPIA) under Article 35 GDPR or Article 27 of Directive 2016/680. If you have already completed a DPIA, you do not need to start from scratch. The FRIA complements the existing DPIA.

In practice, this means you can combine both assessments into a single document, as long as you add the AI Act-specific elements (such as fundamental rights risks beyond privacy) to what you already have. This avoids duplicate work and provides a coherent overview of all risks.

Paragraph 5: Template from the AI Office

The AI Office will develop a template in the form of a questionnaire, potentially supported by an automated tool, to help deployers comply with their obligations. At the time of writing, this template has not yet been published. Nevertheless, you can start preparing now. The six elements from paragraph 1 form the backbone of every FRIA.

Practical FRIA template

Based on the legal text, academic research by Mantelero, the guide from ECNL and the Danish Institute for Human Rights, and the ALTAI checklist from the European Commission, you can already build a workable template. Below is a structure you can start using immediately.

Step 1: System identification and process description

Answer the following questions:

  • Which AI system is being deployed? (name, version, provider)
  • In which process will the system be used?
  • What is the intended purpose according to the provider?
  • How does this fit within the broader business processes?
  • Who is the internal responsible person (deployer contact)?

Step 2: Usage period and frequency

  • When will the system first be deployed?
  • How often will the system be used? (continuously, daily, weekly, occasionally)
  • Is there a planned end date, or is usage open-ended?

Step 3: Identify affected persons and groups

  • Which categories of persons are directly affected? (e.g. job applicants, patients, benefit recipients, insured persons)
  • Are vulnerable groups involved? (children, elderly, persons with disabilities, minorities)
  • How large is the potentially affected group?
  • Are there indirect effects on third parties?

Step 4: Risk assessment per fundamental right

Assess the potential impact for each relevant fundamental right from the EU Charter:

  • Human dignity (Art. 1 Charter): could the system reduce people to a score or profile?
  • Non-discrimination (Art. 21): are there risks of bias or unequal treatment?
  • Privacy and data protection (Art. 7-8): what personal data is being processed?
  • Freedom of expression (Art. 11): could the system restrict or censor expression?
  • Right to good administration (Art. 41): will affected persons receive a reasoned decision?
  • Access to justice (Art. 47): can affected persons challenge the outcome?
  • Rights of the child (Art. 24): if minors are involved, how are their interests protected?

Use the information that the provider is required to supply under Article 13 (transparency obligations).
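One way to make Step 4 actionable is a simple scoring aid. The severity and likelihood scales below (1–3) are our own convention for illustration; Article 27 prescribes no scoring method.

```python
# Hypothetical scoring aid for Step 4: severity and likelihood on a 1-3
# scale are our own convention; Article 27 prescribes no scoring method.
def prioritise(assessment: dict) -> list:
    """Return rights scoring severity x likelihood >= 4, highest risk first."""
    scores = {right: sev * lik for right, (sev, lik) in assessment.items()}
    return sorted((r for r, s in scores.items() if s >= 4),
                  key=lambda r: scores[r], reverse=True)

# Example assessment: (severity, likelihood) per Charter right
assessment = {
    "non-discrimination (Art. 21)": (3, 3),
    "privacy and data protection (Art. 7-8)": (2, 3),
    "freedom of expression (Art. 11)": (1, 1),
}
print(prioritise(assessment))
# → ['non-discrimination (Art. 21)', 'privacy and data protection (Art. 7-8)']
```

The output gives you a ranked shortlist of rights whose risks demand mitigation measures in Step 6.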

Step 5: Describe human oversight

  • What human oversight measures have been implemented?
  • Who performs the oversight and with what authority?
  • Can a human override the system's output?
  • How is it ensured that those performing oversight are adequately trained?
  • Which instructions from the provider are being followed?

Step 6: Mitigation measures and governance

  • What measures will be taken if risks materialise?
  • Is there an internal complaint mechanism for affected persons?
  • Who is responsible for internal governance around the AI system?
  • How will the FRIA be periodically reviewed and updated?
  • Is there an escalation procedure for unforeseen effects?

Step 7: Documentation and notification

  • Compile the complete FRIA report
  • Verify that all six elements from Article 27 paragraph 1 have been addressed
  • Submit the completed template to the market surveillance authority (once the official template is available)
  • Archive the FRIA and schedule a reassessment
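The verification step above can be sketched as a pre-notification completeness check: confirm that all six Article 27(1) elements are documented before filing. The key names are our own, pending the official AI Office template.

```python
# Hedged sketch of the Step 7 completeness check. Key names are our own
# shorthand for Article 27(1)(a)-(f), pending the official template.
REQUIRED_ELEMENTS = {
    "process_description",   # (a)
    "period_and_frequency",  # (b)
    "affected_groups",       # (c)
    "specific_risks",        # (d)
    "human_oversight",       # (e)
    "mitigation_measures",   # (f)
}

def missing_elements(fria: dict) -> set:
    """Return the Article 27(1) elements that are still empty or absent."""
    return {key for key in REQUIRED_ELEMENTS if not fria.get(key)}

draft = {"process_description": "AI triage of applications",
         "affected_groups": ["applicants"]}
print(sorted(missing_elements(draft)))
```

An empty result means the draft covers all six elements; anything else points to the sections that still need work before notification.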

The relationship with the DPIA

Many organisations already conduct DPIAs for processing activities with high privacy risk. The FRIA and DPIA overlap partially, but the FRIA goes broader. Where a DPIA focuses on risks to personal data, a FRIA examines the full spectrum of fundamental rights: discrimination, access to justice, freedom of expression, social rights.

The good news: Article 27 paragraph 4 explicitly allows you to combine the FRIA with an existing DPIA. You do not need to create two completely separate documents. Add the fundamental rights analysis to your existing DPIA and you satisfy both obligations.

Why start now?

The obligation to conduct a FRIA takes effect from 2 August 2026 for most high-risk AI systems. That may seem far away, but preparation takes time. You need to set up internal processes, assign responsibilities, and gather the right information from your AI providers.

Moreover, the ECNL/DIHR report demonstrates that a FRIA is more than a compliance checkbox. Done properly, it helps you genuinely understand what your AI systems do to people's rights. That is not only legally required, it is simply good practice.

Summary

Article 27 introduces a specific fundamental rights assessment for AI systems that goes beyond existing instruments. The FRIA requires public organisations, providers of public services, and certain financial institutions to think carefully about the impact of their AI on citizens' rights before deployment. With the template in this article, you can get started today. The official template from the AI Office will follow, but the six elements from the law are already set in stone.