Responsible AI Platform

FRIA for municipalities: public sector guide

10 min read

Municipalities do not usually struggle with the idea that AI can affect rights. They struggle with the moment the abstract legal question turns into an operational one. The procurement is done, the vendor says the system is compliant, the policy team wants to go live, and someone asks the uncomfortable question: have we actually done the FRIA yet?

That question matters because under Article 27, the FRIA is not a provider task. It is a deployer obligation. For municipalities, public bodies, and other public sector teams using high-risk AI, it is one of the clearest moments where the EU AI Act says: the vendor’s file is not enough, you need your own assessment.

When a municipality actually needs a FRIA

The short version is not “always” and not “never.” It depends on two things.

First, the AI system must be a high-risk AI system referred to in Article 6(2), which means the system falls within the Annex III logic rather than the product safety route.

Second, the deployer must fall into one of the categories named in Article 27. That includes bodies governed by public law, private entities providing public services, and deployers of high-risk AI systems referred to in points 5(b) and 5(c) of Annex III.

For municipalities, the first category is the key one. A municipality is a public body. So if it deploys a qualifying Annex III high-risk AI system, the FRIA is in play.

There is one important exclusion written directly into Article 27(1): high-risk AI systems intended to be used in the area listed in Annex III point 2 are excluded from the FRIA obligation. That is the critical infrastructure carve-out.

So the municipality question is never just “are we a public authority?” It is “are we a public authority deploying a qualifying Annex III high-risk AI system outside the Annex III point 2 exception?”
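
If it helps to see that gating logic in one place, here is a deliberately simplified Python sketch. Every name in it is hypothetical, and it paraphrases the legal test rather than encoding it; real classification work belongs with your legal team.

```python
# Illustrative only: this paraphrases the Article 27(1) gate, it does not
# encode it. All names and enum values here are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto

class DeployerType(Enum):
    PUBLIC_BODY = auto()             # e.g. a municipality
    PRIVATE_PUBLIC_SERVICE = auto()  # private entity providing public services
    ANNEX_III_5B_5C = auto()         # deployer under Annex III points 5(b)/5(c)
    OTHER = auto()

@dataclass
class UseCase:
    is_annex_iii_high_risk: bool  # high-risk via Article 6(2) / Annex III route
    annex_iii_point: int          # which Annex III point the system falls under
    deployer: DeployerType

def fria_required(uc: UseCase) -> bool:
    """Rough first-pass check, not legal advice."""
    if not uc.is_annex_iii_high_risk:
        return False  # product safety route systems are outside this gate
    if uc.annex_iii_point == 2:
        return False  # critical infrastructure carve-out in Article 27(1)
    return uc.deployer in {
        DeployerType.PUBLIC_BODY,
        DeployerType.PRIVATE_PUBLIC_SERVICE,
        DeployerType.ANNEX_III_5B_5C,
    }

# A municipality deploying an Annex III point 5 benefits tool:
print(fria_required(UseCase(True, 5, DeployerType.PUBLIC_BODY)))  # True
```

Writing it this way makes the point of the question above visible: all three checks have to pass before Article 27 work begins, and any one of them can end the analysis early.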

If that classification work is still fuzzy, use the risk assessment tool and compare the use case against the categories explained in our high-risk AI systems guide.

Public sector teams should stop treating FRIA as a late-stage form

A FRIA is not the last document before go-live. It is supposed to shape the deployment decision.

Article 27 is explicit on timing: the assessment must be performed prior to deploying the high-risk AI system. In plain English, before first use.

If a municipality starts the FRIA after procurement is locked, after workflows are designed, and after internal ownership has already been assigned, the assessment quickly becomes defensive. The team is no longer asking whether the deployment should change. It is asking how to justify the deployment already chosen.

That is exactly the wrong mindset for public sector AI.

What Article 27 requires in practice

Article 27(1) lists six elements. The best way to read them is not as six legal boxes, but as six operational questions. A sketch pulling the six answers into one record follows the sixth question.

1. In which municipal process will the AI system be used?

The FRIA must describe the deployer’s processes in which the system will be used, in line with the intended purpose.

That means you need a real process description, not a product description.

If the AI system is used in social benefits work, where exactly does it enter the chain? Intake? Prioritisation? Risk scoring? Human review? Escalation? Final decision support?

If the system is used in HR, is it screening applicants, ranking candidates, or supporting interviews?

A FRIA that only repeats vendor marketing language is already weak.

2. How often and for how long will it be used?

Article 27 also requires a description of the period and frequency of use.

This sounds administrative, but it is not trivial. A tool used once a month in a pilot has a different risk profile from a system used daily at scale in municipal operations.

3. Which people and groups are likely to be affected?

This is where many municipalities get too generic. “Residents” is not enough.

The FRIA should identify affected categories concretely. Benefit applicants, job applicants, parents, students, people in debt support trajectories, residents in vulnerable neighbourhoods, or municipal employees can all be affected in different ways.

And indirect impact matters too. If a system is used to prioritise cases, the people whose cases are deprioritised may be just as affected as the people actively flagged.

4. What are the specific risks of harm?

This is the core analytical step. Article 27(1)(d) requires an assessment of the specific risks of harm likely to affect the identified persons or groups, taking into account the provider information supplied under Article 13.

That means the municipality should not guess blindly, but it also cannot stop at the provider’s paperwork. Provider documentation is input, not a substitute for context-specific analysis.

For public sector teams, the rights lens is usually broader than privacy alone. Non-discrimination, access to services, human dignity, due process, and access to remedy often matter just as much as data protection.

5. How is human oversight actually implemented?

Article 27 asks for a description of human oversight measures in line with the instructions for use.

This is where Article 14 and our human oversight guide become highly relevant. Public sector teams should name who reviews outputs, what competence those people have, when they can override the system, and what happens if they disagree with it.

A municipal FRIA becomes flimsy very quickly if “human oversight” is written as a generic sentence rather than a real workflow.

6. What happens if risks materialise?

Article 27(1)(f) requires measures to be taken if risks materialise, including internal governance arrangements and complaint mechanisms.

This is where public bodies often expose whether they are serious. If a resident wants to challenge an AI-supported outcome, is there a route? If the system shows subgroup bias after deployment, who acts? If the provider’s assumptions no longer fit reality, who pauses the system?

Without those answers, the FRIA is not really finished.
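
Taken together, the six answers form the skeleton of the assessment. As a purely illustrative way to see them as one record, here is a minimal Python sketch; the field names are our own labels for the Article 27(1) elements, not official terminology.

```python
# Hypothetical record structure; one field per Article 27(1) element.
from dataclasses import dataclass, field, fields

@dataclass
class FriaRecord:
    process_description: str = ""            # (a) municipal process and intended purpose
    period_and_frequency: str = ""           # (b) how often and for how long
    affected_groups: list[str] = field(default_factory=list)  # (c) persons and groups
    risks_of_harm: list[str] = field(default_factory=list)    # (d) specific risks, informed by Article 13 info
    human_oversight: str = ""                # (e) oversight per the instructions for use
    measures_if_risks_materialise: str = ""  # (f) governance and complaint mechanisms

def missing_elements(record: FriaRecord) -> list[str]:
    """List any of the six elements still left empty."""
    return [f.name for f in fields(record) if not getattr(record, f.name)]
```

The value of a structure like this is the `missing_elements` check at the end: an assessment with empty fields is, in Article 27 terms, not finished.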

FRIA and DPIA are not the same, but they should speak to each other

Public sector teams often ask whether the FRIA replaces the DPIA. It does not.

Article 27(4) says that where obligations are already met through a GDPR Article 35 DPIA, the FRIA complements that DPIA.

That is a useful legal instruction. Privacy is not the whole public sector rights picture, but it is usually part of it. So the practical move is to connect the two assessments instead of running them in different silos.

If your municipality already has a solid DPIA process, build the FRIA around it. Then expand beyond privacy into discrimination, procedural fairness, accessibility, explainability, human oversight, and complaint pathways. The DPIA vs FRIA comparison is useful if your team is still mixing the two up.
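
One hedged illustration of what "complement, do not duplicate" can look like in a register: a single entry that points at both assessments and names the rights topics the FRIA adds beyond privacy. All names here are hypothetical.

```python
# Hypothetical register entry linking the two assessments (Article 27(4)):
# the FRIA complements the DPIA instead of duplicating it. Python 3.10+.
from dataclasses import dataclass

@dataclass
class LinkedAssessments:
    system_name: str
    dpia_reference: str | None        # ID of the GDPR Article 35 DPIA, if any
    fria_reference: str               # ID of the Article 27 FRIA
    beyond_privacy_topics: list[str]  # e.g. discrimination, due process, remedy

entry = LinkedAssessments(
    system_name="benefits-prioritisation-pilot",
    dpia_reference="DPIA-2025-014",
    fria_reference="FRIA-2025-003",
    beyond_privacy_topics=["non-discrimination", "due process", "remedy"],
)
```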

Typical municipal use cases that deserve immediate review

Not every AI tool in local government is automatically high-risk. But some categories should make your team sit up straight.

Access to essential public services

If a municipality uses AI in processes affecting access to essential public services or benefits, Annex III analysis should happen early. This is one of the clearest zones where fundamental rights risk is real.

HR and recruitment

Municipal recruitment, candidate ranking, or workforce management tools can fall within the employment-related high-risk category. Public bodies sometimes forget that “internal” HR use can still trigger major AI Act duties.

Education and youth-related decision support

Where municipal functions intersect with education allocation, assessment, or youth services, the rights analysis should be careful and concrete, especially where minors or vulnerable groups are involved.

Public order, enforcement, and risk scoring

Anything that looks like risk classification, prioritisation of enforcement action, or profiling in a public authority setting deserves immediate legal and rights scrutiny.

In all these cases, the Annex III classification and the FRIA should be examined before operational enthusiasm outruns legal discipline.

A practical FRIA workflow for municipalities

If you want a workable municipal process, keep it simple and serious.

  1. Classify the use case. Confirm whether the AI system is high-risk under Article 6 and Annex III.
  2. Request provider documentation early. Ask for the Article 13 information, intended purpose, known limitations, testing evidence, and required oversight measures.
  3. Describe the municipal workflow. Map where the system enters the process and where humans intervene.
  4. Identify affected groups and risks. Do not stop at privacy. Assess discrimination, access, due process, and practical harm.
  5. Connect FRIA and DPIA. Where personal data is involved, the assessments should reinforce each other.
  6. Define governance and challenge routes. Decide who owns the system, who pauses it, who handles complaints, and how updates are reviewed.
  7. Use a real template. Our FRIA generator and FRIA template can structure the work instead of forcing the team to improvise.
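
As a final illustration, a municipal register could enforce the Article 27 timing rule mechanically: no go-live while workflow steps are open. The step names below simply mirror the list above; everything else is a hypothetical sketch.

```python
# Hypothetical go-live gate: deployment stays blocked while steps are open.
WORKFLOW = [
    "classify_use_case",
    "collect_provider_documentation",
    "describe_municipal_workflow",
    "identify_groups_and_risks",
    "connect_fria_and_dpia",
    "define_governance_and_challenge_routes",
    "complete_fria_record",
]

def ready_for_go_live(status: dict[str, bool]) -> tuple[bool, list[str]]:
    """Article 27 timing in code form: the FRIA work precedes first use."""
    open_steps = [step for step in WORKFLOW if not status.get(step, False)]
    return (not open_steps, open_steps)

ok, todo = ready_for_go_live({"classify_use_case": True})
if not ok:
    print("Do not deploy yet. Open steps:", todo)
```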

Where municipalities usually get this wrong

The first mistake is treating the FRIA as vendor paperwork. It is not. The provider can support it, but the deployer owns it.

The second mistake is starting too late. A FRIA begun after every substantive implementation choice has already been made will mostly produce compliance theatre.

The third mistake is reducing the analysis to privacy only. For public sector deployments, rights such as non-discrimination, fair treatment, and effective remedy are often just as important.

The fourth mistake is keeping human oversight vague. If nobody can explain who overrides the system, then oversight is probably not real.

The fifth mistake is forgetting that complaints and governance matter after go-live, not only before it.

Where to go next

If your team needs the broader legal background, start with our complete FRIA guide. If you need a more operational starting point, use the FRIA generator. If you want to connect the work to a municipal governance context, the post on FRIA in the public sector boardroom is a useful bridge.

And if you are still at the earlier stage of figuring out whether your use case is even high-risk, do that first. A messy FRIA often starts with a messy classification exercise.
