Responsible AI Platform
πŸ›οΈGovernment

The Story of Municipality Veiligwaard

How a municipality scrutinized its fraud detection algorithm – and discovered that well-intentioned doesn't always mean fair

Fictional scenario – based on realistic situations

01

The Trigger

How it started

📧

Municipality Veiligwaard had been using an algorithm for years to detect benefits fraud. The system generated risk scores and determined who got investigated. Nobody had ever asked: is this fair?

The Dutch childcare benefits scandal had been a wake-up call for the Netherlands. Algorithms everywhere were being scrutinized. And when the municipality analyzed its own system, disturbing patterns emerged.

"We wanted to catch fraudsters. Instead, we caught the wrong people."
02

The Questions

What did they need to find out?

Question 1

How does the algorithm determine who is high-risk?

The team asked the vendor for an explanation. The answer was vague: "A combination of factors." Which factors? "That's proprietary." The municipality realized they were using a black box for decisions that affected lives.

💡 The insight

The algorithm turned out to work with indicators that indirectly discriminated. Postcodes with many social housing units got higher scores. Certain nationalities were weighted as "risk factors." This was never explicitly intended – but it was the result.
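One way to surface this kind of indirect discrimination is to measure how strongly each input feature is associated with a protected attribute that is not itself used by the model. The sketch below is a minimal Python illustration with invented column names (postcode, nationality), using Cramér's V as the association measure; it is not the vendor's algorithm, just one way to spot proxy variables.

```python
import numpy as np
import pandas as pd

def cramers_v(feature: pd.Series, protected: pd.Series) -> float:
    """Association strength (0..1) between a candidate input feature and a
    protected attribute. High values signal a potential proxy variable."""
    table = pd.crosstab(feature, protected).to_numpy().astype(float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Hypothetical case file: postcode is a model input, nationality is not.
cases = pd.DataFrame({
    "postcode":    ["1011", "1011", "1012", "1013", "1013", "1013"],
    "nationality": ["NL",   "other", "NL",  "other", "other", "NL"],
})

score = cramers_v(cases["postcode"], cases["nationality"])
print(f"postcode <-> nationality association: {score:.2f}")
# A value close to 1 means postcode largely encodes nationality, so removing
# nationality from the input does not remove the discrimination risk.
```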

🌍 Why this matters

The Dutch Data Protection Authority has sanctioned government bodies for using discriminatory risk profiles. The AI Act explicitly prohibits "social scoring" by governments. The line between permitted fraud detection and prohibited social scoring turned out to be thinner than expected.

Question 2

Are certain groups systematically checked more often?

The municipality analyzed three years of control data. The results were shocking: citizens with a migration background were checked 4x more often than others, while the rate of actually confirmed fraud showed no difference between the groups.
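The comparison the team made can be expressed in a few lines. The sketch below uses invented aggregate numbers and assumed column names; the point is the two rates it computes: how often each group is selected for a check, and how often a check actually confirms fraud.

```python
import pandas as pd

# Hypothetical aggregate of three years of control data (all numbers invented).
controls = pd.DataFrame({
    "group":              ["no migration background", "migration background"],
    "benefit_recipients": [8000, 2000],
    "checks_performed":   [ 400,  400],   # 5% vs 20%: a 4x difference
    "fraud_confirmed":    [  40,   40],   # identical yield per check
}).set_index("group")

controls["check_rate"] = controls["checks_performed"] / controls["benefit_recipients"]
controls["hit_rate"]   = controls["fraud_confirmed"] / controls["checks_performed"]
print(controls[["check_rate", "hit_rate"]])
# If the hit rate per check is the same while the check rate differs 4x,
# the extra checks are driven by the risk model, not by more actual fraud.
```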

💡 The insight

The algorithm had adopted historical bias. Because certain groups had been checked more often in the past, there were more "hits" in those groups – which the algorithm interpreted as higher risk. A vicious cycle of discrimination.
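A small simulation makes that cycle visible: in the sketch below, two groups have exactly the same true fraud rate, but one is checked four times as often, so it generates roughly four times as many recorded "hits". All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000                              # citizens per group
true_fraud_rate = 0.02                   # identical for both groups
check_rate = {"A": 0.04, "B": 0.16}      # group B is checked 4x as often

for group, rate in check_rate.items():
    fraud = rng.random(n) < true_fraud_rate   # actual (unobserved) fraud
    checked = rng.random(n) < rate            # who gets investigated
    recorded_hits = (fraud & checked).sum()   # only checked fraud is recorded
    print(f"group {group}: recorded hits = {recorded_hits}")

# Group B produces roughly 4x as many recorded hits, purely because it is
# checked more often. A model trained on recorded hits concludes that group B
# is "higher risk" and selects it even more: the vicious cycle.
```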

🌍 Why this matters

This pattern is not unique. The Dutch government's SyRI system was banned by the court for similar reasons. The lesson: historical data reflects historical inequalities. An algorithm trained on that data reproduces those inequalities.

Question 3

What are our obligations as a government?

The legal department dove into the AI Act. As a government agency, the municipality had extra obligations. Social scoring was explicitly prohibited. Risk profiling for access to benefits fell into the high-risk category. And there were specific requirements for fundamental rights impact assessments.

💡 The insight

Governments have a special position under the AI Act. They can't just purchase and use a system. They must actively ensure that the system doesn't discriminate, that it is transparent, and that citizens can exercise their rights. The responsibility lay not with the vendor – but with the municipality itself.

🌍 Why this matters

Article 27 of the AI Act requires public bodies (and certain other deployers) of high-risk AI to conduct a Fundamental Rights Impact Assessment. For governments this is especially critical: they make decisions that directly affect citizens, often without citizens being able to switch to an alternative.

Question 4

Can we even continue using this system?

The municipal executive faced a dilemma. Stopping fraud detection wasn't an option – the municipality had a duty to protect public funds. But continuing with a discriminatory system wasn't an option either.

💡 The insight

The solution wasn't in stopping or continuing, but in rebuilding. The team decided to redesign the system with fairness as a core principle. No more protected characteristics as input. Regular bias audits. Full transparency about how it works. And human oversight in every decision.

🌍 Why this matters

Multiple municipalities have suspended their fraud systems after criticism. But suspending doesn't solve the underlying problem. The challenge is: how do you build a system that is effective and fair? That requires conscious choices about what data you do and don't use.

03

The Journey

Step by step to compliance

Step 1 of 6
⚠️

The wake-up call

A council member asked critical questions about the fraud detection system. How does it work? Who gets checked? The alderman couldn't answer. That was the start of an internal investigation.

Step 2 of 6
📊

The data analysis

The team analyzed three years of control data. The patterns that emerged were uncomfortable: systematic overrepresentation of certain postcodes and backgrounds.

Step 3 of 6
💬

The difficult conversation

The findings were presented to the executive. Reactions ranged from disbelief to shame. Nobody had wanted this – but it had happened.

Step 4 of 6
⚖️

The legal analysis

What did this mean under the AI Act? The team mapped the obligations. High-risk classification. Social scoring prohibition. FRIA requirement. The conclusion was clear: the current system didn't comply.

Step 5 of 6
🔧

The redesign

Instead of throwing away the system, it was rebuilt. Protected characteristics were removed from input. Proxy variables (like postcode) were critically evaluated. The goal shifted from "finding fraudsters" to "checking fairly".
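A minimal sketch of what that feature step can look like in code, with assumed feature names and an illustrative threshold: protected characteristics are excluded outright, and remaining features are only admitted if they do not score as strong proxies (for example using the association measure sketched earlier). This is not the municipality's actual configuration.

```python
# Illustrative feature governance for the rebuilt model; all names are assumed.
PROTECTED = {"nationality", "birth_country", "gender"}
PROXY_THRESHOLD = 0.3   # illustrative cut-off for a proxy score such as Cramér's V

def select_features(candidates: list[str], proxy_scores: dict[str, float]) -> list[str]:
    """Keep only features that are neither protected attributes nor strong
    proxies for them (proxy_scores: feature name -> association score)."""
    allowed = []
    for name in candidates:
        if name in PROTECTED:
            continue                                   # never used as model input
        if proxy_scores.get(name, 0.0) >= PROXY_THRESHOLD:
            print(f"excluded as likely proxy: {name}")
            continue                                   # e.g. postcode
        allowed.append(name)
    return allowed

old_inputs = ["nationality", "postcode", "income", "household_size"]
print(select_features(old_inputs, proxy_scores={"postcode": 0.6, "income": 0.1}))
# -> excludes nationality (protected) and postcode (proxy); keeps the rest
```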

Step 6 of 6
🔍

The bias audit

An external party conducted an independent audit. Were the new models fair? Initial results were encouraging – but the team knew: this shouldn't be a one-time check, but an ongoing process.
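One way to make it an ongoing process is to script the check so it runs on every new batch of risk flags. The sketch below uses assumed column names; it compares selection rates across groups and raises an alert when the ratio between the least- and most-selected group falls below a chosen limit. The 0.8 threshold is a common rule of thumb borrowed from employment-discrimination practice, not something the AI Act prescribes.

```python
import pandas as pd

DISPARITY_THRESHOLD = 0.8   # illustrative "four-fifths" style rule of thumb

def audit_selection_rates(decisions: pd.DataFrame) -> bool:
    """decisions needs a 'group' column and a boolean 'flagged' column.
    Returns True if the audit passes, False if disparity is too large."""
    rates = decisions.groupby("group")["flagged"].mean()
    ratio = rates.min() / rates.max()
    print(rates.to_string(), f"\nmin/max ratio: {ratio:.2f}")
    return ratio >= DISPARITY_THRESHOLD

# Hypothetical batch of one month of risk flags.
batch = pd.DataFrame({
    "group":   ["A"] * 50 + ["B"] * 50,
    "flagged": [True] * 5 + [False] * 45 + [True] * 9 + [False] * 41,
})
if not audit_selection_rates(batch):
    print("ALERT: selection-rate disparity exceeds the agreed limit; human review required")
```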

04

The Obstacles

What went wrong?

Obstacle 1

✗ Challenge

The vendor didn't want to fully share how the algorithm worked

↓

✓ Solution

Contractually stipulate that full transparency is a condition for cooperation. If the vendor refuses, switch to a different solution.

Obstacle 2

✗ Challenge

Some staff felt that the extra checkpoints slowed their work down

↓

✓ Solution

Explain that the municipality had gotten into trouble before precisely because of a lack of oversight. The extra time was an investment in trust.

Obstacle 3

✗ Challenge

There was resistance to publishing the algorithm ("fraudsters will learn from it")

↓

✓ Solution

Transparency about methodology doesn't have to mean transparency about specific signals. You can explain how the system works without sharing exact thresholds.

"We thought we were efficient. We were mainly unfair. Rebuilding our system wasn't just a legal obligation – it was a moral necessity."
– Jan de Vries, Alderman for Social Affairs
05

The Lessons

What can we learn from this?

Lesson 1 / 4
🎯

Always ask: who is affected?

Algorithms are not neutral. They reflect the choices of their makers and the patterns in their data. For every system, ask: who suffers if this goes wrong?

Lesson 2 / 4
📊

Historical data contains historical bias

If certain groups were checked more often in the past, an algorithm will continue that pattern. Critical evaluation of training data is essential.

Lesson 3 / 4
🔍

Transparency is not a luxury

Citizens have the right to know how decisions about them are made. Governments have the duty to explain.

Lesson 4 / 4
👁️

Human oversight is not optional

An algorithm may flag, but a human must decide. Especially for decisions that affect fundamental rights.

Does your organization use AI for decisions about citizens?

Discover what obligations the AI Act places on governments and public organizations.