
The Story of Municipality Jeugdveilig

How a municipality scrutinized its "preventive" risk model and discovered it was making vulnerable families even more vulnerable

Fictional scenario, based on realistic situations

01

The Trigger

How it started

📧

Municipality Jeugdveilig had an ambitious goal: early detection of problems in families, so help could arrive before situations escalated. The JeugdSignaal system combined data from various sources to calculate risk scores. It seemed to work, until someone asked the question: is it working fairly?

The system was popular with neighborhood teams. It provided direction. It felt objective. But behind the scenes, a pattern was unfolding that nobody had foreseen: families in certain neighborhoods were systematically scored higher, regardless of their actual situation.

"We wanted to protect children. We stigmatized entire neighborhoods."
02

The Questions

What did they need to find out?

Question 1

What data feeds the model, and is that data neutral?

The team inventoried all data sources. Debt registrations came from the credit bureau. School absenteeism from truancy officers. Police contacts from law enforcement databases. It seemed like objective information. But was it?

💡 The insight

Each data source carried its own bias. Debts were more often registered for people who didn't have access to informal loans. School absenteeism was reported more strictly in schools with fewer resources. Police contacts reflected where police patrolled, not where problems were. The "objective" data was a mirror of existing inequalities.

🌍 Why this matters

Researchers call this "embedded bias": the prejudices already present in the data before the model ever processes it. An algorithm that learns from unequal data reproduces that inequality and lends it an appearance of objectivity.
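
To see the mechanism, here is a minimal, purely hypothetical Python sketch. It uses numpy and scikit-learn and invented rates; it illustrates the principle, not the JeugdSignaal model. Two neighborhoods have the same share of families that need help, but one is monitored more closely, so it generates more recorded signals and interventions, and a model trained on those interventions scores it higher.

# Hypothetical illustration, not the actual JeugdSignaal model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
neighborhood = rng.integers(0, 2, n)          # 0 = area A, 1 = area B
true_need = rng.random(n) < 0.10              # identical 10% in both areas

# "Signals" (police contact, reported absence) depend on how closely an
# area is watched, not only on true need: B is monitored twice as much.
surveillance = np.where(neighborhood == 1, 0.6, 0.3)
signal = rng.random(n) < surveillance * (0.3 + 0.7 * true_need)

# Historical interventions follow the signals, i.e. they follow who was seen.
intervened = signal & (rng.random(n) < 0.8)

X = np.column_stack([neighborhood, signal]).astype(float)
scores = LogisticRegression().fit(X, intervened).predict_proba(X)[:, 1]

for area, name in ((0, "A"), (1, "B")):
    mask = neighborhood == area
    print(f"area {name}: true need {true_need[mask].mean():.2f}, "
          f"mean risk score {scores[mask].mean():.2f}")

In this toy setup the underlying need is identical in both areas, yet the learned scores for area B come out roughly twice as high, purely because more of B was seen.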

Question 2

What does a "risk score" actually mean?

Neighborhood team workers used the scores daily. But what did a score of 0.7 actually mean? The team interviewed colleagues. The answers varied: "70% chance of problems", "quite concerning", "probably something going on". Nobody really knew.

💡 The insight

A risk score is not a prediction; it is pattern recognition. The model saw characteristics that were historically associated with child welfare interventions. But those historical interventions were themselves the result of who was monitored, not who actually needed help. A self-reinforcing cycle.
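
The loop can be shown with simple arithmetic. Below is a deterministic, hypothetical Python sketch; the budget, rates, and starting share are invented for the illustration. When interventions can only be recorded where home visits already take place, an allocation "retrained" on last year's records simply reproduces last year's monitoring pattern, however equal the real need is.

# Hypothetical numbers, not drawn from the JeugdSignaal case.
true_need = 0.10        # identical share of families needing help in A and B
follow_up_rate = 0.05   # precautionary follow-ups recorded per visit
budget = 1_000          # fixed number of home visits per round
share_B = 0.55          # B starts with a slightly larger share (old bias)

for round_ in range(1, 6):
    visits = {"A": budget * (1 - share_B), "B": budget * share_B}
    # Interventions can only be recorded where a visit actually happened,
    # so the record is proportional to visits, not to need alone.
    recorded = {area: v * (true_need + follow_up_rate) for area, v in visits.items()}
    # "Retraining": next round's visits follow last round's recorded interventions.
    share_B = recorded["B"] / (recorded["A"] + recorded["B"])
    print(f"round {round_}: B gets {share_B:.0%} of visits (equal need would give 50%)")

The biased split never corrects itself, because the new labels only ever come from where the visits already went. If heightened attention also made teams record precautionary follow-ups slightly more often in the over-monitored area, the share would grow each round instead of merely persisting.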

🌍 Why this matters

Predictive policing and social risk scores are internationally controversial. In the US, similar systems for child protection came under fire when it turned out that Black families systematically scored higher. The AI Act classifies such systems as high-risk for a reason.

Question 3

Who has access to this information and what do they do with it?

The risk scores were widely shared: neighborhood teams, child protection workers, sometimes even schools. But was there any oversight of what happened with those scores? The team did an audit. The findings were disturbing.

💡 The insight

In some cases, home visits were scheduled purely on the basis of the score, without any further cause. Families didn't know they were on the municipality's radar. There was no objection procedure. And once labeled as a "risk", that status often remained in systems for years.

🌍 Why this matters

Both the GDPR and the AI Act require that people be informed when AI systems influence decisions about them. The right to object is fundamental. But in practice, many citizens don't even know they're being assessed by algorithms.

Question 4

Is preventive intervention based on predictions even ethical?

This was the hardest question. The system was built with the best intentions: protecting children before it was too late. But where is the line between prevention and surveillance? Between offering help and stigmatizing?

💡 The insight

The team realized that "prevention" had become a euphemism for "monitoring without consent". Real prevention would mean: investing in neighborhoods, providing broad support, lowering barriers to asking for help. Not: making lists of "risk cases" and waiting until you have reason to intervene.

🌍 Why this matters

The discussion about predictive social services touches on fundamental questions about the relationship between government and citizen. Can a government that "helps" you based on algorithms still be trusted? The AI Act tries to set boundaries here, but the ethical questions go deeper than legislation.

03

The Journey

Step by step to compliance

Step 1 of 6
📰

The critical question

An investigative journalist requested access to the algorithm under freedom of information laws. The municipality couldn't answer basic questions about how the system worked. That was the starting signal for internal investigation.

Step 2 of 6
🔍

The data audit

An external agency analyzed the data sources. The conclusion: each source carried significant bias. Neighborhoods with more police surveillance had more "signals", not more problems.

Step 3 of 6
📊

The impact analysis

The team investigated what had happened to families that scored high. In 60% of cases, no intervention had been needed. But the "risk family" label had certainly had consequences.

Step 4 of 6
💬

Conversations with affected families

The municipality organized conversations with families that had been flagged by the system. Their experiences were sometimes traumatic: unexpected home visits, the feeling of being constantly watched, shame toward neighbors.

Step 5 of 6
⚖️

The ethical reflection

The municipal executive convened an ethics committee with external experts, experts by experience, and human rights organizations. The question: can we even deploy this system responsibly?

Step 6 of 6
🛑

The decision

After months of research, the executive made a courageous decision: the system wouldn't be repaired, but discontinued. The approach would have to be fundamentally different.

04

The Obstacles

What went wrong?

Obstacle 1

✗ Challenge

Neighborhood teams wanted to keep the system: it provided guidance in complex work

↓

✓ Solution

Investing in better training and support for professional judgment, instead of leaning on algorithms.

Obstacle 2

✗ Challenge

The data had been collected and shared for years; privacy had already been violated

↓

✓ Solution

Systematically deleting old data where there was no legal retention requirement. Being transparent about what had happened.

Obstacle 3

✗ Challenge

The public debate was polarized: for or against technology

↓

✓ Solution

Bringing nuance: the problem wasn't technology, but how it was deployed. The municipality took responsibility for the choices that had been made.

"We thought we were ahead with data-driven policy. We were mainly ahead in classifying our own citizens. Stopping the system wasn't a step back; it was the only step forward."
– Sandra Bakker, Alderman for Youth and Care
05

The Lessons

What can we learn from this?

Lesson 1 / 4
📊

Data is not neutral

Every dataset carries the biases of how, where, and by whom the data was collected. "Objective" data doesn't exist.

Lesson 2 / 4
⚠️

Risk scores create risks

Labeling people as a "risk" has consequences of its own. Surveillance is not neutral; it changes the relationship between government and citizen.

Lesson 3 / 4
🛡️

Prevention is not surveillance

Real prevention means investing in support for everyone, not monitoring who "probably" needs help.

Lesson 4 / 4
🛑

Sometimes stopping is the best option

Not every system can be repaired. Sometimes the fundamental approach is wrong and rebuilding is better than muddling through.

Is AI being used in your organization for decisions about vulnerable groups?

Learn what extra safeguards the AI Act requires for systems that can affect fundamental rights.