The Story of Municipality Jeugdveilig
How a municipality scrutinized its "preventive" risk model and discovered it was making vulnerable families even more vulnerable
Fictional scenario, based on realistic situations
The Trigger
How it started
Municipality Jeugdveilig had an ambitious goal: early detection of problems in families, so that help could arrive before situations escalated. The JeugdSignaal system combined data from various sources to calculate risk scores. It seemed to work, until someone asked: is it working fairly?
The system was popular with neighborhood teams. It provided direction. It felt objective. But behind the scenes, a pattern was unfolding that nobody had foreseen: families in certain neighborhoods were systematically scored higher, regardless of their actual situation.
"We wanted to protect children. We stigmatized entire neighborhoods."
The Questions
What did they need to find out?
What data feeds the model, and is that data neutral?
The team inventoried all data sources. Debt registrations came from the credit bureau. School absenteeism from truancy officers. Police contacts from law enforcement databases. It seemed like objective information. But was it?
The insight
Each data source carried its own bias. Debts were more often registered for people who didn't have access to informal loans. School absenteeism was reported more strictly in schools with fewer resources. Police contacts reflected where police patrolled, not where problems were. The "objective" data was a mirror of existing inequalities.
Why this matters
Researchers call this "embedded bias": prejudice that is already present in the data before the model ever processes it. An algorithm that learns from unequal data reproduces that inequality and gives it an appearance of objectivity.
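To make that mechanism concrete, here is a minimal Python sketch with synthetic data and invented rates (not the JeugdSignaal model): two neighborhoods have identical underlying need, but one is watched more closely, so its problems are recorded more often, and a model trained on those records learns to treat the neighborhood itself as a risk factor.

```python
# Synthetic illustration only - the data, rates, and model below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

neighborhood = rng.integers(0, 2, size=n)    # 0 = neighborhood A, 1 = neighborhood B
true_need = rng.random(n) < 0.10             # identical 10% underlying need in both

# What gets *recorded* depends on how closely each neighborhood is watched:
# 30% of real cases are registered in A, 80% in B.
detection = np.where(neighborhood == 1, 0.8, 0.3)
recorded_signal = true_need & (rng.random(n) < detection)

# Train on the recorded labels, with neighborhood as the only feature.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), recorded_signal)

print("score for a family in A:", round(model.predict_proba([[0]])[0, 1], 3))
print("score for a family in B:", round(model.predict_proba([[1]])[0, 1], 3))
# B scores roughly 2-3 times higher than A, although actual need is identical.
```

Nothing about the families differs here; the gap the model learns comes entirely from the difference in detection rates.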
What does a "risk score" actually mean?
Neighborhood team workers used the scores daily. But what did a score of 0.7 actually mean? The team interviewed colleagues. The answers varied: "70% chance of problems", "quite concerning", "probably something going on". Nobody really knew.
The insight
A risk score is not a prediction; it is pattern recognition. The model saw characteristics that were historically associated with child welfare interventions. But those historical interventions were themselves the result of who was monitored, not who actually needed help. A self-reinforcing cycle.
Why this matters
Predictive policing and social risk scores are internationally controversial. In the US, similar systems for child protection came under fire when it turned out that Black families systematically scored higher. The AI Act classifies such systems as high-risk for a reason.
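The interpretation problem can also be written out. In a toy decomposition with invented numbers (not measured values from JeugdSignaal), the score the model is calibrated to is the probability that an intervention was recorded, which factors into actual need multiplied by the chance that the need was noticed at all:

```python
# Toy decomposition, invented numbers: what the score actually tracks.
#   score ~ P(intervention recorded) = P(actual need) * P(recorded | need)
p_need = 0.10                                   # same underlying need everywhere

p_recorded_given_need = {
    "lightly monitored group": 0.3,
    "heavily monitored group": 0.8,
}

for group, p_recorded in p_recorded_given_need.items():
    score = p_need * p_recorded
    print(f"{group}: score ~ {score:.2f} (actual need: {p_need:.2f})")

# The number reflects how closely a group was watched, not how much help it
# needed, so a score of 0.7 cannot be read as "70% chance of problems".
```

And if the scores then decide who gets monitored next round, the disparity feeds itself: that is the self-reinforcing cycle described above.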
Who has access to this information and what do they do with it?
The risk scores were widely shared: neighborhood teams, child protection workers, sometimes even schools. But was there any oversight of what was done with those scores? The team did an audit. The findings were disturbing.
The insight
In some cases, home visits were scheduled purely on the basis of the score, without any further cause. Families didn't know they were on the radar. There was no objection procedure. And once labeled a "risk", that status often remained in the systems for years.
Why this matters
Both the GDPR and the AI Act require that people be informed when AI systems influence decisions about them. The right to object is fundamental. But in practice, many citizens don't even know they're being assessed by algorithms.
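As an illustration of what the audit found missing, the sketch below shows one possible shape for such safeguards: every read of a score is logged with who accessed it and why, and a label expires instead of lingering for years. All names and fields are hypothetical; this is neither an existing system nor a legal checklist.

```python
# Hypothetical sketch of missing controls - illustrative only.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class RiskLabel:
    family_id: str
    score: float
    expires: date                              # no indefinite "risk family" status
    access_log: list = field(default_factory=list)

    def read(self, user: str, purpose: str) -> float:
        if date.today() > self.expires:
            raise PermissionError("Label expired: review or delete before use.")
        # Every access leaves an auditable trail: who looked, when, on what ground.
        self.access_log.append((date.today().isoformat(), user, purpose))
        return self.score

label = RiskLabel("family-123", 0.7, expires=date.today() + timedelta(days=365))
label.read(user="neighborhood-team-07", purpose="scheduled case review")
print(label.access_log)
```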
Is preventive intervention based on predictions even ethical?
This was the hardest question. The system was built with the best intentions: protecting children before it was too late. But where is the line between prevention and surveillance? Between offering help and stigmatizing?
The insight
The team realized that "prevention" had become a euphemism for "monitoring without consent". Real prevention would mean: investing in neighborhoods, providing broad support, lowering barriers to asking for help. Not: making lists of "risk cases" and waiting until you have reason to intervene.
Why this matters
The discussion about predictive social services touches on fundamental questions about the relationship between government and citizen. Can a government that "helps" you based on algorithms still be trusted? The AI Act tries to set boundaries here, but the ethical questions go deeper than legislation.
The Journey
Step by step to compliance
The critical question
An investigative journalist requested access to the algorithm under freedom of information laws. The municipality couldn't answer basic questions about how the system worked. That was the starting signal for an internal investigation.
The data audit
An external agency analyzed the data sources. The conclusion: each source carried significant bias. Neighborhoods with more police surveillance had more "signals", not more problems.
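A check along the lines of what such an audit would run might look like the sketch below (a toy extract with invented numbers and column names): compare how many signals a neighborhood generates with how many of them are later substantiated. Many signals combined with a low substantiation rate points to surveillance intensity rather than to more actual problems.

```python
# Toy audit extract - neighborhoods, counts, and column names are invented.
import pandas as pd

df = pd.DataFrame({
    "neighborhood":  ["North"] * 80 + ["South"] * 30,   # one row per recorded signal
    "substantiated": [1] * 12 + [0] * 68 + [1] * 11 + [0] * 19,
})

audit = (
    df.groupby("neighborhood")
      .agg(signals=("substantiated", "size"),
           substantiated=("substantiated", "sum"))
)
audit["substantiation_rate"] = audit["substantiated"] / audit["signals"]
print(audit)
# North produces far more signals, yet a smaller share of them holds up on
# review: the extra signals reflect where people looked, not where problems were.
```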
The impact analysis
The team investigated what had happened to families that scored high. In 60% of cases, no intervention had been needed. But the "risk family" label had certainly had consequences.
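Put as a back-of-the-envelope calculation (the headcount below is invented; the 60% figure is the one from the analysis): if most high scores led to no needed intervention, the label was wrong more often than it was right, while the harm of carrying it was real.

```python
# Back-of-the-envelope reading of the impact analysis.
flagged = 500                     # hypothetical number of high-scoring families
no_intervention_needed = 0.60     # share from the municipality's own analysis

wrongly_labeled = round(flagged * no_intervention_needed)
print(f"Precision of the 'risk family' label: {1 - no_intervention_needed:.0%}")
print(f"Families labeled without needing intervention: {wrongly_labeled} of {flagged}")
```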
Conversations with affected families
The municipality organized conversations with families that had been flagged by the system. Their experiences were sometimes traumatic: unexpected home visits, the feeling of being constantly watched, shame in front of their neighbors.
The ethical reflection
The municipal executive convened an ethics committee with external experts, people with lived experience, and human rights organizations. The question: can we even deploy this system responsibly?
The decision
After months of research, the executive made a courageous decision: the system wouldn't be repaired, but discontinued. The approach would have to be fundamentally different.
The Obstacles
What went wrong?
Challenge
Neighborhood teams wanted to keep the system: it provided guidance in complex work
Solution
Investing in better training and support for professional judgment, instead of leaning on algorithms.
Challenge
The data had been collected and shared for years; privacy had already been violated
Solution
Systematically deleting old data where there was no legal retention requirement, and being transparent about what had happened. A minimal sketch of such a retention check follows after these challenges and solutions.
Challenge
The public debate was polarized: for or against technology
Solution
Bringing nuance: the problem wasn't technology, but how it was deployed. The municipality took responsibility for the choices that had been made.
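The retention check mentioned in the data-deletion solution above could start as simply as the sketch below, under an assumed record layout; the actual retention periods follow from the legal ground per data category and the records-management schedule, not from code.

```python
# Hypothetical record layout - ids, dates, and retention periods are invented.
from datetime import date, timedelta

# (record_id, collected_on, legal_retention_years or None if no legal ground)
records = [
    ("sig-001", date(2017, 3, 1), None),
    ("sig-002", date(2021, 6, 15), 5),
]

def to_delete(records, today=None):
    today = today or date.today()
    for record_id, collected_on, retention_years in records:
        if retention_years is None:
            yield record_id                              # nothing requires keeping it
        elif collected_on + timedelta(days=365 * retention_years) < today:
            yield record_id                              # retention period has lapsed

print(list(to_delete(records)))   # records eligible for deletion
```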
We thought we were ahead with data-driven policy. We were mainly ahead in classifying our own citizens. Stopping the system wasn't a step back; it was the only step forward.
The Lessons
What can we learn from this?
Data is not neutral
Every dataset carries the biases of how, where, and by whom the data was collected. "Objective" data doesn't exist.
Risk scores create risks
Labeling people as a "risk" has consequences of its own. Surveillance is not neutral; it changes the relationship between government and citizen.
Prevention is not surveillance
Real prevention means investing in support for everyone, not monitoring who "probably" needs help.
Sometimes stopping is the best option
Not every system can be repaired. Sometimes the fundamental approach is wrong and rebuilding is better than muddling through.
Is AI being used in your organization for decisions about vulnerable groups?
Learn what extra safeguards the AI Act requires for systems that can affect fundamental rights.