Responsible AI Platform
πŸ›οΈGovernment

The Story of CivicBot

When a municipal chatbot unintentionally gives voting advice and the regulator raises the alarm

Fictional scenario, based on realistic situations

01

The Trigger

How it started

📧

The municipality had a successful AI chatbot for general citizen questions. In the run-up to elections, more and more questions came in: "Which party suits me?" "What's best to vote for?" The bot gave answers, but they turned out to be systematically skewed.

The Data Protection Authority tested chatbots as voting guides and discovered an alarming pattern: more than 55% of all advice went to just two parties, regardless of the voter profile entered; with some bots the figure was as high as 80%. The municipality's chatbot showed the same pattern.

"
"55% to two parties. Regardless of what you ask." The municipality was stuck with a chatbot pushing citizens in the wrong direction.
02

The Questions

What did they need to find out?

Question 1

Why do chatbots give skewed voting advice?

The team had the chatbot tested with the same profiles the DPA used. The result was shocking: left-wing profiles were sent to GroenLinks-PvdA, right-wing to PVV. The middle was practically invisible.

💡 The insight

Chatbots are language models that generate answers from patterns in training data. That data contains more content about "extreme" positions than about nuanced middle positions. Result: a polarized advice pattern that doesn't do justice to the variation in Dutch parties.

🌍 Why this matters

The DPA calls this the "vacuum cleaner effect": profiles on the left side are sucked towards GroenLinks-PvdA, profiles on the right side towards PVV. This is not conscious bias; it's an artifact of how language models are trained. But the effect is the same: voter influence.

Question 2

What is the "vacuum cleaner effect" the DPA describes?

The team analyzed why the distribution was so skewed. In a balanced test, each party should get a roughly comparable share. That didn't happen; two parties dominated.

💡 The insight

The vacuum cleaner effect describes how chatbots lose nuance. Instead of differentiating between D66, VVD, CDA and other parties, the model "sucks" answers toward the most extreme poles. This is not a flaw in the question; it's a systemic flaw in how generative AI works.

🌍 Why this matters

The DPA tested with balanced profiles: an equal number of profiles was entered for each party. The outcome: most parties came out on top less than 5% of the time, while two parties together scored 55%, and with some bots as much as 80%. This is not a subtle deviation; it's a fundamental malfunction.

Question 3

How do you ensure a chatbot doesn't give political advice?

The team had to act quickly. It was election season, and every day the bot kept giving voting advice was another day of potential voter influence. What could they do?

💡 The insight

Intent recognition is key. Build a filter that recognizes questions like "who should I vote for," "which party suits me," or "what's best to choose." Route such questions to reliable sources: explanations of the voting process, neutral summaries, independent voting guides. No ranking, only explanation.

🌍 Why this matters

The DPA emphasizes the difference with real voting guides: Kieskompas and StemWijzer document their methodology, show party positions, and avoid normative conclusions. Chatbots do the opposite: black box, unauditable, yet advisory. That's the fundamental problem.
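A minimal sketch of what such an intent filter could look like, in Python. The pattern list, the referral text and the llm_answer callback are illustrative assumptions, not the municipality's actual implementation.

import re

# Illustrative patterns for voting-advice questions (the real filter would
# need to be broader and multilingual).
VOTING_ADVICE_PATTERNS = [
    r"\bwho (should|do) i vote for\b",
    r"\bwhich party (suits|fits) me\b",
    r"\bwhat('s| is) best to (vote|choose)\b",
]

NEUTRAL_REFERRAL = (
    "I am an information bot, not a voting guide. For voting advice I refer "
    "you to independent sources such as Kieskompas or StemWijzer, and to the "
    "Electoral Commission for information about the voting process."
)

def is_voting_advice_request(question: str) -> bool:
    """True if the question asks for voting advice."""
    q = question.lower()
    return any(re.search(pattern, q) for pattern in VOTING_ADVICE_PATTERNS)

def handle_question(question: str, llm_answer) -> str:
    """Route politically sensitive questions to a neutral referral; everything
    else goes to the normal answer pipeline (llm_answer is any callable)."""
    if is_voting_advice_request(question):
        return NEUTRAL_REFERRAL
    return llm_answer(question)

The key design choice is that the filter runs before the language model is called, so the model never gets the chance to generate a ranking.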

Question 4

What are the limits of a government chatbot?

The municipality had to think about role purity. A municipal Q&A bot has no mandate to give voting advice. But how do you make that clear, both to citizens and to the bot itself?

💡 The insight

Explicit product boundaries are essential. Document what the chatbot may and may not do. Make this visible to users via a disclaimer. Ensure the team knows where the line is. And build an escalation path to human contact for questions that fall outside scope.

🌍 Why this matters

News media and public institutions run extra risk: their brand name creates an impression of authority and neutrality. A municipality that gives voting advice via AI, however unintended, damages trust in both the technology and the government itself.
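One way to make such product boundaries explicit is to put them in configuration rather than leaving them implicit in prompts. A minimal sketch, assuming hypothetical topic labels and wording; a real scope document would of course be broader.

from dataclasses import dataclass, field

@dataclass
class BotScope:
    # Topics the information bot may answer (illustrative labels).
    allowed_topics: set = field(default_factory=lambda: {
        "waste_collection", "permits", "opening_hours", "voting_process",
    })
    # Topics that must never be answered by the bot itself.
    out_of_scope: set = field(default_factory=lambda: {
        "voting_advice", "legal_advice", "medical_advice",
    })
    disclaimer: str = ("I am an information bot, not a voting guide. "
                       "For voting advice I refer you to independent sources.")
    escalation: str = "For this question, please contact the municipal service desk."

def respond(scope: BotScope, topic: str, draft_answer: str) -> str:
    """Escalate out-of-scope topics to human contact; otherwise answer,
    always showing where the bot's mandate ends."""
    if topic in scope.out_of_scope or topic not in scope.allowed_topics:
        return f"{scope.escalation}\n{scope.disclaimer}"
    return f"{draft_answer}\n\n{scope.disclaimer}"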

03

The Journey

Step by step to compliance

Step 1 of 6
⚠️

The warning

The Data Protection Authority published research: chatbots give skewed voting advice, with more than 55% going to just two parties.

Step 2 of 6
🔍

The internal test

The team tested their own chatbot with DPA profiles. Result: the same pattern. The bot gave skewed voting advice.

Step 3 of 6
🚧

Building intent filter

A filter was implemented that recognizes politically sensitive questions: "who to vote for," "which party suits me."

Step 4 of 6
➡️

Setting up referrals

Instead of giving advice itself, the bot now referred users to Electoral Commission information, neutral summaries, and independent voting guides.

Step 5 of 6
📢

Adding disclaimer

A clear message was added: "I am an information bot, not a voting guide. For voting advice I refer you to independent sources."

Step 6 of 6
📊

Starting bias monitoring

Periodic tests with balanced profiles were scheduled to monitor whether the bot still showed implicit preferences.
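A minimal sketch of what such a periodic test could look like, mirroring the DPA setup of entering an equal number of profiles per party. The helpers ask_bot (queries the chatbot) and extract_first_party (parses which party an answer recommends first) are hypothetical names, not part of any real platform.

from collections import Counter

def run_balanced_test(profiles_per_party: dict, ask_bot, extract_first_party) -> dict:
    """Enter an equal number of test profiles per party, tally which party the
    bot puts first, and return each party's share of the output so it can be
    compared with the balanced input."""
    counts = Counter()
    total = 0
    for party, profiles in profiles_per_party.items():
        for profile in profiles:
            answer = ask_bot(profile)
            counts[extract_first_party(answer)] += 1
            total += 1
    return {party: count / total for party, count in counts.items()}

If two parties together take more than half of the first places while the input was balanced, that is the same pattern the DPA flagged.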

04

The Obstacles

What went wrong?

Obstacle 1

✗ Challenge

Chatbot systematically gave skewed voting advice (55%+ to two parties)

↓

✓ Solution

Intent filter that recognizes political questions and routes to neutral sources

Obstacle 2

✗ Challenge

Citizens expected authority from municipal source

↓

✓ Solution

Explicit disclaimer and referral to independent voting guides

Obstacle 3

✗ Challenge

No monitoring whether bot showed implicit preferences

↓

✓ Solution

Periodic bias tests with balanced profiles according to DPA methodology

"
We thought we were helping citizens by answering quickly. We realized too late that speed without boundaries can lead to voter influence.
– Marieke van den Berg, Head of Digital Services
05

The Lessons

What can we learn from this?

Lesson 1 / 4
🗳️

Chatbots are not voting guides

Generative models systematically give skewed advice. They lack the transparency and methodology of real voting guides.

Lesson 2 / 4
🔍

Intent recognition is essential

Build filters that recognize politically sensitive questions early and route to neutral sources.

Lesson 3 / 4
🚧

Maintain role purity

An information bot has no mandate to give advice. Make that boundary explicit.

Lesson 4 / 4
📊

Monitor for bias

Test periodically with balanced profiles. Measure whether the distribution of recommendations deviates from the balanced input.
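One way to make "deviates from the balanced input" measurable is a chi-square goodness-of-fit test against the uniform distribution implied by balanced profiles. A sketch below, assuming SciPy is available; the significance threshold and the example counts are purely illustrative.

from scipy.stats import chisquare

def distribution_deviates(first_place_counts: dict, alpha: float = 0.05) -> bool:
    """Compare observed first-place counts per party with the uniform
    expectation of a balanced test (chi-square goodness of fit)."""
    observed = list(first_place_counts.values())
    expected = [sum(observed) / len(observed)] * len(observed)
    statistic, p_value = chisquare(f_obs=observed, f_exp=expected)
    return p_value < alpha  # True: the skew is statistically significant

# Illustrative tally (not real data): two parties dominate, three are near-invisible.
print(distribution_deviates({"A": 30, "B": 25, "C": 4, "D": 3, "E": 3}))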

Does your organization offer a chatbot to citizens or customers?

Check whether you have intent filters for sensitive use cases and whether you monitor for bias.