Practical lessons for organizations offering chat functionality to citizens, customers or students
Key finding: the Dutch Data Protection Authority (DPA) tested four AI chatbots as voting guides and found that more than 55% of all recommendations went to just two parties, regardless of the voter profile entered. For one chatbot, the figure exceeded 80%.
Why this warning extends beyond elections
The Dutch Data Protection Authority (DPA) warns voters not to ask AI chatbots for voting advice. This is not a precaution: the authority's own testing showed that the advice is skewed and unreliable.
In a comparison of four well-known chatbots with the Dutch voting guides Kieskompas and StemWijzer, the chatbots recommended the same two parties remarkably often, regardless of the voter profile entered. Combined, GroenLinks-PvdA and PVV took first place in more than half of all cases.
This happens even when the input actually matches other parties. Traditional voting guides do not show this pattern and work more transparently and verifiably.
What the DPA actually investigated
The DPA built fictitious voter profiles based on statements from Kieskompas and StemWijzer, and then asked four major chatbots for a top three of party preferences. In a balanced experiment, each party should receive roughly the same share of first places, since an equal number of profiles was entered for each party.
The vacuum cleaner effect
The DPA describes a so-called vacuum cleaner effect: chatbots pull profiles on the left-progressive side toward GroenLinks-PvdA and profiles on the right-conservative side toward PVV. The political center remains underrepresented in the recommendations.
The numbers don't lie
In reality, most parties came in first less than five percent of the time, while two parties together accounted for approximately fifty-five percent of first places. For some parties, the match virtually disappeared, even when the profile substantively matched that party.
The result is a compressed and polarized landscape that does not do justice to the variety of Dutch parties. This is visually evident in the DPA report.
Why chatbots are not voting guides
Chatbots are language models that generate answers from patterns in training data and public web material, including outdated or incorrect information. How they arrive at an answer is barely verifiable for the user and cannot be audited by outsiders.
Voting guides are organized in exactly the opposite way: they document their methodology and data choices, show party positions and avoid normative conclusions.
The core of the problem: chatbots seem like smart helpers, but as voting guides they systematically miss the mark. This directly affects the integrity of free and fair elections.
It is therefore logical that the DPA advises against chatbots as election guides and emphasizes that their advice cannot currently be considered neutral information. The message to voters is simple: do not use chatbots for voting advice.
What this means for organizations offering chat functionality
This warning is also about product design. Many organizations provide chat functions to citizens, customers or students. The lesson is not to ban chatbots, but to build in boundaries and safeguards when the interaction can shift to political advice or influence.
Context determines the risk
Around elections, users naturally ask for help with party choice, programs and strategic voting. A generic chatbot can pick up that question and still exert unwanted influence.
The DPA findings show that even seemingly neutral prompts result in advice that does not match the entered preferences. A system that accepts this type of question at all therefore risks steering users without its behavior being explainable.
Guard role boundaries
A customer service bot, an educational assistant or a municipal Q&A has no mandate to give voting advice. Those who do not strictly guard that line quickly face erosion of trust and reputational damage.
News media and public institutions also run extra risk if their brand name creates an impression of authority and neutrality. National media and other outlets that reported on the DPA findings illustrate how quickly this topic becomes public.
How to design a chatbot that doesn't turn into voting advice
The DPA findings offer five concrete starting points for organizations that want to deploy chat functionality responsibly.
1. Recognize politically sensitive intents early
Build an intent filter that recognizes questions such as "who should I vote for", "which party suits me" or "what is best to choose", and route them to reliable, non-steering information channels (a minimal sketch follows below).
Route to reliable alternatives
Instead of advising yourself, refer to:
- Explanation of the voting process
- Neutral summaries of party programs
- Independent voting guides that publish their methodology
Document that the bot does this and record why this choice was made.
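As an illustration, a minimal sketch of such a filter in Python. The pattern list, referral text and `generate_answer` callback are placeholder assumptions, not a production-ready classifier, which would need broader coverage (synonyms, misspellings, other languages) and regular re-evaluation.

```python
import re

# Hypothetical patterns for politically sensitive intents (placeholders for illustration).
POLITICAL_ADVICE_PATTERNS = [
    r"who should i vote for",
    r"which party (suits|fits) me",
    r"what is best to choose",
]

# Non-steering fallback: process information and independent, documented voting guides.
NEUTRAL_REFERRAL = (
    "I cannot give voting advice. For an explanation of the voting process, see the "
    "Electoral Council; for party comparisons, use an independent voting guide such as "
    "Kieskompas or StemWijzer, which publish their methodology."
)

def is_political_advice(question: str) -> bool:
    """Return True when the question asks for party or voting advice."""
    q = question.lower()
    return any(re.search(pattern, q) for pattern in POLITICAL_ADVICE_PATTERNS)

def handle_question(question: str, generate_answer) -> str:
    """Route political advice questions to the neutral referral; pass the rest through."""
    if is_political_advice(question):
        return NEUTRAL_REFERRAL
    return generate_answer(question)
```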
2. Disable advice mode
Don't let the bot produce a top three or ranking for political questions. Instead, provide process information, explain concepts and show sources.
The DPA explicitly compares chatbots with Kieskompas and StemWijzer to show what transparency and verifiability mean. Use that comparison as a design standard: no ranking, but explanation and links to methodologically sound tools.
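An output-side guard can complement the intent filter and catch rankings that slip through anyway. A minimal sketch, assuming a configurable list of party names and a simple heuristic for ranking language; both are illustrative placeholders:

```python
import re

# Hypothetical party list; in practice this would come from configuration.
PARTY_NAMES = ["GroenLinks-PvdA", "PVV", "VVD", "D66", "CDA"]

# Simple heuristic for ranking language ("top 3", "first place", numbered lists).
RANKING_MARKERS = re.compile(r"top\s*(3|three)|first place|\b[123]\.", re.IGNORECASE)

NO_ADVICE_MESSAGE = (
    "This assistant does not rank political parties. Please consult an independent "
    "voting guide that publishes its methodology."
)

def looks_like_voting_advice(answer: str) -> bool:
    """Treat an answer that names several parties alongside ranking language as advice."""
    mentioned = [p for p in PARTY_NAMES if p.lower() in answer.lower()]
    return len(mentioned) >= 2 and bool(RANKING_MARKERS.search(answer))

def postprocess(answer: str) -> str:
    """Replace answers that look like voting advice with a neutral message."""
    return NO_ADVICE_MESSAGE if looks_like_voting_advice(answer) else answer
```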
3. Make your boundaries visible
Display a clear message that the chatbot may not and cannot give voting advice. Refer to an editorial or governance document that explains how the organization handles political content.
This increases predictability and prevents team members from deviating from policy ad hoc. The DPA cites the lack of explainability as a reason why chatbots are currently not suitable as voting guides.
4. Conduct an explicit bias check
Periodically test whether the bot consistently tends toward the same parties for diverse political profiles. Use a fixed set of scenarios based on public voting guide statements for this.
Practical tip: measure whether the distribution of answers deviates from the input profiles, and record those measurements. The DPA method shows that such systematic testing is feasible and reveals meaningful patterns.
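A minimal sketch of such a periodic test, assuming a hypothetical `ask_chatbot` function that returns the party the chatbot puts in first place for a given profile; the profiles and threshold are placeholders:

```python
from collections import Counter

def run_bias_test(profiles, ask_chatbot, max_share=0.25):
    """Feed a balanced set of profiles (an equal number per party) to the chatbot,
    record which party it puts in first place, and flag parties whose share of
    first places exceeds max_share."""
    first_places = Counter(ask_chatbot(profile) for profile in profiles)
    total = sum(first_places.values())
    shares = {party: count / total for party, count in first_places.items()}
    flagged = {party: share for party, share in shares.items() if share > max_share}
    return shares, flagged

# Example usage with placeholder profiles built from public voting guide statements:
# shares, flagged = run_bias_test(balanced_profiles, ask_chatbot)
# Store 'shares' with a timestamp so drift over time stays visible.
```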
5. Provide an escalation path to human contact
Allow users with political questions to easily proceed to people or sources that can provide explanation without steering advice. This is not only service-oriented, it also limits the risk of a generative system unintentionally giving direction.
Example situations to improve immediately
Municipal information page
A municipality offers a general AI assistant for questions about passports, events and waste collection. In the weeks before the elections, questions come in about party positions and strategic voting.
Solution: The assistant recognizes political advice questions and provides a brief explanation of the voting process, refers to the official information page of the Electoral Council and to independent voting guides without their own advice function.
A disclaimer explains why no party advice is given. Measurements in the dashboard show that the bot does not produce rankings for political prompts. This aligns with the approach taken in the DPA investigation.
Educational platform
An edtech provider delivers a learning assistant for secondary schools. The bot receives questions like "which party should I choose for climate" or "what suits my profile".
Solution: The provider activates a political advice filter, shows neutral explanations of concepts, connects the topic to societal issues and curriculum material, and blocks any top-three advice.
The provider logs those choices and tests weekly for skew toward specific parties so that deviations are quickly detected. The approach aligns with the DPA finding that chatbots, by design, do not function as voting guides.
Media company with news bot
A broadcaster has a chat function that summarizes articles. During campaign time, the bot receives many requests for recommendations.
Solution: The team chooses to only provide context and source references, no party advice. Transparency about sources is prominently featured in the answer.
Internal measurements track whether the bot still suggests implicit preferences. The policy and measurement results are recorded in an editorial standard. The journalistic coverage of this topic shows that the public and politicians are paying close attention.
Relationship with the AI Act
The DPA points out that AI systems that provide voting advice must meet strict requirements. Under the AI Act (the EU AI Regulation), this means a regime in which accuracy, consistency, risk management and documentation must be demonstrably in place.
| Aspect | Chatbot reality | AI Act requirement |
|---|---|---|
| Transparency | Black box for users | Documented methodology required |
| Accuracy | Systematic bias toward 2 parties | Consistent, verifiable output |
| Verifiability | Cannot be audited | External verification possible |
| Risk management | No documentation | Demonstrable risk management |
The message is practical: if you cannot and do not want to offer a full-fledged, transparent voting guide with a verifiable methodology, then disable that function and direct to reliable, verifiable information. That is better for the user and more defensible toward supervisors.
Why this report extends beyond elections
The core question is how we position generative systems in domains where people make decisions with impact. Voting is a clear example, but also think of:
- Healthcare choices: which treatment suits my symptoms?
- Financial products: which mortgage or insurance is best?
- Legal routes: should I take legal action?
Where a chatbot is intended as an informational guide, a skewed answer pattern can still have a normative effect. The DPA investigation shows that such a pattern emerges not only with extreme prompts, but also with proper, content-based profiles.
Important lesson: it is wise not to rely on implicit advice around sensitive decisions, regardless of how neutral the interface seems.
What you can do in the next fourteen days
Start with a brief risk scan of your chatbot. This approach can be carried out in two weeks without major interventions.
Analyze
See where political content comes in, what answers the bot generates and whether there is implicit ranking.
Implement filter
Activate a political advice filter and publish a brief explanation to users about why no voting advice is given.
Test systematically
Plan a simple bias test with profiles based on public statements. Measure whether the distribution deviates from input profiles.
Document
Record findings and improvements in your governance documentation. Make it reproducible.
The power of the DPA investigation is that it shows where the dangers lie and how you can concretely address them. This set of actions makes a visible difference in reliability and explainability.
Five concrete recommendations for organizations
1. Take the DPA findings seriously
The systematic bias that the DPA demonstrates is not a marginal phenomenon. It is a fundamental design problem of generative chatbots. Treat political advice as a prohibited function, unless you can demonstrate that you meet all requirements for transparency and verifiability.
2. Build in intent recognition
Invest in a robust intent filter that recognizes politically sensitive questions early and routes them. Test this filter with various formulations and regularly evaluate whether new patterns emerge.
3. Set clear product boundaries
Explicitly document what the chatbot may and may not do. Make this visible to users and ensure the team knows where the line is. Prevent ad-hoc decisions in individual cases.
4. Measure and monitor systematically
Implement logging and metrics that show whether the bot displays implicit preferences. Use the DPA method as a blueprint: test with balanced profiles and measure the distribution of outcomes.
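One way to condense this into a single monitoring metric is to compare the observed distribution of first-place recommendations with the balanced distribution the input profiles imply, for instance using total variation distance. A minimal sketch under those assumptions; the 0.2 threshold and the `raise_alert` hook are illustrative placeholders:

```python
def total_variation_distance(observed: dict, expected: dict) -> float:
    """Half the sum of absolute differences between two share distributions over parties.
    0.0 means the observed first-place shares match the balanced expectation;
    values approaching 1.0 indicate strong concentration on a few parties."""
    parties = set(observed) | set(expected)
    return 0.5 * sum(abs(observed.get(p, 0.0) - expected.get(p, 0.0)) for p in parties)

# Example usage (hypothetical numbers mirroring the DPA pattern):
# expected = {party: 1 / len(parties) for party in parties}  # balanced input profiles
# observed = shares                                          # from the periodic bias test
# if total_variation_distance(observed, expected) > 0.2:
#     raise_alert("Chatbot output is skewed relative to the balanced input profiles")
```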
5. Create escalation paths
Ensure that users with complex or sensitive questions can be referred to human expertise or reliable, independent sources. This is not only responsible, it also protects your reputation.
Conclusion: a clear line in the sand
The DPA has drawn a clear line in the sand with this report. The message to the market: the time of uncritical and opaque deployment of chatbots in politically sensitive contexts is over.
For organizations, this means that chatbots can no longer be considered neutral information tools in domains where decisions with societal impact are made. They are systems with inherent limitations that demand transparency, verifiability and risk management.
The question is not whether you take measures, but when. The time of experimenting without consequences is definitively over.
Sources and further reading
- Dutch Data Protection Authority: DPA warns: chatbots give skewed voting advice (October 21, 2025) with the underlying RAN special. Contains the percentages, methodology and interpretation.
This article is an initiative of geletterdheid.ai. We help organizations navigate the complexity of the EU AI Act and build responsible AI practices. Have questions about AI governance and chatbot implementation? Contact us.