DPA advice published 11 March 2026: The Dutch Data Protection Authority has reviewed the proposed Wet gegevensvergaring openbare orde (Public Order Data Collection Act) and found that it falls short. "Without clear demarcation, police could in theory pull in the entire internet, including sensitive data on innocent citizens," DPA chair Aleid Wolfsen warned.
On 4 February 2026, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) assessed a draft law that would give police broad powers to automatically collect online data about people, even where no concrete suspicion of a criminal offence exists. The DPA published its assessment on 11 March 2026. The conclusion is direct: the draft law is insufficient and opens the door too wide for large-scale online surveillance of citizens.
For legal and compliance professionals working in the Dutch public sector, or advising organisations that interact with law enforcement, this development deserves close attention. The issues it raises go beyond national law. They sit at the intersection of the EU AI Act, the GDPR, and the fundamental question of what government-deployed predictive AI systems are permitted to do in a democratic society.
What the proposed law would allow
The proposed Wet gegevensvergaring openbare orde aims to enable police to detect potential public order disturbances in advance by automatically collecting online data about people. One of the examples given in the explanatory notes is assessing whether someone intends to join a demonstration. In addition to bulk data collection, the law would allow police to electronically follow specific individuals online.
A judge must give prior approval before police exercise these powers. That is a procedural safeguard, but its strength depends entirely on the criteria that determine when approval is granted. And it is precisely those criteria that the draft law leaves dangerously vague.
DPA chair Aleid Wolfsen stated the concern plainly: "This proposal opens the door too wide for large-scale and unfocused online monitoring of citizens. Without clear demarcation, police could in theory pull in the entire internet, including sensitive data on innocent citizens."
Four missing boundaries
The DPA identifies four areas where the draft law fails to set adequate limits.

1. No demarcation of sources. The draft does not specify which online sources may be searched. Without this, automated crawlers could follow hyperlinks across the web, building systematic profiles of individuals or groups from an effectively unlimited range of sources.

2. No specification of systems. It is unclear which technical systems police may deploy, leaving open whether advanced AI analysis tools fall within scope and under what conditions.

3. No time limit. How far back can police look? Digital data can remain accessible for years or even decades, and a profile built over ten years is fundamentally different in nature and intrusiveness from a focused current search.

4. No limit on scope. Nothing requires individual searches to be limited in scope, creating the risk that crawlers systematically collect far more than intended and generate surveillance of specific communities or movements without any concrete trigger.
Together, these four omissions create the conditions for what the DPA describes as systematic surveillance of individuals and groups who have no concrete reason to be monitored, driven not by deliberate policy choices but by the unrestricted operation of automated scraping and crawling systems.
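To make the technical stakes concrete, the sketch below shows how each of the four missing boundaries maps onto an ordinary crawler parameter. Everything here is hypothetical illustration: the class, field names, and values are invented for this article and do not describe the systems envisaged by the draft law or the DPA's advice. The point is only that each boundary the law omits corresponds to a parameter that, left unset, defaults to unlimited collection.

```python
# A minimal, purely illustrative sketch of how the four missing
# boundaries translate into crawler configuration. All names are
# invented for illustration; nothing below describes the actual
# systems envisaged by the draft law.
from __future__ import annotations

from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class CrawlPolicy:
    # 1. Source demarcation: which sites may be consulted at all.
    allowed_domains: frozenset[str] | None = None  # None = the entire web
    # 2. System specification: which tools may process the results.
    permitted_tools: frozenset[str] | None = None  # None = any tool, including AI profiling
    # 3. Time limit: how far back collected material may date from.
    earliest_date: date | None = None              # None = decades of history
    # 4. Scope limit: a hard cap on the size of any single search.
    max_pages: int | None = None                   # None = unbounded collection


def may_fetch(policy: CrawlPolicy, domain: str, published: date, pages_so_far: int) -> bool:
    """Decide whether fetching one more page stays inside the policy.

    With every field left at None, the default the draft law would
    effectively allow, this check never refuses anything.
    """
    if policy.allowed_domains is not None and domain not in policy.allowed_domains:
        return False
    if policy.earliest_date is not None and published < policy.earliest_date:
        return False
    if policy.max_pages is not None and pages_so_far >= policy.max_pages:
        return False
    return True


# An unconstrained policy, as the draft law would permit:
unbounded = CrawlPolicy()

# A demarcated policy of the kind the DPA's advice implies the law
# should require before a judge can meaningfully assess a request:
bounded = CrawlPolicy(
    allowed_domains=frozenset({"forum.example.nl"}),
    earliest_date=date(2026, 1, 1),
    max_pages=100,
)

assert may_fetch(unbounded, "anything.example", date(1999, 1, 1), 10**9)
assert not may_fetch(bounded, "anything.example", date(2026, 2, 1), 0)
```

A judge asked to authorise a search under the bounded policy has something concrete to weigh; under the unbounded one there is nothing to weigh, which is precisely the weakness of the procedural safeguard noted above.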
The AI Act dimension
From an AI governance perspective, the systems envisaged by this draft law fit squarely within the category of AI that the EU AI Act treats as high-risk. Annex III of the AI Act explicitly lists AI systems used by law enforcement for risk assessment, profiling, and evaluation of individuals as high-risk. A system that assesses whether a person intends to participate in a demonstration is an archetypal example of that category.
High-risk AI systems under the AI Act must meet substantial requirements: thorough risk assessment, technical documentation, accuracy standards, meaningful human oversight, and transparency toward affected persons. These requirements are not reserved for particularly sensitive cases; they are baseline obligations for every organisation deploying such systems.
There is, however, an even more fundamental issue. Article 5 of the AI Act prohibits certain AI practices outright. Among them is social scoring: the evaluation or classification of individuals based on their social behaviour over a period of time, where the resulting score leads to detrimental treatment in unrelated contexts or treatment that is disproportionate to the behaviour. The line between a system that models someone's online behaviour to predict whether they will attend a demonstration and a system performing prohibited social scoring is not as clear as the law's drafters may assume. That question needs a rigorous legal answer before such powers are enshrined in national legislation.
The AI Act also requires fundamental rights impact assessments where high-risk AI is deployed by public bodies. A law that grants police the power to use automated surveillance tools is precisely the kind of framework where such an assessment should form part of the legislative process itself, not an afterthought.
GDPR: purpose limitation, proportionality, and data minimisation
The GDPR provides the most direct legal benchmark, and it tests this proposal on three grounds.
Purpose limitation requires that personal data be collected for specified, explicit, and legitimate purposes. "Public order monitoring" is a broad and contested category. Without clear specification of the scope of each search and the sources that may be consulted, there is no reliable mechanism to ensure that data collected is actually limited to that purpose. In practice, data swept up by automated crawlers may well include information about political views, religious affiliation, or health, all of which attract heightened protection under the GDPR.
Proportionality requires that any interference with fundamental rights not exceed what is necessary for the intended purpose. Automatically mapping a person's digital footprint, on the basis of judicial authorisation but without any suspicion of criminal conduct, is a significant interference with the right to privacy and the freedoms of expression and assembly. The proportionality case for doing so in order to prevent public order disturbances is not self-evident, particularly when the person concerned has not yet done anything to justify attention.
Data minimisation requires collecting no more data than strictly necessary. Automated crawlers are inherently expansive: they follow links, scrape pages, and retrieve what is available. That character sits in direct tension with the principle of minimum data collection.
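The scale of that tension is easy to put in numbers. What follows is a deliberately simplified, hypothetical calculation, not a measurement of any real system: if each page links to ten others and a crawler follows every link, collection grows geometrically with link depth.

```python
# Hypothetical back-of-the-envelope arithmetic: pages reached by a
# crawl that follows every hyperlink, assuming each page links to
# `branching` others. The numbers are illustrative only.
def pages_reached(branching: int, depth: int) -> int:
    # A full link-following crawl visits 1 + b + b^2 + ... + b^depth pages.
    return sum(branching ** d for d in range(depth + 1))

for depth in (2, 4, 6):
    print(f"depth {depth}: {pages_reached(10, depth):,} pages")
# depth 2: 111 pages
# depth 4: 11,111 pages
# depth 6: 1,111,111 pages
```

Six link-hops from a single starting page is already a seven-figure collection. Under those conditions, data minimisation is something that has to be engineered in, not assumed.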
What this means for public sector organisations
For compliance professionals in the Dutch public sector, the DPA's assessment carries implications beyond this specific draft law. The message from the regulator is consistent: public bodies deploying automated systems for monitoring, profiling, or assessment of individuals are expected to demonstrate that those systems comply with strict standards of proportionality, purpose limitation, and data minimisation. That expectation applies not only to police but to any public institution using AI in decisions that affect the rights of citizens.
The DPA also notes that these kinds of powers should not be decided law by law in isolation, but should be embedded in a broader national framework. The Netherlands needs a coherent framework for law enforcement AI rather than a series of separate legislative proposals that each probe the boundaries of what is permissible without building from a shared constitutional foundation. Organisations planning to deploy AI for supervisory or enforcement tasks should treat this recommendation as a clear signal of the regulatory direction of travel.
A pattern worth noting
The advice on the Wet gegevensvergaring openbare orde is not an isolated case. In the same week, the DPA published criticism of a separate draft law on police intelligence teams operating through informants, which also fell short of required standards. A pattern is emerging: legislative proposals that expand police data powers are being submitted without sufficient attention to the fundamental rights and data protection conditions that the AI Act and GDPR impose.
The DPA's willingness to state this criticism publicly, and in direct terms, is significant. A critique the legislature ignores on paper tends to resurface as a judicial challenge in practice. The legal case for proportionate, bounded, and framework-governed law enforcement AI is not merely a compliance checklist. It is the condition under which such powers can legitimately exist in a society that takes fundamental rights seriously. Organisations and legislators alike would do well to treat the DPA's critique as an opportunity to get that foundation right before the law is passed, rather than after the first court has struck it down.