
Dutch DPA AI Impact Barometer Turns Red: What RAN 6 Means for Organizations

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has published the sixth edition of its AI & Algorithms Report for the Netherlands (RAN), and the picture is alarming. Four of the nine indicators in the AI Impact Barometer now stand at red, double the number from the previous edition. The regulator's message is clear: risks are growing faster than the measures to contain them.

TL;DR
  • 4 of 9 indicators in the AI Impact Barometer now red, double the previous edition
  • AI in recruitment is growing fast, but transparency falls short
  • Organizations are trying to dodge the AI Act by classifying AI systems as "regular algorithms"
  • August 2026 deadline for high-risk AI in recruitment is approaching fast
  • The new Dutch cabinet must accelerate the national implementation law and oversight structure

What is the RAN and why does it matter?

The RAN is the twice-yearly report through which the AP, as the coordinating supervisor for algorithms and AI, takes stock of the state of play. It is not a dry statistical report. The AP analyzes the key risks, translates them into nine indicators, and provides a snapshot of how the Netherlands is doing on responsible AI use.

The signal is now clearer than ever. Four of the nine indicators are red, the highest alert level. The AP is specifically concerned about the lack of progress in establishing oversight structures, the slow development of standards, inadequate algorithm registration by government agencies, and insufficient visibility into incidents.

Three findings that demand attention

1. AI in recruitment: growing use, growing risks

More and more employers are using AI in recruitment and selection. The scale and risks are increasing rapidly. The AP finds that transparency and explainability in these systems often fall short. Particularly with online and game-based assessments, it is unclear how they predict candidate suitability, how decisions are reached, and how candidates can challenge outcomes.

The result: some candidates barely get a chance from the start, without knowing why. That is not just undesirable; it will soon be unlawful. Under the AI Act, AI systems for recruitment and selection are classified as high-risk. From August 2026, they must meet strict requirements for accuracy, non-discrimination, and explainability.

2. Transparency and explainability are falling short

The transparency problem extends beyond recruitment. The AP observes more broadly that organizations provide insufficient insight into how their AI systems work and reach decisions. This is problematic because transparency is one of the pillars of the AI Act: without explainability, meaningful human oversight is impossible, and without human oversight, fundamental rights cannot be effectively protected.

This directly relates to the requirements the AI Act places on deployers: understanding what the system does, maintaining adequate oversight, and intervening when necessary. If an organization cannot explain how a decision is made, it fails to meet those requirements by definition.

3. Preparation for the AI Act is lagging behind

The third finding may be the most concerning. Despite the AI Act already being in force and deadlines approaching, the AP observes that preparation is structurally lagging. The four red indicators all point to the same gap: oversight structures, standards development, algorithm registration by government agencies, and visibility into incidents are behind schedule.

AP Chair Aleid Wolfsen is unequivocal in his message: "Five years after the benefits scandal, the lessons are clear, but the follow-up lags behind. As the pressure to embrace AI increases, we must protect fundamental rights. Anyone who wants to prevent a new scandal must act now."

The classification trick: calling AI a "regular algorithm"

One of the most concerning trends the AP identifies is that organizations are trying to evade the AI Act by classifying their systems as "regular algorithms." The example the AP cites is telling: OxRec, a tool used by probation organizations to predict recidivism. The system was registered as an algorithm in the Dutch algorithm registry, when in reality it is an AI system.

That distinction is not trivial. If a system is classified as an AI system under the AI Act, it falls under strict rules for transparency, risk management, and human oversight. If labeled as a "regular algorithm," the organization sidesteps those obligations. The AP notes that this is not an isolated incident: every week, the regulator sees new registrations of systems as algorithms when they are in fact AI systems.

Note: Commercial organizations are also trying to dodge their responsibilities. The AP explicitly warns that this comes at the expense of customers and users. Deliberately misclassifying an AI system is not a clever compliance strategy; it is a risk that will come back to haunt you.
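To make the distinction concrete: under Article 3(1) of the AI Act, an AI system is, roughly, a machine-based system that operates with some level of autonomy and infers from its input how to generate outputs such as predictions, recommendations, or decisions. The sketch below expresses that reading as a simplified Python checklist; the SystemProfile fields and the recidivism example are illustrative assumptions, not a legal test.

  from dataclasses import dataclass

  # Illustrative shorthand for the AI Act's Article 3(1) definition of an
  # "AI system". A simplified sketch, not a legal test: the statutory
  # definition is broader and context-dependent.

  @dataclass
  class SystemProfile:
      name: str
      machine_based: bool            # implemented in software/hardware
      operates_with_autonomy: bool   # some independence from human control
      infers_outputs: bool           # derives predictions, content,
                                     # recommendations or decisions from input

  def looks_like_ai_system(p: SystemProfile) -> bool:
      # Adaptiveness after deployment is optional in Article 3(1)
      # ("may exhibit"), so it is deliberately not checked here.
      return p.machine_based and p.operates_with_autonomy and p.infers_outputs

  # A statistical model that infers a recidivism score from case data
  # fits the definition, whatever the registry entry calls it.
  risk_tool = SystemProfile(
      name="recidivism risk model (hypothetical)",
      machine_based=True,
      operates_with_autonomy=True,
      infers_outputs=True,
  )
  assert looks_like_ai_system(risk_tool)

The point is the direction of the test: if a system infers its outputs, relabeling it in a registry changes nothing about the obligations that attach to it.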

The broader risk landscape

Beyond the three core findings, the AP paints a broader picture of growing risks. The regulator points to the unchecked proliferation of deepfakes, AI-driven fraud, psychological harm caused by chatbots, and AI security measures that increasingly lag behind technological developments.

The AP references recent incidents: the proliferation of AI-powered voting guides and the problems with Grok, which could generate nude images of real people that are indistinguishable from genuine photos. These are no longer theoretical risks. They already affect fundamental rights and cybersecurity, and the protections against them are inadequate.

This directly connects to the AP's earlier warning about AI agents, in which the regulator flagged security risks of autonomous AI systems.

What must the new Dutch cabinet do?

The AP is unusually direct in its message to policymakers. According to the regulator, the new cabinet must urgently address four issues:

  1. Finalize the Dutch implementation law. The AI Act applies directly as EU law, but national legislation is still needed to arrange enforcement. That law does not yet exist.
  2. Designate supervisory authorities. It is still not fully clear which bodies will exercise which oversight functions.
  3. Structure financing for oversight. Oversight without a budget is not oversight.
  4. Push for clarity at the European level in the discussions around postponing and simplifying the rules, so organizations know where they stand.

What does this mean for organizations?

The August 2026 deadline for high-risk AI systems in recruitment is concrete and approaching fast. Organizations using AI in their recruitment processes need to start working on compliance now, if they have not already. That means: inventorying which systems you use, assessing whether they qualify as high-risk, and working on transparency, explainability, and human oversight.
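To make that first step concrete, the sketch below shows what a minimal inventory with a rough high-risk filter could look like in Python. The purpose categories, field names, and the AssessBot entry are illustrative assumptions; Annex III of the AI Act (point 4 covers employment and recruitment) remains the authoritative list.

  from dataclasses import dataclass

  # Illustrative purpose categories; Annex III, point 4 (employment and
  # worker management) is the authoritative scope for recruitment AI.
  RECRUITMENT_PURPOSES = {
      "candidate screening",
      "cv ranking",
      "game-based assessment",
      "interview scoring",
  }

  @dataclass
  class InventoryEntry:
      name: str
      vendor: str
      purpose: str          # what the system is actually used for
      is_ai_system: bool    # classified honestly, per the sketch above
      documented: bool      # explainability documentation in place?

  def flag_high_risk(entry: InventoryEntry) -> bool:
      # Rough first filter only; the legal assessment is per system.
      return entry.is_ai_system and entry.purpose in RECRUITMENT_PURPOSES

  inventory = [
      InventoryEntry("AssessBot", "ExampleVendor", "game-based assessment",
                     is_ai_system=True, documented=False),
  ]
  for entry in inventory:
      if flag_high_risk(entry) and not entry.documented:
          print(f"{entry.name}: high-risk, close documentation gap before August 2026")

Even a spreadsheet-level version of this inventory beats nothing: the goal is to know what you run, what it is, and where the gaps are.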

But it goes beyond recruitment alone. The RAN makes clear that the time for waiting is over. An AI Impact Barometer that turns red on four indicators is a signal organizations must take seriously.

Practical step: Use the AI Act compliance tool to check whether your AI systems qualify as high-risk. Start with an inventory and classify your systems honestly. The AP is watching.
