On 20 April 2026 State Secretary Aerdts (Digital Economy and Sovereignty) launched the public consultation on the Implementation Act of the AI Regulation (Uitvoeringswet AI-verordening). The draft law anchors the EU AI Act in the Dutch legal system: which authorities will supervise, how they will cooperate, and what enforcement powers they will get over companies and public bodies that deploy AI.
For organisations already working on risk classification, FRIAs or AI procurement policy, this is not an administrative footnote. It is the framework within which inspections, fines and enforcement decisions will land. Until 1 June 2026, anyone can respond via internetconsultatie.nl.
What exactly does the Implementation Act regulate?
The AI Regulation applies directly in every EU Member State. But on a number of points Brussels deliberately leaves room for Member States to make their own choices. The Implementation Act fills in that space for the Netherlands:
The AI Regulation is the European framework. The Implementation Act is the Dutch configuration file: which authority supervises what, how it plugs into existing legislation, and what procedural rules apply to enforcement.
Concretely, the draft law covers three things:
- The supervisory structure: which national authorities receive which tasks under the AI Act.
- The role of the Dutch Data Protection Authority (AP) as the fallback supervisor for areas without a clearly designated sectoral body.
- Cooperation and procedures between supervisors, to avoid organisations falling into grey zones between multiple bodies.
The core choice: cooperation between existing supervisors
Some Member States have opted for a single new, centralised AI authority. The Netherlands explicitly picks a different model: existing sectoral supervisors retain oversight within their own domain, and cooperate where AI systems touch multiple domains.
For most organisations, this means they will not face a brand-new authority but the one they already know:
Financial sector
DNB and AFM supervise AI systems already within their mandate: think credit scoring, fraud detection and insurance-chain algorithms. See also our guide on EBA mapping for financial institutions.
Healthcare
The Healthcare Inspectorate (IGJ) takes the lead on AI in medical devices and care processes, aligning with the existing MDR route.
Public sector
The AP becomes the supervisor for government AI and areas without a clear sectoral body. Read our analysis on the AI Act in the public sector.
Work & recruitment
The Netherlands Labour Authority covers AI systems in employment: think CV screening and recruitment tools.
The Dutch DPA gets the role of coordinating supervisor and fallback: wherever no sectoral body logically fits, the AP takes over. That is consistent with its current role on algorithmic decision-making and profiling under the GDPR.
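Schematically, the routing above amounts to a sectoral lookup with the AP as default. A minimal sketch; the domain labels and supervisor strings are illustrative assumptions, not wording from the draft law:

```python
# Illustrative only: simplified domain labels, not the legal text of the
# draft Implementation Act.
SECTORAL_SUPERVISORS = {
    "financial": "DNB/AFM",
    "healthcare": "IGJ",
    "employment": "Netherlands Labour Authority",
}

def supervisor_for(domain: str) -> str:
    """Return the sectoral supervisor for a domain, falling back to the AP
    where no sectoral body logically fits."""
    return SECTORAL_SUPERVISORS.get(domain, "AP (fallback)")

print(supervisor_for("healthcare"))  # IGJ
print(supervisor_for("municipal"))   # AP (fallback)
```

The point of the sketch is the default branch: under the proposed model, "no clear sectoral supervisor" is not a gap but a route to the AP.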
Why this choice makes sense, and where the risks are
Sectoral supervisors know their domain, already have inspection powers, and can assess AI in context. You evaluate an HR AI system differently from a medical AI system. One generic AI authority would miss that context.
Organisations with AI systems that touch multiple domains (for instance a platform facilitating both HR decisions and credit assessments) may face multiple supervisors at the same time. The cooperation arrangements in the Implementation Act must cover that grey zone.
This is exactly where consultation responses from companies and public bodies can make a difference. How do we avoid three simultaneous inspections of the same system? How is it made clear which supervisor is your first point of contact? And how are information requests aligned so that you do not have to hand over the same documentation three times?
Prohibited practices remain fully in force
The Implementation Act changes nothing about the substantive norms of the AI Regulation. Prohibited AI practices such as:
- Manipulative AI exploiting vulnerabilities of specific groups,
- Social scoring by public or private actors,
- Untargeted scraping of facial images for biometric databases,
- Emotion recognition in workplaces and education (with narrow exceptions),
...remain banned at EU level. The Implementation Act only decides who enforces this in the Netherlands and what procedures apply.
High-risk AI: the requirements to prepare for now
For high-risk AI systems, the substantive requirements do not change. What changes is practical enforcement. The obligations already in force, and against which Dutch supervisors will soon be testing, are:
Data quality and governance
Training, validation and test data must be relevant, representative and as free from errors as possible. Documentation on origin, processing and bias analysis becomes a hard supervisory question.
Risk management system
A documented, iterative process identifying, mitigating and re-evaluating risks across the entire lifecycle of the system. Not a document in a drawer, but a living process.
Human oversight (Article 14)
Effective human oversight while the AI system is in use. See our in-depth piece on human control and oversight.
Transparency towards users
Deployers must inform affected persons. Generative systems face specific labelling and disclosure duties from Article 50.
What should you do with this consultation?
Many organisations see public consultations as something for industry associations and lawyers. That is a missed opportunity. The Implementation Act will determine how strict, how coordinated and how predictable Dutch AI supervision becomes. A few concrete actions:
Map which of your AI systems fall under which sectoral supervisor according to the proposal. Is that consistent, or do you have systems that would fall under multiple bodies simultaneously? That is consultation-worthy feedback.
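That mapping exercise can start as a simple inventory check. A hedged sketch with invented system names and domain tags, flagging the systems that would touch more than one supervisory domain:

```python
# Hypothetical inventory: the system names and domain tags are invented
# examples, not a prescribed classification scheme.
inventory = {
    "cv-screening-tool": {"employment"},
    "credit-scoring-model": {"financial"},
    "hr-and-credit-platform": {"employment", "financial"},
}

# Systems tagged with more than one supervisory domain are the grey-zone
# cases worth raising in a consultation response.
multi_supervisor = [name for name, domains in inventory.items()
                    if len(domains) > 1]
print(multi_supervisor)  # ['hr-and-credit-platform']
```

Even a spreadsheet version of this check gives you concrete, system-level feedback to submit before 1 June 2026.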
The AP as supervisor of government AI builds on its current GDPR mandate, but the capacity and specialisation needed for this role is still under debate. Signals from municipalities and implementing agencies are relevant here.
How does Dutch supervision relate to the AI Office at EU level and supervisors in other Member States? For cross-border providers, predictability of process and cooperation between Member States is essential.
Timeline: what's at stake until 1 June 2026?
The AI Regulation becomes enforceable in phases. Key milestones around the Implementation Act:
| Date | What happens |
|---|---|
| 20 April 2026 | Public consultation launched; Parliament informed |
| 1 June 2026 | End of consultation period: final moment to respond |
| After consultation | Processing of responses, Council of State advice, submission to Parliament |
| In parallel | AI Act obligations for high-risk systems continue to become enforceable in phases; see our omnibus analysis |
Responses can be submitted online at internetconsultatie.nl/uaiv/b1.
Conclusion: from policy file to supervisory reality
The Implementation Act of the AI Regulation is the moment where the AI Act in the Netherlands shifts from a European policy file to concrete supervisory reality. Organisations that already have their inventory and risk classification in order gain an advantage: they will know immediately which sectoral supervisor to engage with.
Those who still need to start can use this consultation window as an internal deadline. Which systems, which risk class, which supervisor? The AI Regulation leaves no room for "we'll see"; the Implementation Act makes that concrete in Dutch supervisory practice.
The substantive requirements of the AI Act do not change. What changes is that they will soon have Dutch inspectors with Dutch enforcement powers behind them.