Responsible AI Platform

AI Supervision Congress 2025: Complete Overview


A complete recap of the first Dutch AI Supervision Congress

📅 December 10, 2025 – Around 750 supervisors, businesses, and experts gathered at the first AI Supervision Congress, organized by the Dutch Digital Infrastructure Agency (RDI) and the Dutch Data Protection Authority (AP).

The First AI Supervision Congress in the Netherlands

On December 10, 2025, a historic moment occurred in the Dutch AI landscape: the very first AI Supervision Congress. RDI and AP, jointly responsible for coordinating AI supervision in the Netherlands, brought together nearly 750 participants to discuss how the AI Regulation can contribute to safe and innovative use of AI.

The core message

"Together at the helm" was the central theme. AI supervision is not just for regulators—it concerns all of society. AI raises issues that affect us all as citizens: our fundamental rights, digital safety, and health.

Opening Session: Supervision and Innovation Hand in Hand

Inspector General Angeline van Dijk (RDI) opened the congress in conversation with AP board member Katja Mur. The key message was clear:

"See supervision as your partner, who wants to walk hand in hand with the innovative partner from the business community."

Angeline van Dijk, Inspector General RDI

Prince Constantijn of the Netherlands, actively involved in the tech sector, asked the question on many attendees' minds: "How do we prevent AI supervision from 'over-regulating' and stifling innovation?"

The answer: risk-based supervision with an eye for business viability. RDI deliberately avoids the "ticket book and checklist" approach, instead engaging with the market through regulatory sandboxes.


The Keynote Speakers

The congress featured four influential keynotes from national and international speakers:

  1. Sven Stevenson (Dutch DPA) – The AI Act in a Nutshell. The Director of Algorithm Coordination at the AP provided a clear overview of the core of the AI Regulation.
  2. Michiel Boots (Ministry of Economic Affairs) – Government Policy and AI. The Director-General for Economy and Digitalization outlined the government's vision on AI innovation and regulation.
  3. UNESCO – Supervising AI by Competent Authorities. A practical toolkit for European supervisors, developed in collaboration with RDI.
  4. Focco Vijselaar (VNO-NCW) – The Business Perspective. The Director of VNO-NCW spoke about the role of entrepreneurs in responsible AI.

📥 Download all keynote presentations in our Knowledge Base under AI Supervision Congress.


The 11 Sessions: From Standards to Sandbox

After the plenary opening, participants spread across eleven interactive sessions. Here's a summary of key insights from each session.


Session 1: Standards and Conformity Assessments

Speakers: Isabel Barberá (AP), Dr. Theresa Marschall (RDI), Willy Tadema (RDI)

The session covered the "New Legislative Framework" – the European system of harmonized standards for product regulation. RDI actively contributes to AI standard development, as it did earlier with GSM, 5G, and 6G.

How does it work?

When an AI system complies with harmonized standards published in the EU's Official Journal, a presumption of conformity applies. This simplifies the conformity assessment.

CEN/CENELEC standards in development:

| Standard | Subject | AI Act Article |
| --- | --- | --- |
| prEN 18228 | AI Risk Management | Art. 9 |
| prEN 18229-1 | Logging, transparency, human oversight | Art. 12, 13, 14 |
| prEN 18229-2 | Accuracy and robustness | Art. 15 |
| prEN 18282 | Cybersecurity for AI | Art. 15 |
| prEN 18283 | Bias management | Art. 10 |
| prEN 18286 | Quality Management System | Art. 17 |

Notified Bodies: For biometric systems and critical infrastructure, assessment by an external notified body (accredited by the Dutch Accreditation Council, RvA) is mandatory, including periodic audits.


Session 2: High-Risk AI in the Financial Sector

Speakers: Hans Brits (DNB), Mirèl ter Braak (AFM), Damian Borstel (AFM)

The financial sector already has strict regulations via DNB and AFM. With the AI Regulation, another layer is added. This session focused on the overlap between existing financial legislation and the new AI requirements.

High-risk in the financial sector (Annex III, point 5)

Two specific AI applications are explicitly designated as high-risk:

  • 5b: AI for creditworthiness assessment or credit scoring
  • 5c: AI for risk assessment and pricing in life and health insurance

Why high-risk? According to recital 58 of the AI Regulation:

  • Credit scores determine access to financial resources and essential services (housing, electricity, telecom)
  • Risk assessment in insurance can significantly impact livelihoods
  • Risks of exclusion and discrimination are significant

Overlap with existing legislation: The AI Regulation explicitly refers to Capital Requirements Directive, Consumer Credit Directive, Mortgage Credit Directive, Solvency II, and Insurance Distribution Directive (IDD). Supervision can be integrated into existing mechanisms.

Conformity assessment: For financial AI systems, an internal control procedure applies (Article 43(2)) – no notified body required.


Session 3: Protection of Fundamental Rights

Speakers: Justin Hoegen Dijkhof, Naomi Appelman, Isabelle Schipper (AP)

"Fundamental rights protection is one of the main objectives of the AI Regulation; all involved actors play a role in protecting fundamental rights with AI."

The session emphasized that fundamental rights protection is not a checklist, but context-dependent. Three key points:

  1. All fundamental rights are potentially involved
  2. There are different types of obligations
  3. The concrete impact must be assessed

Fundamental Rights Impact Assessment (FRIA) - Article 27

The FRIA is mandatory for public bodies and public service providers using high-risk AI systems. The assessment must be registered with the market surveillance authority.

Instruments for fundamental rights assessment:

  1. Algorithm Framework – The Algorithm Framework on Overheid.nl provides guidelines for responsible algorithm use.
  2. IAMA – The Human Rights & Algorithms Impact Assessment will receive an AI Act update in 2026.
  3. AIIA – Sectoral AI Impact Assessments for specific domains.
  4. Self-Assessment – AI Literacy Self-Assessment for organizations.

Read more: See our DPIA vs FRIA comparison for a practical explanation of the differences.


Session 4: AI Platforms and Consumer Interests

Speakers: Kari Spijker (AI/Algorithm Governance Specialist, ACM), Stefan Haas (Strategic Advisor Digital Economy, ACM), Menno Israel (Director Taskforce Data and Algorithms, ACM)

The ACM mission: making markets work well for all people and businesses. This session covered how ACM deals with AI in platform markets, consumer protection, and competition.

ACM Digital Economy - priorities

  • Acting against abuse, deception, and manipulation in online sales and gaming
  • Supervising disinformation and hate speech on social media (together with EU supervisors)
  • Addressing addictive designs, especially for minors
  • Stimulating a safe and reliable data economy

Relevant legislation supervised by ACM:

| Legislation | Focus |
| --- | --- |
| DMA | Digital Markets Act – platform regulation |
| DSA | Digital Services Act – algorithmic transparency |
| DGA | Data Governance Act – data economy |
| DA | Data Act – data sharing and access |

Taskforce Data and Algorithms (TDA): ACM has a team of 40 employees (data scientists, lawyers, economists) supervising markets where data and algorithms play a role.


Session 5: AI and Cybersecurity

Speakers: Max Landkroon (RDI), Bob van der Meulen (RDI)

RDI supervises multiple cybersecurity legislative frameworks. This session addressed the question: "How does AI supervision relate to cybersecurity supervision?" The answer: they are inseparably linked.

Statement from the session

">95% of all AI products/services don't fall under the AI Act, but do fall under 'AI supervision' through open norms in other legislation like GDPR, NIS-2, and GPSR."

Cybersecurity requirements in AI-related legislation:

  1. AI Act (Art. 15) – High-risk systems must provide "an appropriate level of cybersecurity."
  2. NIS-2 – Appropriate technical and organizational measures for network and information systems.
  3. GDPR (Art. 32) – Appropriate security of personal data during processing.
  4. CRA – Cyber Resilience Act: cybersecurity requirements for connected products.

Long list of legislation: RDI presented an overview of 17+ laws with cybersecurity requirements: DORA, Data Act, eIDAS 2.0, NIS-2, AI Act, CRA, Cyber Solidarity Act, RED, GDPR, and more. AI supervision is also cybersecurity supervision!


Session 6: Building AI Literacy

Presented by: Directorate for Algorithm Coordination (AP)

This is now mandatory! Since February 2, 2025, all organizations using or providing AI must ensure adequate AI literacy among their staff (Article 4 AI Regulation).

Article 4 AI Regulation - literal text

"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in."

What is sufficient is not prescribed – skills and knowledge differ per sector and situation. The session covered European developments including the Living Repository from the EU AI Office and Q&A from the Commission.

The four pillars of AI literacy:

  1. Technical understanding – What is AI? Basic knowledge of how AI systems work, their capabilities and limitations.
  2. Risk awareness – What can go wrong? Understanding potential risks, bias, and unintended consequences.
  3. Ethical conduct – What is responsible? Ethical considerations when using AI in practice.
  4. Regulation – What are the rules? Knowledge of relevant legislation including the AI Act.

Test yourself: Take our AI Literacy Test to measure your own knowledge level.


Session 7: Internet of Agents

Speakers: Timon Daniels (AI Safety & Security Lab, RDI), Lara van Zuilen (AISSL, RDI)

The AI Safety & Security Lab (AISSL) at RDI focuses on complex and new AI risks. In this session, the concept "Internet of Agents" was introduced as a new threat landscape.

What is the Internet of Agents?

The evolution from single AI agents to multi-agent systems that:

  • Collaborate between systems (over the internet)
  • Dynamically organize without predetermined structure
  • Autonomously make decisions without human intervention

Examples of agents in practice:

  • Email agents
  • Coding assistants
  • Recruiters
  • Smart home systems
  • Customer service bots

New threat landscape: With ~360 million businesses and ~2.5 billion households potentially deploying agents, new risks emerge:

  • Misinformation at scale
  • Overload of systems
  • Cybersecurity - lack of standards and protocols for agent-to-agent communication

Session 8: Recruitment as a Practical Example

Presented by: Directorate for Algorithm Coordination (AP)

AI in HR is one of the most discussed high-risk applications under the AI Act (Annex III). This session covered the shared responsibility between providers and deployers in the AI value chain.

Obligations for providers of high-risk recruitment and selection (R&S) AI

  • Meet essential requirements: risk management, data governance, documentation
  • Conformity assessment (standard prEN 18286 now open for comment)
  • Registration of AI system
  • Affixing CE marking

  1. 🚫 Prohibited – AI that detects emotions during job interviews falls under prohibited practices since February 2, 2025.
  2. ⚠️ High-risk – Automated CV screening and ranking systems for candidates require conformity assessment.

What is allowed, what isn't?

| AI application | Classification | What to do? |
| --- | --- | --- |
| Emotion recognition in interviews | Prohibited | Stop immediately |
| CV screening & matching | High-risk | Conformity assessment required |
| Employee turnover prediction | High-risk | Conformity assessment required |
| Schedule optimization | Low risk | No specific requirements |
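As an illustration, the classification above could drive a simple triage of an organization's HR tooling inventory. A minimal Python sketch; the `HR_AI_CLASSIFICATION` mapping and the `triage` helper are hypothetical names, not an official classification API:

```python
# Illustrative sketch: the session's HR classification as a lookup table,
# so an HR team can triage its AI tool inventory. Names are hypothetical.

HR_AI_CLASSIFICATION = {
    "emotion recognition in interviews": ("prohibited", "stop immediately"),
    "cv screening & matching": ("high-risk", "conformity assessment required"),
    "employee turnover prediction": ("high-risk", "conformity assessment required"),
    "schedule optimization": ("low risk", "no specific requirements"),
}

def triage(application: str) -> tuple[str, str]:
    """Return (classification, required action); unknown uses fall back to manual review."""
    return HR_AI_CLASSIFICATION.get(
        application.strip().lower(),
        ("unknown", "assess case by case against Annex III"),
    )

print(triage("CV screening & matching"))
# → ('high-risk', 'conformity assessment required')
```

The fallback entry matters: as the congress stressed throughout, anything not on a known list still needs a case-by-case assessment rather than a default "no requirements."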

Read more: See our HR & Employment sector page.


Session 9: What is an AI System?

Format: Interactive workshop

This session went deeper into the definition of an AI system (Article 3(1)). Participants worked together on a case about a recidivism prediction system to understand the definition elements.

Underestimation in the market: The session emphasized that many organizations underestimate the scope of the AI Regulation. Assessment happens on a case-by-case basis, with attention to the full lifecycle of AI systems.

The 8 elements of the AI system definition

An AI system is a machine-based system that operates with varying levels of autonomy, may exhibit adaptiveness after deployment, and, for explicit or implicit objectives, infers from the input it receives how to generate outputs (such as predictions, content, recommendations, or decisions) that can influence physical or virtual environments.

Key questions to determine if something is an AI system:

  1. Autonomy? Does the system work independently, or is it entirely controlled by fixed rules?
  2. Adaptation? Does the system learn or adapt after deployment?
  3. Inference? Does the system derive conclusions from data itself?
  4. Output? Does it generate predictions, decisions, or content?

Examples from the session:

| System | AI system? | Why? |
| --- | --- | --- |
| Spam filter with ML | Yes | Learns patterns, makes predictions |
| Excel spreadsheet | No | Formulas, no inference |
| Script-based chatbot | No | Fixed rules, no learning |
| ChatGPT integration | Yes | LLM, generates content based on inference |
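The session's four screening questions can be sketched as a rough triage checklist. A minimal Python sketch, under the assumption that inference is the decisive element while adaptiveness is optional ("can exhibit" in the definition); the function name and boolean encoding are illustrative, not an official assessment tool:

```python
# Rough sketch of the session's four key questions as a screening checklist.
# The example systems mirror the session; all names are illustrative only.

def looks_like_ai_system(autonomy: bool, adaptiveness: bool,
                         inference: bool, generates_output: bool) -> bool:
    """Screen a system against the definition elements.

    Inference is treated as decisive; adaptiveness is optional under the
    definition, so it is recorded but not required for a positive screen.
    """
    return autonomy and inference and generates_output

examples = {
    # system: (autonomy, adaptiveness, inference, output)
    "Spam filter with ML":  (True,  True,  True,  True),
    "Excel spreadsheet":    (False, False, False, True),
    "Script-based chatbot": (False, False, False, True),
    "ChatGPT integration":  (True,  True,  True,  True),
}

for name, answers in examples.items():
    verdict = "likely an AI system" if looks_like_ai_system(*answers) else "likely not"
    print(f"{name}: {verdict}")
```

As the session emphasized, real assessment happens case by case over the system's full lifecycle; a checklist like this only flags candidates for closer review.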

Session 10: Regulatory Sandbox in the AI Regulation

Speakers: Ewout van der Kleij (AP), Alany Reyes Pichardo (AP), Tim van den Belt (RDI)

What does the AI Act require? (Article 57)

  • Paragraph 1: Each member state must establish a sandbox before August 2026
  • Paragraph 5: The sandbox facilitates developing, validating, and placing AI systems on the market
  • Paragraph 6: The supervisor must provide guidance, supervision, and support

Four goals of the sandbox:

  1. Legal certainty – Clarity upfront about which rules apply.
  2. Good practices – Developing best practices and guidance for the market.
  3. Regulatory learning – Supervisors learn from innovative applications.
  4. Market access – Facilitating innovation and access to the European market.

NL Regulatory Sandbox: The Netherlands follows Article 57 with a general AI Regulatory Sandbox through a single point of contact. Supervisors help with legal and technical questions, but do not support development itself.

Relevant blog: Read more about AI Regulatory Sandboxes in the Netherlands.


Session 11: Generative AI

Presented by: Directorate for Algorithm Coordination (AP)

The AP presented their vision on generative AI: "Responsibly Forward". The session covered the technology, AI Regulation rules for GPAI models, the Code of Practice, and transparency requirements.

Statistic from the session

77% of Dutch people expect generative AI to make their work easier and more enjoyable.

Applications and trends of generative AI:

  • AI agents as basis for image and sound
  • Social actor and 24/7 personal assistant
  • Researcher and search engine
  • Coding and automation

GPAI obligations (from August 2025):

| Obligation | What does it entail? | For whom? |
| --- | --- | --- |
| Technical documentation | Description of capabilities, limitations, and risks | All GPAI providers |
| Training data summary | Public overview of training data used | All GPAI providers |
| Copyright policy | Respect for copyrights, opt-out mechanism | All GPAI providers |
| Systemic risk evaluation | Extensive testing, red teaming, incident reporting | High-impact models only |
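These obligations split on a single condition: whether the model is classified as high-impact (systemic risk). A minimal Python sketch of that split; the function name and list wording are illustrative, not drawn from any official tooling:

```python
# Sketch: GPAI obligations depend on whether a model carries systemic
# risk. Every provider gets the base set; high-impact models add one more.

def gpai_obligations(systemic_risk: bool) -> list[str]:
    obligations = [
        "technical documentation (capabilities, limitations, risks)",
        "public training data summary",
        "copyright policy with opt-out mechanism",
    ]
    if systemic_risk:
        obligations.append(
            "systemic risk evaluation (testing, red teaming, incident reporting)"
        )
    return obligations

print(len(gpai_obligations(False)), len(gpai_obligations(True)))
# → 3 4
```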

Note for deployers: If you integrate ChatGPT, Claude, or other GPAI models into your own products, you also have obligations. Think about transparency to end users and marking AI-generated content.

Deep dive: See our GPAI Guide for a complete explanation.


International Cooperation: UNESCO Toolkit

A special highlight was the presentation of the UNESCO toolkit "Supervising AI by Competent Authorities." This practical guide, co-developed by RDI, helps supervisors worldwide set up AI oversight.

"Strong international cooperation is indispensable. As supervisors, we must 'speak with one voice' because much AI is not limited to one sector or one country."

RDI

RDI plays a leading role as chair of the European working group of supervisors and was involved in establishing the Global Network of AI Supervision (GNAIS) in Bangkok.


Conclusion: Together at the Helm

The AI Supervision Congress 2025 made one thing clear: implementing the AI Act is a joint effort. Supervisors, businesses, experts, and citizens must work together to keep AI safe and innovative.

The 5 core messages of the congress:

  1. Partner, not police – Supervision wants to collaborate, not just enforce
  2. Context is everything – The same AI technology requires different rules depending on application
  3. Sandboxes offer opportunities – Innovating with guidance becomes possible
  4. International harmonization – The Netherlands plays a leading role in Europe
  5. Fundamental rights central – Technical and ethical supervision go hand in hand

Download All Presentations

All presentations that may be shared publicly are available in our Knowledge Base:

📥 AI Supervision Congress 2025 - All Documents

Download all keynotes and session presentations directly from our Knowledge Base.

View all presentations →


Sources

Dutch Digital Infrastructure Agency: AI Supervision Congress 2025 - Report (December 2025)

🎯 Need help with AI compliance? Schedule a free consultation to discuss how your organization implements the AI Act.