Source: This article is based on session 11 "Generative AI" from the AI Supervision Congress 2025, presented by the Directorate for the Coordination of Algorithms of the Dutch Data Protection Authority.
The question many organizations forget to ask
77% of Dutch people expect generative AI to make their work easier and more enjoyable. Chances are your organization already uses ChatGPT, Claude, Gemini, or Copilot. But did you know that, as a user of these tools, you also have obligations under the AI Act?
During the AI Supervision Congress, the Dutch Data Protection Authority (AP) made one thing clear: the AI Act isn't just about the makers of AI models, such as OpenAI or Anthropic. Downstream providers (companies that integrate genAI into their products) and deployers (organizations that use genAI) have obligations too.
First understand: Model vs. System
The crucial distinction
The AI Act distinguishes between:
- GPAI model: The underlying AI model (e.g., GPT-4, Claude 3)
- AI system: The application using the model (e.g., a chatbot you build with the GPT-4 API)
This distinction determines which rules apply to you and who supervises you.
The definition from the law:
"General-purpose AI model": an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks...
Practically this means: Both GPAI model rules and system rules can apply to generative AI. It depends on your role in the chain.
Who has which obligations?
| Role | Example | Obligations | Supervised by |
|---|---|---|---|
| GPAI model provider | OpenAI, Anthropic, Google | Documentation, copyright, training data summary | AI Office (EU) |
| Downstream provider | Company building chatbot with GPT-4 API | High-risk requirements (if applicable), transparency | National supervisor (DPA/RDI) |
| Deployer | Organization using ChatGPT internally | Transparency to users, deepfake labeling | National supervisor |
Transparency obligations: What must you disclose?
Article 50 of the AI Act contains specific transparency obligations for AI systems that interact with people or generate content.
1. Chatbots: People must know they're talking to AI
Obligation for providers: If you offer an AI system intended to interact with people, you must inform those people that they're interacting with AI. Think of customer service chatbots, virtual assistants, or AI-based advisors.
Examples where this applies:
- Customer service chatbot on your website
- AI assistant in your app
- Automated phone systems with AI
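To make this concrete, here is a minimal sketch in Python of a chat session wrapper that discloses AI involvement in its first reply. The class name, backend interface, and disclosure wording are illustrative assumptions; the AI Act requires that users are informed, not any particular phrasing.

```python
# Minimal sketch of a chatbot session that discloses AI involvement up front.
# The message text is illustrative, not prescribed by the AI Act.

class DisclosingChatSession:
    """Wraps a chat backend and prepends an AI disclosure to the first reply."""

    DISCLOSURE = (
        "You are chatting with an AI assistant. "
        "A human colleague can take over on request."
    )

    def __init__(self, backend):
        self.backend = backend          # any callable: prompt -> reply text
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self.backend(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{self.DISCLOSURE}\n\n{answer}"
        return answer

# Usage with a dummy backend:
session = DisclosingChatSession(lambda prompt: f"Echo: {prompt}")
print(session.reply("What are your opening hours?"))
```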
2. Synthetic content: Mark as AI-generated
AI systems that generate synthetic audio, image, video, or text content must ensure that the output is marked in a machine-readable format as artificially generated or manipulated.
This is relevant for:
- AI-generated images for marketing
- Automated social media content
- Voice-overs made with AI
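As an illustration of machine-readable marking, the sketch below embeds an "AI-generated" marker in PNG metadata using Pillow. The metadata keys are illustrative assumptions; in practice you would typically rely on an interoperable provenance standard such as C2PA rather than ad-hoc text chunks.

```python
# A minimal sketch of machine-readable marking, assuming Pillow is installed
# (pip install Pillow). The metadata keys below are illustrative assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str, model_name: str) -> None:
    """Embed an 'AI-generated' marker as PNG text metadata and save."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # illustrative key
    meta.add_text("generator", model_name)  # illustrative key
    img.save(path, pnginfo=meta)

# Usage: mark a placeholder image as AI-generated.
image = Image.new("RGB", (512, 512), "white")
save_with_ai_marker(image, "marketing_visual.png", "example-image-model")

# Reading the marker back:
print(Image.open("marketing_visual.png").text.get("ai_generated"))  # "true"
```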
3. Deepfakes: Explicitly label
As a deployer, you must explicitly mark deepfakes as AI-generated. This applies to image, audio, or video content that has been created or edited to resemble a real person.
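A hedged sketch of what explicit labeling could look like for still images (or individual video frames): a visible banner stamped onto the content with Pillow. Wording and placement are assumptions; the AI Act requires a clear disclosure but prescribes no specific styling.

```python
# A minimal sketch of an explicit, visible deepfake label, assuming Pillow.
# The label text and placement are illustrative assumptions.
from PIL import Image, ImageDraw

def add_visible_ai_label(img: Image.Image, text: str = "AI-GENERATED") -> Image.Image:
    """Stamp a visible disclosure banner along the bottom of the image."""
    labeled = img.copy()
    draw = ImageDraw.Draw(labeled)
    # Simple banner: dark box with light text, sized relative to the image.
    box_height = max(24, labeled.height // 12)
    draw.rectangle(
        [0, labeled.height - box_height, labeled.width, labeled.height],
        fill=(0, 0, 0),
    )
    draw.text((10, labeled.height - box_height + 4), text, fill=(255, 255, 255))
    return labeled

# Usage on a placeholder frame; for video, apply per frame or via an overlay track.
frame = Image.new("RGB", (640, 360), "gray")
add_visible_ai_label(frame).save("labeled_frame.png")
```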
The supervision structure: Who supervises whom?
During the congress, it became clear that supervision of generative AI is a collaboration between European and national supervisors.
AI Office (EU)
Supervises: GPAI models (OpenAI, Anthropic, etc.)
Powers: Documentation requirements, incident reporting, systemic risk evaluation
National supervisors (DPA/RDI)
Supervises: AI systems in which GPAI is integrated
Powers: Prohibited practices, high-risk requirements, transparency
From the congress: "While the task of supervising GPAI models lies with the AI Office, supervision of AI systems in which these models are integrated will in many cases fall to national market surveillance authorities. Supervision of GPAI will in practice therefore be a matter for both the AI Office and national supervisors."
How does the collaboration work?
- Information requests: National supervisors can request documentation via the AI Office
- Investigation requests: National supervisors can ask the AI Office to take action
- Complaints: Downstream providers can file complaints about GPAI models
The Dutch DPA vision: "Responsibly Forward"
During the congress, the AP presented its vision on generative AI, titled "Responsibly Forward" (Verantwoord Vooruit). It offers insight into how supervision will develop.
Principles for a desired future
| Principle | What it means |
|---|---|
| European digital autonomy | Stimulating EU providers of genAI |
| Knowledge and resilience | Promoting AI literacy |
| Democratic governance | Expertise for parliamentary representatives |
| Ability to correct | Correction methods through the AI chain |
| Transparent and insightful | Transparency to users |
| Systems in controlled management | Use of open-weight models recommended |
Practical applications: what will you encounter?
AI as personal assistant
24/7 available assistants for employees or customers. Note: transparency obligation!
AI for research and search
GenAI answering search queries. Verification of output remains essential.
AI for image and sound
AI-generated images, videos, or audio. Marking required.
AI for coding and automation
Copilot-like tools. Less strict requirements, but AI literacy still needed.
Checklist: What should your organization arrange?
If you use genAI (deployer)
- [ ] Inform users they're interacting with AI (chatbots)
- [ ] Label deepfakes explicitly as AI-generated
- [ ] Train employees in AI literacy
- [ ] Monitor risks of the AI systems you use (see the register sketch below)
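One way to keep this monitoring manageable is an internal register of the AI systems you deploy. The sketch below is a minimal, hypothetical schema; the field names are illustrative assumptions, not a format mandated by the AI Act. It derives the applicable Article 50 transparency duties per system.

```python
# A minimal sketch of an internal AI-system register for deployers.
# Field names are illustrative assumptions, not an AI Act-mandated schema.
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    name: str                    # e.g. "customer service chatbot"
    vendor: str                  # e.g. "OpenAI (via API)"
    role: str                    # "deployer" or "downstream provider"
    interacts_with_people: bool  # triggers chatbot disclosure (Art. 50)
    generates_content: bool      # triggers machine-readable marking
    known_risks: list[str] = field(default_factory=list)

def transparency_duties(entry: AISystemEntry) -> list[str]:
    """Derive which Article 50 duties apply to this entry."""
    duties = []
    if entry.interacts_with_people:
        duties.append("inform users they are interacting with AI")
    if entry.generates_content:
        duties.append("mark output as AI-generated (machine-readable)")
    return duties

register = [
    AISystemEntry("support chatbot", "OpenAI (via API)", "deployer",
                  interacts_with_people=True, generates_content=True,
                  known_risks=["hallucinated answers"]),
]
for entry in register:
    print(entry.name, "->", transparency_duties(entry))
```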
If you integrate genAI in your product (downstream provider)
- [ ] Check if your system is high-risk (Annex III)
- [ ] Mark synthetic output in machine-readable format
- [ ] Request documentation from your GPAI vendor
- [ ] Conformity assessment if high-risk
Contact the Dutch DPA about generative AI
GenAI desk: The AP has opened a dedicated desk for questions about generative AI:
📧 genai-loket@autoriteitpersoonsgegevens.nl
This desk is intended for organizations with questions about applying the GDPR and the AI Act to generative AI.
Conclusion: Know your role in the chain
The AI Act creates shared responsibility throughout the entire AI value chain. Whether you build a GPAI model, integrate one into your product, or simply use ChatGPT in your organization: there are obligations you must comply with.
Three key points:
- Model vs. system: understand your role in the chain
- Transparency is key: inform users they're talking to AI
- Supervision is layered: the AI Office for models, national supervisors for systems
Organizations that map their obligations now will be better prepared for 2 August 2026, when the high-risk requirements take full effect.
Further reading
- The Code of Practice for general-purpose AI โ In-depth analysis of the Code of Practice
- AI Supervision Congress 2025: All insights โ Complete overview
- Transparency requirements for AI content (Article 50) โ Practical implementation
🎯 Need training on generative AI? Schedule a call to discuss how your team can work responsibly with genAI within the AI Act.