Responsible AI Platform

Generative AI and the AI Act: What do you need to arrange when using ChatGPT or Claude?


Source: This article is based on session 11 "Generative AI" from the AI Supervision Congress 2025, presented by the Directorate for the Coordination of Algorithms of the Dutch Data Protection Authority.

The question many organizations forget to ask

77% of Dutch people expect generative AI to make their work easier and more enjoyable. Chances are your organization already uses ChatGPT, Claude, Gemini, or Copilot. But did you know that, as a user of these tools, you also have obligations under the AI Act?

During the AI Supervision Congress, the Dutch Data Protection Authority (AP) made clear: the AI Act isn't just about the makers of AI models like OpenAI or Anthropic. Downstream providers (companies that integrate genAI in their products) and deployers (organizations that use genAI) also have obligations.


First understand: Model vs. System

The crucial distinction

The AI Act distinguishes between:

  • GPAI model: The underlying AI model (e.g., GPT-4, Claude 3)
  • AI system: The application using the model (e.g., a chatbot you build with the GPT-4 API)

This distinction determines which rules apply to you and who supervises you.

The definition from the law:

"General-purpose AI model": an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks...

In practice this means: both the GPAI model rules and the AI system rules can apply to generative AI. Which ones apply depends on your role in the chain.


Who has which obligations?

| Role | Example | Obligations | Supervised by |
|---|---|---|---|
| GPAI model provider | OpenAI, Anthropic, Google | Documentation, copyright, training data summary | AI Office (EU) |
| Downstream provider | Company building a chatbot with the GPT-4 API | High-risk requirements (if applicable), transparency | National supervisor (DPA/RDI) |
| Deployer | Organization using ChatGPT internally | Transparency to users, deepfake labeling | National supervisor |

Transparency obligations: What must you disclose?

Article 50 of the AI Act contains specific transparency obligations for AI systems that interact with people or generate content.

1. Chatbots: People must know they're talking to AI

Obligation for providers: If you offer an AI system intended to interact with people, you must inform those persons that they are interacting with AI. Think of customer service chatbots, virtual assistants, or AI-based advisors; a minimal implementation sketch follows the list below.

Examples where this applies:

  • Customer service chatbot on your website
  • AI assistant in your app
  • Automated phone systems with AI
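
How this looks depends on your stack, but the core is to disclose before or together with the first model answer. Below is a minimal Python sketch of a deployer-side wrapper; the function name, disclosure wording, and session handling are illustrative assumptions, not requirements from the Act.

```python
# Minimal sketch: prepend an AI disclosure to the first reply of a chat
# session, so users know they are interacting with AI (Article 50 AI Act).
# The wording and session logic are illustrative, not prescribed by law.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human employee. "
    "For binding advice, please contact our staff."
)

def build_reply(session_history: list[str], model_answer: str) -> str:
    """Return the chatbot reply, adding the disclosure on first contact."""
    if not session_history:  # first message of this session
        return f"{AI_DISCLOSURE}\n\n{model_answer}"
    return model_answer

if __name__ == "__main__":
    print(build_reply([], "Our opening hours are 9:00 to 17:00."))
```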

2. Synthetic content: Mark as AI-generated

AI systems that generate synthetic audio, image, video, or text content must ensure that the output is marked as AI-generated in a machine-readable format; a sketch of one way to do this follows the list below.

This is relevant for:

  • AI-generated images for marketing
  • Automated social media content
  • Voice-overs made with AI
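
One possible way to satisfy the machine-readable requirement is to embed a marker in the generated file's metadata. The sketch below uses the Pillow library to write text chunks into a PNG; the key names and generator string are hypothetical conventions, and production pipelines would more likely use an emerging standard such as C2PA content credentials.

```python
# Minimal sketch: embed an "AI-generated" marker as PNG metadata.
# Requires Pillow (pip install Pillow). Key names are hypothetical;
# real pipelines would typically use a standard like C2PA.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")            # machine-readable flag
    meta.add_text("generator", "example-genai-v1")   # hypothetical tool id
    image.save(path, pnginfo=meta)

if __name__ == "__main__":
    img = Image.new("RGB", (640, 360), "steelblue")  # stand-in for genAI output
    save_with_ai_marker(img, "banner.png")
    print(Image.open("banner.png").text["ai_generated"])  # -> "true"
```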

3. Deepfakes: Explicitly label

As a deployer, you must explicitly mark deepfakes as AI-generated. This applies to image, audio, or video content that has been generated or manipulated to resemble a real person.
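
For deepfakes, embedded metadata alone is not enough: the disclosure must also be clear to the person viewing the content. Below is a minimal Pillow sketch that stamps a visible label onto an image; the placement, size, and wording are illustrative choices, not prescribed by the Act.

```python
# Minimal sketch: stamp a visible "AI-generated" label onto an image,
# in addition to any machine-readable metadata. Styling is illustrative.
from PIL import Image, ImageDraw

def add_visible_ai_label(image: Image.Image, text: str = "AI-generated") -> Image.Image:
    labeled = image.copy()
    draw = ImageDraw.Draw(labeled)
    # Black banner in the bottom-left corner with white text (default font).
    draw.rectangle([0, labeled.height - 24, 170, labeled.height], fill="black")
    draw.text((8, labeled.height - 18), text, fill="white")
    return labeled

if __name__ == "__main__":
    stamped = add_visible_ai_label(Image.new("RGB", (640, 360), "gray"))
    stamped.save("labeled.png")
```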


The supervision structure: Who supervises whom?

During the congress, it became clear that supervision of generative AI is a collaboration between European and national supervisors.

1. AI Office (EU)

Supervises: GPAI models (OpenAI, Anthropic, etc.)

Powers: documentation requirements, incident reporting, systemic risk evaluation

2. National supervisors (DPA/RDI)

Supervise: AI systems in which GPAI is integrated

Powers: prohibited practices, high-risk requirements, transparency

From the congress: "While the task of supervising GPAI models lies with the AI Office, supervision of AI systems in which these models are integrated will in many cases fall to national market surveillance authorities. Supervision of GPAI will in practice therefore be a matter for both the AI Office and national supervisors."

How does the collaboration work?

  • Information requests: National supervisors can request documentation via the AI Office
  • Investigation requests: National supervisors can ask the AI Office to take action
  • Complaints: Downstream providers can file complaints about GPAI models

The Dutch DPA vision: "Responsibly Forward"

During the congress, the AP presented its vision on generative AI, called "Responsibly Forward" (Verantwoord Vooruit). It provides insight into how supervision will develop.

Principles for a desired future

| Characteristic | What does this mean? |
|---|---|
| European digital autonomy | Stimulating EU providers of genAI |
| Knowledge and resilience | Promoting AI literacy |
| Democratic governance | Expertise for parliamentary representatives |
| Ability to correct | Correction methods through the AI chain |
| Transparent and insightful | Transparency to users |
| Systems in controlled management | Use of open-weight models recommended |

Practical applications: what will you encounter?

1. AI as personal assistant: 24/7 available assistants for employees or customers. Note: the transparency obligation applies!

2. AI for research and search: genAI answering search queries. Verification of output remains essential.

3. AI for image and sound: AI-generated images, videos, or audio. Marking required.

4. AI for coding and automation: Copilot-like tools. Less strict requirements, but AI literacy still needed.


Checklist: What should your organization arrange?

If you use genAI (deployer)

  • [ ] Inform users they're interacting with AI (chatbots)
  • [ ] Label deepfakes explicitly as AI-generated
  • [ ] Train employees in AI literacy
  • [ ] Monitor risks of the AI systems you use

If you integrate genAI in your product (downstream provider)

  • [ ] Check if your system is high-risk (Annex III); a first-pass screening sketch follows this checklist
  • [ ] Mark synthetic output in machine-readable format
  • [ ] Request documentation from your GPAI vendor
  • [ ] Conformity assessment if high-risk
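
For the first item, a rough screening can help decide whether a full legal assessment is needed. The sketch below maps use-case tags to an abbreviated list of Annex III areas; the tags and the shortened area list are illustrative assumptions, and the full Annex III text (and legal counsel) remain authoritative.

```python
# Minimal sketch: first-pass screening against (abbreviated) Annex III
# high-risk areas. Tags are hypothetical; this does NOT replace a legal
# assessment against the full Annex III text.
ANNEX_III_AREAS = {
    "biometrics": "Biometric identification and categorisation",
    "education": "Education and vocational training",
    "employment": "Employment and worker management",
    "essential_services": "Access to essential private and public services",
    "law_enforcement": "Law enforcement",
    "migration": "Migration, asylum and border control",
    "justice": "Administration of justice and democratic processes",
}

def screen_use_case(tags: set[str]) -> list[str]:
    """Return the Annex III areas a use case may touch (triggers review)."""
    return [area for tag, area in ANNEX_III_AREAS.items() if tag in tags]

if __name__ == "__main__":
    # Example: a genAI chatbot that pre-screens job applicants.
    print(screen_use_case({"employment", "customer_service"}))
    # -> ['Employment and worker management'] => do a full assessment
```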

Contact the Dutch DPA about generative AI

GenAI desk opened: The AP has opened a special desk for questions about generative AI:

📧 genai-loket@autoriteitpersoonsgegevens.nl

This desk is intended for organizations with questions about applying the GDPR and the AI Act to generative AI.


Conclusion: Know your role in the chain

The AI Act creates shared responsibility throughout the entire AI value chain. Whether you make a GPAI model, integrate it into your product, or simply use ChatGPT in your organization, there are obligations you must comply with.

Three key points:

  1. Model ≠ system: understand your role in the chain
  2. Transparency is key: inform users they're talking to AI
  3. Supervision is layered: AI Office for models, national supervisors for systems

Organizations that map their obligations now are better prepared for August 2026, when the high-risk requirements become fully effective.


Sources

Dutch Data Protection Authority, "Session 11: Generative AI", AI Supervision Congress 2025 (December 2025).
Dutch Data Protection Authority, "Responsibly Forward" (Verantwoord Vooruit): vision on generative AI (2025).

🎯 Need training on generative AI? Schedule a call to discuss how your team can work responsibly with genAI within the AI Act.