Responsible AI Platform
Article 50 of 113

Article 50: Transparency obligations for providers and deployers of certain AI systems

EU Official · Applies from 2 August 2026
Title IV: Transparency Obligations · Entry into force: 2026-08-02

Official text


Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.


🎯 What does this mean for you?

🏭 Provider
Ensure your AI system clearly indicates that users are interacting with AI. For chatbots: display this at first contact. For generated content: provide metadata or labels indicating AI generation.
🏢 Deployer
Inform your customers and employees when they interact with AI. This applies to chatbots, automated decision-making, and AI-generated content. Deepfakes must always be labelled.
🏪 SME / Startup
Do you use an AI chatbot on your website? Ensure a clear notice: 'You are speaking with an AI assistant'. This is mandatory and easy to implement.
🏛️ Public Sector
Citizens have the right to know when they interact with AI. This applies to chatbots on government websites, automated decisions, and AI-assisted enforcement.

✅ Compliance Checklist

  • AI interaction is clearly communicated
  • AI-generated content is labelled
  • Deepfakes are marked as artificial
  • Emotion recognition is disclosed to affected persons


⚖️ Related Enforcement

  • Italy fines OpenAI €15 million for ChatGPT privacy violations (Garante per la protezione dei dati personali, Dec 2024)
  • Italy investigates ChatGPT: temporary ban lifted after measures (Garante per la protezione dei dati personali, May 2023)

Frequently asked questions

What transparency obligations apply to AI?
Article 50 requires that users are informed when interacting with an AI system (such as chatbots), when content is AI-generated or manipulated (such as deepfakes), and when emotion recognition or biometric categorisation is applied.
Do I need to disclose that my chatbot is AI?
Yes, Article 50 requires that natural persons are informed they are interacting with an AI system, unless this is obvious from the circumstances.
Do SMEs also need to comply with Article 50 of the AI Act?
Article 50 of the AI Act does not provide a general exemption for SMEs. However, the AI Act includes supportive measures and potentially lighter obligations for small and medium-sized enterprises, depending on their role in the AI value chain.
How does Article 50 of the AI Act relate to the GDPR?
Article 50 of the AI Act complements the GDPR. While the GDPR protects personal data, the AI Act focuses on the safety and trustworthiness of AI systems. Organisations must comply with both regulations when their AI system processes personal data.
What are the deadlines for Article 50 of the AI Act?
The AI Act follows a phased implementation: prohibited AI practices apply from February 2025, obligations for high-risk AI systems from August 2026, and other provisions take effect gradually. The transparency obligations of Article 50 apply from 2 August 2026.
Does Article 50 of the AI Act also apply to AI systems I purchase?
Yes, Article 50 of the AI Act may also be relevant when you purchase AI systems. As a deployer, you have your own obligations under the AI Act, regardless of whether you developed the system yourself or purchased it from a provider.
What is the difference between provider and deployer under Article 50 of the AI Act?
Under Article 50 of the AI Act, the provider is the entity that develops or places the AI system on the market, while the deployer is the entity that uses the system under its own authority. Both roles carry different obligations.
What documentation does Article 50 of the AI Act require?
Article 50 of the AI Act requires that relevant documentation is maintained as part of the compliance process. This may include technical documentation, instructions for use, logs or declarations of conformity, depending on the classification of the AI system.
Do I need to label deepfakes under the AI Act?
Yes, Article 50(4) requires deployers publishing deepfakes (AI-generated or manipulated image, audio or video content) to clearly disclose that the content is artificially generated or manipulated. This applies to both visible labels and machine-readable marking.
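
As an illustrative sketch of machine-readable marking (not a compliant implementation): real deployments would use an established provenance standard such as C2PA-style embedded manifests, but the idea of attaching a machine-readable "artificially generated" declaration can be shown with a simple JSON sidecar. The file names, field names, and toy format below are all assumptions for illustration.

```python
# Illustrative toy format only: declare a media file as AI-generated via
# a JSON sidecar. Production systems would embed provenance metadata
# using an established standard (e.g. C2PA) instead.
import json
import os
import tempfile

def write_ai_marker(content_path: str, generator: str) -> str:
    """Write a sidecar JSON file declaring the content as AI-generated."""
    marker = {
        "content": os.path.basename(content_path),
        "artificially_generated": True,
        "generator": generator,
    }
    sidecar_path = content_path + ".ai-label.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(marker, f, indent=2)
    return sidecar_path

# Demo: label a placeholder generated video clip.
tmp_dir = tempfile.mkdtemp()
clip_path = os.path.join(tmp_dir, "clip.mp4")
open(clip_path, "wb").close()  # stand-in for real generated content
sidecar = write_ai_marker(clip_path, "example-model-v1")
```

A sidecar like this covers only the machine-readable side; Article 50(4) also expects a disclosure visible to the people viewing the content.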
When do the transparency obligations of Article 50 take effect?
The transparency obligations of Article 50 take effect on 2 August 2026, together with most other provisions of the AI Act. The AI Office has published a Code of Practice to help organisations with implementation.
Do I need to tell users they are talking to a chatbot?
Yes, Article 50(1) requires providers of AI systems intended for direct interaction with persons (such as chatbots) to ensure users are informed they are communicating with an AI system, unless this is obvious from the circumstances.
Does the labelling obligation also apply to AI-generated text on news websites?
Yes, Article 50(4) states that AI-generated or manipulated text published to inform the public on matters of public interest must be labelled as artificially generated. This is specifically relevant for news websites and media organisations.
