Most Article 50 conversations start in the wrong place. Teams ask whether they need a watermark, a label, or a disclaimer. That is too narrow. The real first question is simpler: are you the provider, the deployer, or both?
That distinction decides who has to make outputs detectable, who has to disclose deepfakes, who has to inform users they are talking to AI, and who can rely on editorial control as an exception.
That is exactly where the practical value of Article 50 lies. It splits transparency duties between providers and deployers of certain AI systems. If you miss that split, compliance turns into finger-pointing. The vendor says the customer should label the content. The customer says the vendor should have solved it in the product. Neither side can prove much when regulators ask questions.
What Article 50 actually says
The legal text of Article 50 has seven paragraphs. Read together, they form four practical buckets.
1. AI systems interacting with people
Article 50(1) is a provider duty. Providers must design AI systems intended to interact directly with natural persons so that the people concerned are informed that they are interacting with an AI system, unless that is obvious in the circumstances.
This is the chatbot rule, but not only for chatbots. Voice assistants, customer service agents, support bots, intake agents, and similar interfaces all sit in this territory.
The key practical question is not whether the interface uses AI in the background. The question is whether a reasonably well-informed, observant, and circumspect person would understand they are interacting with AI. If not, disclosure is required.
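How that plays out in practice is mostly an interface design question. As a minimal sketch, assuming a simple support-bot wrapper (the notice wording and class names are illustrative, not prescribed by the Act), the disclosure can be wired into the first response:

```python
# Minimal sketch of an Article 50(1)-style disclosure, assuming a
# simple support-bot wrapper. The notice wording and class names are
# illustrative; the Act prescribes the outcome, not the implementation.

AI_NOTICE = "You are chatting with an AI assistant, not a human agent."

class SupportChat:
    def __init__(self) -> None:
        self._notice_shown = False

    def reply(self, user_message: str) -> list[str]:
        messages = []
        if not self._notice_shown:
            # Disclose at the first interaction, before the first answer.
            messages.append(AI_NOTICE)
            self._notice_shown = True
        messages.append(self._generate_answer(user_message))
        return messages

    def _generate_answer(self, user_message: str) -> str:
        # Stand-in for the actual model call.
        return f"(model response to: {user_message!r})"

chat = SupportChat()
print(chat.reply("Where is my order?"))  # first reply carries the notice
print(chat.reply("Thanks!"))             # notice not repeated
```

The point is structural: the notice fires before the first substantive answer, so compliance does not depend on someone remembering to add a banner later.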
2. Synthetic audio, image, video, and text
Article 50(2) is also mainly a provider duty. Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video, or text must ensure the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.
That does not necessarily mean one specific technical solution. The legal standard is broader: the solution must be effective, interoperable, robust, and reliable as far as technically feasible, taking into account the type of content, cost of implementation, and state of the art.
That is why Article 50 increasingly needs to be read together with the emerging Code of Practice on AI content transparency and the wider GPAI Code of Practice discussion.
The paragraph also contains two important exceptions. The obligation does not apply where the system performs an assistive function for standard editing or does not substantially alter the deployer’s input data or its semantics. And it does not apply where systems are lawfully used for criminal offence detection, prevention, investigation, or prosecution.
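What machine-readable marking can look like is easier to grasp with a toy example. Production systems would typically lean on recognised provenance standards such as C2PA manifests or robust watermarking; the sketch below merely tags a generated PNG with text chunks via Pillow, and the key names are invented for illustration:

```python
# Toy sketch only: real deployments should use recognised provenance
# standards (for example C2PA manifests) or robust watermarking rather
# than ad hoc metadata. The key names below are invented for illustration.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marking(img: Image.Image, path: str, model_id: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # machine-readable flag
    meta.add_text("generator", model_id)   # which system produced the output
    img.save(path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    with Image.open(path) as im:
        return im.text.get("ai_generated") == "true"  # reads PNG text chunks

img = Image.new("RGB", (64, 64), "white")  # stand-in for a generated image
save_with_ai_marking(img, "output.png", "example-image-model-v1")
print(is_marked_ai_generated("output.png"))  # True
```

Note that plain metadata like this is trivially stripped, which is precisely why the legal standard asks for solutions that are robust and reliable as far as technically feasible.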
3. Emotion recognition and biometric categorisation
Article 50(3) shifts the burden to deployers. Deployers of emotion recognition systems or biometric categorisation systems must inform the people exposed to the system that it is operating.
This matters because Article 50 is not only about generative AI content. It also covers transparency toward people who are subject to certain AI systems in real-world settings.
If an employer, school, public authority, or venue operator uses emotion recognition or biometric categorisation, the deployer cannot hide behind the provider’s documentation. The deployer has its own live transparency duty.
4. Deepfakes and public-interest text
Article 50(4) is the paragraph most organizations care about, and also the one they misread most often.
First, deployers of AI systems that generate or manipulate image, audio, or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated.
Second, deployers of AI systems that generate or manipulate text published for the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated or manipulated.
That second sentence is narrower than many people assume. It does not say that every AI-assisted text needs a label. It focuses on text published to inform the public on matters of public interest.
Then comes the editorial control exception. The text disclosure duty does not apply where the AI-generated content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for the publication.
That means Article 50 is not banning AI-assisted journalism, policy communication, or public-interest publishing. But it does reward organizations that can prove real editorial control instead of pretending the prompt chain was “review.”
Provider versus deployer, the practical split
This is the simplest useful way to think about Article 50.
Provider responsibilities
If you are the provider, you should focus on what the system technically enables.
- For direct interaction systems, inform people they are dealing with AI when that is not obvious.
- For synthetic content systems, make outputs machine-readable and detectable as AI-generated or manipulated.
- Build solutions that are effective, interoperable, robust, and reliable as far as technically feasible.
- Document limits and exceptions clearly for deployers.
If that documentation is thin, you are also creating downstream problems under Article 13, especially where deployers need evidence for procurement, governance, or Article 26 controls.
Deployer responsibilities
If you are the deployer, you should focus on what the organization actually publishes, shows, or exposes people to.
- If you use emotion recognition or biometric categorisation, inform the natural persons concerned.
- If you publish deepfake image, audio, or video, disclose that it was artificially generated or manipulated.
- If you publish AI-generated or AI-manipulated text to inform the public on matters of public interest, disclose it unless the editorial control exception applies.
- Make sure the information is clear, distinguishable, and accessible.
This is exactly why the generative AI deployer obligations guide matters. A deployer cannot solve Article 50 by procurement language alone. There must also be publication governance.
Three common Article 50 scenarios
Scenario 1, a vendor offers an image generator to enterprise customers
The vendor is the provider. Article 50(2) means the provider must ensure the generated outputs are detectable in a machine-readable way. If customers later publish those outputs as campaign material or public communications, those customers may still have deployer-side duties depending on the context.
Scenario 2, a municipality publishes an AI-generated explainer video
The municipality is the deployer. If the video constitutes a deepfake, that is, AI-generated or manipulated image, audio, or video content that could falsely appear authentic to viewers, disclosure is needed under Article 50(4). If the municipality is also using a third-party vendor, that vendor still has its own provider-side obligations under paragraph 2.
Scenario 3, a newsroom uses AI to draft a public-interest article
If the text is published with the purpose of informing the public on matters of public interest, Article 50(4) is relevant. But the disclosure duty for text may fall away if there has been real human review or editorial control and a natural or legal person holds editorial responsibility.
That exception is powerful, but only if the newsroom can actually show the editorial process. “A human looked at it quickly” is a weak defense.
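Showing the editorial process is, in practice, disciplined record-keeping. As a hypothetical sketch (the Act prescribes no schema, so every field name here is an assumption), a newsroom might log each review like this:

```python
# Hypothetical evidence record for the editorial control exception.
# The Act prescribes no schema; every field name here is an assumption.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EditorialReview:
    article_id: str
    reviewer: str            # the human who actually reviewed the draft
    responsible_editor: str  # person holding editorial responsibility
    reviewed_at: datetime
    changes_made: bool       # substantive edits, not just a glance
    notes: str = ""

review = EditorialReview(
    article_id="2026-03-budget-explainer",
    reviewer="jane.doe",
    responsible_editor="chief.editor",
    reviewed_at=datetime.now(timezone.utc),
    changes_made=True,
    notes="Rewrote the lead, verified quotes against source interviews.",
)
# Persist records like this so the review chain can be demonstrated later.
```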
Paragraphs 5 to 7 matter more than they look
Article 50(5) says the information required under paragraphs 1 to 4 must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. It also has to meet applicable accessibility requirements.
This kills a common lazy approach: hiding the disclosure in terms and conditions, footers, or metadata nobody sees. Article 50 wants the relevant person to receive the information in a visible and timely way.
Article 50(6) says these duties do not replace Chapter III duties or other transparency duties in Union or national law. So if your system is also high-risk, or if consumer protection, media law, platform rules, or sector law create extra transparency duties, Article 50 is not your ceiling.
Article 50(7) points toward codes of practice and possible Commission implementing acts. In other words, this area will get more detailed, not less. If you are working with GPAI vendors or building content workflows today, keep one eye on that guidance now rather than waiting for summer 2026.
What organizations should do now
The best Article 50 preparation is boring in a good way. It turns transparency into a normal control rather than a scramble at publication time.
- Map roles first. Decide when your organization is acting as provider, deployer, or both.
- Classify use cases. Separate interaction systems, synthetic content generation, biometric categorisation, emotion recognition, deepfake publication, and public-interest text. A triage sketch follows after this list.
- Write procurement requirements. If you buy models or tools, require machine-readable detectability and usable technical documentation.
- Build a publication rule. Decide who labels content, who checks exceptions, and who signs off on editorial control.
- Keep evidence. If you want to rely on the editorial control exception, prove the human review chain.
- Make disclosures readable. Article 50 cares about clarity, distinction, and accessibility, not legal poetry.
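To make the mapping and classification steps concrete, here is an illustrative first-pass triage helper. The role labels, use-case categories, and rules are simplified assumptions for internal triage, not legal advice:

```python
# Illustrative triage helper mapping a role and use case to candidate
# Article 50 duties. Categories and rules are simplified assumptions
# for a first pass, not legal advice.

def article50_duties(role: str, use_case: str,
                     editorial_control: bool = False) -> list[str]:
    duties: list[str] = []
    if role in ("provider", "both"):
        if use_case == "direct_interaction":
            duties.append("50(1): inform people they are interacting with AI")
        if use_case == "synthetic_content":
            duties.append("50(2): machine-readable, detectable marking")
    if role in ("deployer", "both"):
        if use_case in ("emotion_recognition", "biometric_categorisation"):
            duties.append("50(3): inform exposed natural persons")
        if use_case == "deepfake":
            duties.append("50(4): disclose artificial generation or manipulation")
        if use_case == "public_interest_text" and not editorial_control:
            duties.append("50(4): disclose AI-generated text")
    return duties

print(article50_duties("both", "synthetic_content"))
print(article50_duties("deployer", "public_interest_text",
                       editorial_control=True))  # [] thanks to the exception
```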
If your organization is still unclear about role allocation in the AI value chain, the risk assessment tool and the post on when you are a deployer of an AI agent are useful starting points.
Where teams usually get this wrong
The first mistake is collapsing provider and deployer duties into one vague “AI labeling” task.
The second mistake is thinking Article 50 is only about deepfakes. It also covers direct interaction systems, synthetic outputs more broadly, and deployers of emotion recognition or biometric categorisation.
The third mistake is over-labeling everything while under-governing the workflow. Labels do not fix missing role allocation.
The fourth mistake is assuming the editorial control exception applies automatically whenever a human touches the draft. It does not. Real review and real editorial responsibility must exist.
The fifth mistake is ignoring accessibility. Article 50 explicitly requires the information to conform to accessibility requirements.