Practical Implementation of Transparency Obligations for AI Content
Deadline Approaching: The transparency obligations from Article 50 of the AI Act will apply from August 2, 2026. The European Commission published the first draft of the Code of Practice on December 17, 2025. Feedback runs until January 23, 2026, followed by a second draft around mid-March 2026 and finalization toward June 2026.
Why Article 50 Is Organizationally More Difficult Than It Seems
Many organizations still see Article 50 as "just adding a label." In reality, it affects your entire chain: procurement, product development, marketing, communications, security, data governance, and even accessibility. The draft Code of Practice makes this visible because it's not just about an icon, but also about watermarking, metadata, provenance, detectors, logging, and preventing removal of markings.
Article 50 spans two worlds that often overlap:
Providers of Generative AI
Must ensure that outputs are detectable as AI-generated or manipulated. Think of machine-readable marking, watermarking, and detection mechanisms.
Deployers of Generative AI
Must in certain cases provide visible disclosure to the public, with exceptions such as editorial control for public interest text.
If you don't make this separation explicit, two typical problems appear: teams expect the vendor to "handle it" while your publication process still requires disclosure; or, conversely, you put labels everywhere but can't demonstrate that your technical detection and robustness are in order.
What Article 50 Requires, in Plain Language
The core: deepfakes and AI-generated text must in certain cases be recognizable as artificial or disclosed, and the information must be provided clearly, distinctly, and accessibly.
The Code of Practice addresses this by translating the obligations into two tracks:
| Track | For Whom | Measures |
|---|---|---|
| Marking and detection | Providers | Metadata, watermarks, provenance, detectors, APIs |
| Labeling and disclosure | Deployers | Consistently label deepfakes and public interest texts, with attention to exceptions, artistic context, and accessibility |
What's New and Operational in the First Draft Code of Practice
The draft explicitly chooses a layered approach. Not one technique, but several simultaneously, because each technique can be circumvented or has limitations per modality. This is reflected in the combination of metadata, invisible watermarks, and fingerprinting/logging.
1. Multiple Marking Techniques, Per Modality
Metadata
Linked to the moment of generation and digitally signed for integrity verification.
Imperceptible watermarking
Woven into the content itself; it must survive processing such as compression or re-encoding.
Fingerprinting or logging
A fallback, for example hashing for images or logging of generation events for text.
Provenance certificate
For content where embedding is difficult, so you can still prove origin.
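How the metadata and fingerprinting layers could fit together is easiest to see in code. The sketch below is a minimal illustration using Python's standard library, with an HMAC as a stand-in for a real digital signature (production provenance schemes such as C2PA use asymmetric signatures); all names, including `SIGNING_KEY`, are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of two marking layers: signed metadata linked to the
# moment of generation, plus a content hash as fingerprint fallback.
# The HMAC stands in for a real asymmetric signature; key handling
# would normally go through a KMS. All names are illustrative.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-managed-key"  # hypothetical key

def fingerprint(content: bytes) -> str:
    """Fallback marking: a content hash you can log at generation time."""
    return hashlib.sha256(content).hexdigest()

def signed_metadata(content: bytes, model_id: str) -> dict:
    """Metadata tied to the generation moment, with an integrity tag."""
    record = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "content_sha256": fingerprint(content),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["integrity_tag"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_metadata(record: dict) -> bool:
    """Recompute the tag to detect tampering with the metadata itself."""
    claimed = record.get("integrity_tag", "")
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "integrity_tag"},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```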
2. Detection for Third Parties: Not Just Internal
The draft expects providers to make an interface or detector available (for example an API or UI) with which users or other parties can verify whether content was generated or manipulated by their system.
Procurement tip: When purchasing a generative model, you can now require that a verification mechanism exists that supports your downstream use cases.
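To make that procurement requirement concrete, here is what such a verification interface could look like from the consuming side. This is a hypothetical client: the endpoint URL, authentication scheme, and response fields are all assumptions, since the draft does not prescribe a concrete API.

```python
# Hypothetical client for a vendor's verification endpoint, the kind of
# interface the draft expects providers to offer. URL, payload, and
# response shape are assumptions for illustration, not a real API.
import requests

VERIFY_URL = "https://vendor.example/api/v1/verify"  # hypothetical endpoint

def verify_content(content: bytes, api_key: str) -> dict:
    """Ask the provider whether this content carries its marking."""
    response = requests.post(
        VERIFY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"content": content},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated": bool, "confidence": float}
    return response.json()
```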
3. Reliability and Robustness as Measurable Topics
The draft doesn't just talk about "marking," but also about quality: false positives and false negatives, sample-based evaluation, robustness across distribution channels. This pushes Article 50 toward a testing and assurance discussion.
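A sample-based evaluation can be as simple as running a detector over a labeled sample and reporting both error rates. A minimal sketch, assuming a detector callable and your own labeled data set:

```python
# Sketch of a sample-based evaluation: run a detector over labeled
# samples and report false positive and false negative rates.
# `detector` is any callable returning True for "marked as AI-generated".
from typing import Callable, Iterable, Tuple

def evaluate_detector(
    detector: Callable[[bytes], bool],
    labeled_samples: Iterable[Tuple[bytes, bool]],  # (content, is_ai_generated)
) -> dict:
    fp = fn = ai_total = human_total = 0
    for content, is_ai in labeled_samples:
        predicted_ai = detector(content)
        if is_ai:
            ai_total += 1
            fn += not predicted_ai   # AI content the detector missed
        else:
            human_total += 1
            fp += predicted_ai       # human content wrongly flagged
    return {
        "false_positive_rate": fp / human_total if human_total else 0.0,
        "false_negative_rate": fn / ai_total if ai_total else 0.0,
        "sample_size": ai_total + human_total,
    }
```

Running this per distribution channel (original output, after social media re-compression, after export) turns the draft's robustness language into numbers you can track.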
4. Deployer Side: Taxonomy and Icon
For disclosure of deepfakes and public interest text, the draft proposes a common taxonomy and a common icon (a provisional placeholder, pending EU-wide standardization).
| Category | Meaning |
|---|---|
| Fully AI-generated | Entirely generated by AI |
| AI-assisted | Supported by AI, with a greater human role |
An Approach That Works: From Content Label to Control Framework
If you want to do this without chaos in 2026, it helps to treat Article 50 as a mini-control framework with three layers.
Layer 1: Classify Use Cases with an Article 50 Trigger
Don't start with tools, but with publication moments and interactions. Create a register with at minimum:
- What content is generated or manipulated (text, image, audio, video)?
- Is it published or shared externally?
- Is it intended to inform the public about matters of public interest, or is it marketing, HR, or internal communication?
- Is there human review and who bears editorial responsibility?
A simple "AI inventory" helps, but it only becomes useful for Article 50 once you link it to these triggers.
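One way to make the register concrete is a small data shape per publication moment. The sketch below is illustrative: the field names and the trigger heuristic are assumptions, and the edge cases remain a legal call, not a computed one.

```python
# Illustrative shape for the use-case register described above.
# The trigger logic is a coarse first-pass heuristic, not legal advice.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    TEXT = "text"
    IMAGE = "image"
    AUDIO = "audio"
    VIDEO = "video"

@dataclass
class UseCaseEntry:
    name: str
    modality: Modality
    published_externally: bool
    informs_public_interest: bool      # e.g. news, policy; not marketing/HR
    human_review: bool
    editorial_owner: str | None = None  # who bears editorial responsibility

    @property
    def article_50_trigger(self) -> bool:
        """Coarse flag for triage; legal review decides the edge cases."""
        return self.published_externally and (
            self.informs_public_interest or self.modality is not Modality.TEXT
        )
```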
Layer 2: Establish Technical Requirements in Procurement and Architecture
For providers or vendors, you can translate the draft into contractual requirements:
- Support for machine-readable marking (metadata, watermarking, provenance)
- A verification mechanism (detector or API) for your content stream
- Agreements on non-removal: not just technology, also policies and terms of use that prohibit removal of marks
Watch the Chain
This isn't just for "GenAI vendors." Tooling elsewhere in your chain can also destroy marks: social media compressors, video-edit pipelines, DAM systems, or export flows. Your architecture review should cover not just the model but also the distribution chain.
Layer 3: Build Disclosure Into Your Content and Publication Process
For deployers, disclosure is primarily a process question: where in your workflow is the label added, who decides on exceptions, and how do you prove there was editorial control?
Think of a standard "AI disclosure step" in:
- your CMS workflow (draft, review, publication)
- your social publishing tooling
- your video-edit pipeline
- your press and spokesperson process
The draft also emphasizes accessibility: disclosure must be understandable and, where necessary, accessible to people with disabilities (for example alt-text, captions, sufficient contrast). This is a concrete point where legal and UX truly need each other.
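Such a disclosure step can be enforced as a gate in the publication workflow, accessibility checks included. A minimal sketch; the `ContentItem` fields and the per-modality rules are assumptions you would tailor to your own CMS.

```python
# Sketch of a publication gate for a CMS workflow: block publication of
# AI-generated items that lack a disclosure label or, per modality, the
# accessibility fields that carry that disclosure. Illustrative shape.
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    modality: str                     # "text", "image", "audio", "video"
    ai_generated: bool
    disclosure_label: str = ""        # e.g. "Fully AI-generated"
    alt_text: str = ""                # images: disclosure readable by screen readers
    captions_available: bool = False  # audio/video: disclosure in captions

def publication_gate(item: ContentItem) -> list[str]:
    """Return blocking issues; an empty list means the item may publish."""
    issues = []
    if not item.ai_generated:
        return issues
    if not item.disclosure_label:
        issues.append("missing AI disclosure label")
    if item.modality == "image" and not item.alt_text:
        issues.append("disclosure not reflected in alt-text")
    if item.modality in ("audio", "video") and not item.captions_available:
        issues.append("disclosure not available in captions")
    return issues
```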
Three Scenarios You Can Test Tomorrow
Municipality publishes AI-generated campaign video
The video contains synthetic voice-over and manipulated images. Test:
- Does the output get a mark (watermark or metadata)?
- Does that mark remain intact after export and upload?
- Is there a visible disclosure icon or text in the publication context?
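The first two checks lend themselves to an automated test against your real export path. A pytest-style sketch, where `generate_video`, `export_pipeline`, and `has_mark` are placeholders (for example fixtures) for your own tooling and the vendor's verification mechanism:

```python
# Sketch of the first two checks as an automated test: generate, push
# the output through the real export pipeline, verify the mark on both
# versions. All three callables are placeholders for your own stack.

def test_mark_survives_export(generate_video, export_pipeline, has_mark):
    original = generate_video(prompt="campaign clip with synthetic voice-over")
    assert has_mark(original), "output carries no mark at generation time"

    exported = export_pipeline(original)  # e.g. re-encode + platform upload
    assert has_mark(exported), "mark lost during export/upload"
```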
News or educational organization uses GenAI for text about public debate
Here the exception comes into play: if there is demonstrable human review and editorial responsibility, the disclosure obligation may work out differently. The question becomes whether you can evidence editorial control via workflow logs, review steps, and role assignment.
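That evidence can be as simple as an append-only review log per publication. A minimal sketch with illustrative field names:

```python
# Sketch of an append-only audit record for the editorial-control
# exception: who reviewed what, when, in which role. Field names are
# illustrative; the point is reproducible evidence per publication.
import json
from datetime import datetime, timezone

def log_editorial_review(log_path: str, content_id: str,
                         reviewer: str, role: str, decision: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_id": content_id,
        "reviewer": reviewer,
        "role": role,          # e.g. "editor-in-chief"
        "decision": decision,  # e.g. "approved with edits"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # JSON Lines, append-only
```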
Corporate communications uses AI to retouch photos
The draft taxonomy explicitly mentions examples such as object removal and context modification. Here you learn whether your organization has a practical criterion: when is something "AI-assisted" with disclosure impact, and when is it routine editing with no risk of misleading the public?
What You Want to Have Demonstrably Before Summer 2026
If you want one benchmark for "are we ready," it's this:
Use case overview
Publication and interaction use cases with Article 50 triggers (not a loose tool list).
Supplier requirements
For marking and detection, including testable criteria.
Verification path
The ability to demonstrate, internally or via a vendor API, whether content originates from your GenAI stream.
Disclosure workflow
With ownership: who decides, who places, who checks exceptions.
Audit trail
For editorial control where you want to use that exception.
UX guidelines
For clear and accessible disclosure.
The direction is clear: Article 50 becomes a combination of technology and publication governance. Organizations that set this up early won't need to improvise with loose labels in 2026, but can show that transparency is part of their normal process.
Relationship with the Earlier Draft Code of Practice
This article builds on the analysis in The EU Commission's Draft Code of Practice on AI Content Transparency, where the content of the draft is covered in more detail. This article focuses on practical implementation: how to set up processes, tooling, and UI to comply with Article 50.
📚 Deepen Your Knowledge: Check out the Complete EU AI Act Guide for a full overview of all aspects of the AI legislation.