Practical Analysis of the Draft Code of Practice for Transparency Around AI-Generated Content
Draft for Consultation: The European Commission has recently published a first draft of a Code of Practice on transparency in AI content. The document is explicitly a draft, intended to gather feedback and provide direction for the practical implementation of transparency obligations under the AI Act.
Why This Code Exists: Making Article 50 AI Act Concrete
Article 50 of the AI Act requires, in specific situations, that it be made clear that content was created or manipulated by AI. In broad terms, this concerns two worlds that intersect:
Providers of AI systems must ensure that AI content, where technically feasible, can be marked and is detectable. Think of including provenance information or building in signals that can later be used to determine that an image, audio clip, or video was generated or manipulated by an AI system.
Deployers of AI systems must, in certain cases, visibly label AI content for the public. Think of deepfakes, manipulated images, or AI-generated text shared in a context of public interest.
Chain Responsibility
Transparency is chain work: if the provider delivers nothing, the deployer cannot label effectively. And if the deployer has no process in place, the provider's technology is of limited use. The code therefore addresses both sides simultaneously.
This Is a Draft, But It Is Directional
Because it is a draft, you cannot read it as a definitive set of requirements. You can read it as a clear signal about the direction Europe is choosing:
- Technology and Organization: Transparency is elaborated as a combination of marking/detection and labeling/governance/accountability
- Interoperability: Rather than every company using its own label and definitions, shared agreements wherever possible
- Realistic Limitations: Not every modality can be marked equally robustly, and markings can often be removed; this is translated into a layered approach
Organizations that take this document seriously now buy themselves time: not because everything must be implemented already, but because you can now determine what your organization will be held accountable for in audits, procurement, and supervision.
Core 1: Providers Must Move Toward "Layered Marking" and Provable Detection
For providers, the central idea is that one technique is rarely enough. The draft code therefore steers toward a multi-layered approach:
Metadata & Provenance
Origin data that travels with the content, ideally with a digital signature so that integrity can be verified.
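To make this concrete, here is a minimal sketch of what signed provenance data can look like, using only Python's standard library. The field names and the shared-secret HMAC are assumptions for illustration; real deployments would typically follow an established provenance standard and use proper asymmetric signatures.

```python
import hashlib, hmac, json

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key; real systems use asymmetric keys

def build_provenance(content: bytes, generator: str) -> dict:
    """Attach origin data plus a signature so integrity can be checked later."""
    record = {
        "generator": generator,  # which model or system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the signature and hash to confirm the record still matches the content."""
    claimed = record.get("signature", "")
    payload = json.dumps({k: v for k, v in record.items() if k != "signature"},
                         sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(claimed, expected)
            and record.get("content_sha256") == hashlib.sha256(content).hexdigest())
```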
Watermarking
A (preferably invisible) marking in the content itself, so it's not just "in the packaging" but also "in the product."
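As a toy illustration of the idea (not a production technique), the sketch below hides a fixed bit pattern in the least significant bits of pixel values. Real watermarks are designed to survive compression, cropping, and re-encoding; this only shows how a mark can live in the content itself rather than in the metadata.

```python
WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical provider-specific pattern

def embed(pixels: bytearray, bits=WATERMARK_BITS) -> bytearray:
    """Write the pattern into the least significant bit of the first pixels."""
    marked = bytearray(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked

def detect(pixels: bytearray, bits=WATERMARK_BITS) -> bool:
    """Check whether the expected pattern is present."""
    return all((pixels[i] & 1) == bit for i, bit in enumerate(bits))

pixels = bytearray(range(16))   # stand-in for raw image data
print(detect(embed(pixels)))    # True
```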
Fingerprinting & Logging
Techniques that can later determine whether something was generated by your model, even if metadata is missing or has been removed.
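A minimal sketch of the logging side: keep a record per generated item, keyed by a content hash, so a later match is possible even when metadata has been stripped. A cryptographic hash only matches exact copies; actual fingerprinting relies on perceptual hashes or model-side signals, so treat this as the simplest possible version.

```python
import hashlib, datetime

GENERATION_LOG: dict[str, dict] = {}  # in practice a database, not an in-memory dict

def log_generation(content: bytes, model: str, request_id: str) -> str:
    """Record what was generated, when, and by which model."""
    digest = hashlib.sha256(content).hexdigest()
    GENERATION_LOG[digest] = {
        "model": model,
        "request_id": request_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return digest

def was_generated_here(content: bytes) -> bool:
    """Exact-match lookup: did this byte sequence come from our systems?"""
    return hashlib.sha256(content).hexdigest() in GENERATION_LOG
```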
Detectability as a Service
Notably, the code doesn't stop at "mark it." It also steers toward detectability as a service: providers are pushed toward a free or publicly accessible way to verify content, for example via a web interface or an API that returns confidence scores.
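The draft does not prescribe an exact response format, but a verification endpoint could return something like the hypothetical shape below: a confidence score rather than a hard yes/no, plus an indication of which layers matched.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    """Hypothetical response shape for a public 'is this AI-generated?' check."""
    content_id: str      # hash or identifier of the submitted content
    ai_generated: bool   # best guess, derived from the confidence score
    confidence: float    # 0.0 to 1.0; avoids presenting detection as certain
    signals: list[str]   # which layers matched, e.g. ["metadata", "watermark"]

def to_public_response(result: VerificationResult) -> dict:
    """Expose only what the public needs; keep model internals out of the answer."""
    return {
        "ai_generated": result.ai_generated,
        "confidence": round(result.confidence, 2),
        "signals_checked": len(result.signals),
    }
```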
Practically, this means providers must not only build a technical solution but also think about:
- How do you scale verification without it becoming a cost or security problem?
- How do you handle false positives and false negatives?
- How do you provide transparency without immediately revealing model secrets or abuse information?
Tension Between Transparency and Security: Transparency builds trust but can also become a guide for circumvention. The draft code tries to resolve this by combining multiple layers and by also encouraging "forensic" detection that doesn't rely solely on watermarks.
Core 2: Deployers Get a Labeling Obligation That Goes Beyond a Text Line
For deployers, the code strongly emphasizes a recognizable and consistent labeling approach. This is not just about "put 'made with AI' somewhere." The draft code works toward a shared vocabulary, a recognizable icon, and agreements about where and when a label should be visible.
Taxonomy for AI Content
| Category | Definition | Example |
|---|---|---|
| Fully AI-generated | Entirely generated by AI | AI image of a person who doesn't exist |
| AI-assisted | Supported by AI, with a greater human role | Photo with AI adjustments to the background |
This distinction seems simple, but in practice, this is exactly where discussions arise. A marketing department that has a photo "just slightly" adjusted by generative AI often feels this is minor. From a transparency and trust perspective, the public may experience it differently.
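For teams that want to encode the distinction in tooling, a minimal sketch follows. The category names mirror the table above; the extra "not determined" value and the labeling rule are assumptions for illustration, not part of the draft code.

```python
from enum import Enum

class AIContentCategory(Enum):
    FULLY_AI_GENERATED = "fully_ai_generated"  # entirely generated by AI
    AI_ASSISTED = "ai_assisted"                # supported by AI, greater human role
    NOT_DETERMINED = "not_determined"          # assumption: not yet reviewed

def requires_label(category: AIContentCategory) -> bool:
    """Conservative default used here: anything touched by AI gets a label."""
    return category in (AIContentCategory.FULLY_AI_GENERATED,
                        AIContentCategory.AI_ASSISTED)
```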
Modality-Specific Implementation
The code also works toward a (temporary) label icon and, later, a broader EU icon that can also be interactive. Beyond that, each modality needs its own measures: audio requires different handling than images. Think of a podcast fragment or voice-over: a label in a description is often insufficient when people are only listening. The draft therefore discusses disclosure that also recurs within the experience itself, for example repeated at intervals during longer audio.
Core 3: There Will Be Exceptions, But They Require Discipline
Transparency in the AI Act has exceptions and nuances. The draft code elaborates on these further. Two stand out:
Artistic and Satirical Content
Content for artistic, satirical, or fictional purposes receives proportional treatment: you don't want a label that destroys the work, while you still want to be honest about the origin.
Text Under Human Control
AI-generated text on subjects of public interest may remain unlabeled if there is human control and someone bears final editorial responsibility. This does require a minimum of documentation: you must be able to show that the text was not published "unseen."
Governance Requirement: If you want to use an exception, you need a process that makes this demonstrable. Otherwise, "human review" becomes an empty phrase you cannot prove.
What Does This Mean for Your Organization If You're Not an AI Provider?
Many organizations are not providers of AI systems but are intensive users. Think of municipalities publishing images, educational institutions with communications departments, HR teams creating recruitment materials, or legal departments generating summaries.
For these organizations, the core message is: transparency becomes a workflow requirement.
You will need to know in your processes:
- Where AI is used in the chain (text, image, audio, video, translation, editing)
- Which output goes external and which stays internal
- In which cases you need to label, and how you handle exceptions
- How you stay consistent across channels: website, social media, newsletters, presentations
Practical Example
An organization has a video made for a campaign. The images are partly real, partly AI-generated, and the voice-over is synthetic. Without agreements, fragmentation occurs: one channel labels, another doesn't. With a consistent taxonomy, a standard label, and a simple content checklist, it becomes manageable.
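One way to keep such a case manageable is to record the answers to the questions above per content item. The sketch below is illustrative; the field names are not taken from the draft code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContentChecklist:
    """One record per publication, so labeling decisions stay consistent across channels."""
    title: str
    ai_used_for: list[str] = field(default_factory=list)  # e.g. ["image", "voice-over"]
    goes_external: bool = False                            # published outside the organization?
    label_required: bool = False
    exception_claimed: Optional[str] = None                # e.g. "human editorial review"
    channels: list[str] = field(default_factory=list)

item = ContentChecklist(
    title="Spring campaign video",
    ai_used_for=["image", "voice-over"],
    goes_external=True,
    label_required=True,
    channels=["website", "social"],
)
```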
What Does This Mean for Providers and Product Teams?
For providers and product teams, the draft code has a second effect: it transforms transparency into a product feature that customers will start asking about.
When procurement asks: "Can we detect, demonstrate, and label AI output?", you need more than a policy document. You need technical building blocks that fit into the ecosystem:
- Provenance metadata
- Watermarks
- Verification interfaces
- Logging
Product Choices
For product teams, this also means you must make choices about:
- Default settings: Marking on by default, or opt-in?
- User experience: How do you provide transparency without paralyzing the workflow?
- Integrations: How do you connect to CMS systems, DAM systems, social publishing tools, and archives?
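As a sketch of what "marking on by default" can look like at the configuration level (the setting names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class MarkingSettings:
    """Hypothetical product defaults; opting out is explicit rather than silent."""
    provenance_metadata: bool = True     # attach origin data to every export
    invisible_watermark: bool = True     # embed a mark in the content itself
    verification_api_enabled: bool = True
    allow_user_opt_out: bool = False     # if True, record who disabled it and why

DEFAULTS = MarkingSettings()
```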
What You Can Do Now, Without Waiting for the Final Version
You don't need to make everything "AI Act-proof" today, but you can already take actions that will still hold value once the final version is in place.
Inventory AI Content Flows
Not just "we use ChatGPT," but concretely: where is AI used in creation, editing, and publication? Which teams, which tools, which channels?
Define Internal Taxonomy
At minimum, adopt the distinction the draft code mentions: fully AI-generated versus AI-assisted. Document how you determine this internally, and link it to examples.
Label Process in Publication Chain
Build labeling in as part of your content review. Think of a field in your CMS, a checkbox in your social publishing tool, or a required question in your review template.
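Such a field can be very small. A sketch, with field names that will obviously differ per CMS:

```python
# Hypothetical CMS field definition for an AI disclosure, expressed as plain data.
AI_DISCLOSURE_FIELD = {
    "name": "ai_disclosure",
    "type": "select",
    "required": True,  # review cannot be completed without answering
    "options": ["none", "ai_assisted", "fully_ai_generated"],
    "help_text": "Was AI used to create or edit this content? Determines the public label.",
}
```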
Document Human Review
If you want to rely on "editorial responsibility" in certain cases, make it easy to demonstrate who reviewed what, and when. This should be reproducible, not necessarily heavyweight.
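Demonstrability can be as light as an append-only log entry per publication. A sketch, assuming you store it wherever your team already keeps audit data:

```python
import datetime, json

def record_review(item_id: str, reviewer: str, decision: str,
                  path: str = "review_log.jsonl") -> None:
    """Append one line per review so 'who looked at it, and when' stays answerable."""
    entry = {
        "item_id": item_id,
        "reviewer": reviewer,
        "decision": decision,  # e.g. "approved for publication without AI label"
        "reviewed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```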
Ask Vendors About Marking
If you work with generative image tools, video platforms, or AI voice: what marking or provenance do they include? Is there a verification API? Can you integrate it?
Strategic Advantage: The draft code is not yet finished, but the direction is clear: transparency is not just about words; it is about technology and organization reinforcing each other. Organizations that translate this into their workflows now will need to improvise less when the rules actually take effect.
📚 Deepen Your Knowledge: Check out the Complete EU AI Act Guide for a full overview of all aspects of the AI legislation.