Responsible AI Platform

GPAI Code of Practice Signatory Taskforce: what it means for organizations building or using AI models

How the new Signatory Taskforce bridges the gap between voluntary codes and binding regulation of general-purpose AI

The clock is ticking. In August 2026, enforcement of the AI Act rules for general-purpose AI (GPAI) begins. To help companies prepare, the EU AI Office has established a Signatory Taskforce under the GPAI Code of Practice. This is more than a bureaucratic body: it signals that the Commission is serious about moving from rules on paper to practical compliance.

What is general-purpose AI, exactly?

Let's start with clarity on what we're talking about. General-purpose AI - also known as foundation models - refers to AI models trained on vast amounts of data that can be used for a wide range of tasks. Think of large language models (LLMs) like GPT, Claude, Gemini, or Llama, but also multimodal models that combine text, image, and audio.

The key distinction the AI Act makes: a GPAI model is not designed for a specific purpose but can be deployed by others (so-called "downstream providers" and deployers) for diverse applications. This makes regulation complex: who in the value chain is responsible for what?

The Code of Practice: a voluntary framework with real weight

The GPAI Code of Practice is a voluntary instrument that helps providers of general-purpose AI models demonstrate compliance with the AI Act. The code translates the legal obligations from the AI Act into concrete, actionable steps.

Key point: The GPAI obligations under the AI Act have been formally applicable since August 2, 2025. Enforcement by the Commission starts on August 2, 2026. The Code of Practice is designed to provide clarity during this transition period on what "compliance" looks like in practice.

Why does a voluntary code matter when enforcement is coming? Because under the AI Act, providers may rely on an approved code of practice to demonstrate compliance with their GPAI obligations until harmonised European standards are in place. Organizations that follow the code will be in a stronger position when regulators come knocking.

What does the Signatory Taskforce do?

The Taskforce brings together companies that have signed the Code of Practice. Chaired by the AI Office, it functions as a forum for:

  • Practical interpretation: how do you apply the code's obligations in day-to-day operations?
  • Knowledge sharing: exchange on technological developments, research findings, and emerging insights relevant to compliance.
  • Input on guidance: the Taskforce can contribute to guidance documents, without replacing the AI Office's formal public consultation processes.
  • Stakeholder input: insights from third-party stakeholders can be incorporated.

The AI Office has committed to transparency: it has published a Vademecum with a list of participants, and meetings are logged with high-level summaries.

The obligations: what does the AI Act require?

The AI Act establishes the GPAI provider obligations in two core articles: Article 53 for all GPAI models and Article 55 for models with systemic risk (Article 51 determines which models fall into that category).

Article 53: Transparency as the foundation

Providers of GPAI models must (see the sketch after this list):

  • Maintain and make available technical documentation
  • Provide information and documentation to downstream providers integrating the model
  • Establish a policy for compliance with European copyright law
  • Publish a sufficiently detailed summary of training data
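
To make these duties concrete, here is a minimal sketch of how a provider might track the required information as a structured record. The field names, model name, and URLs are illustrative assumptions, not the official template (the Code of Practice's transparency chapter provides a Model Documentation Form for the real thing):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the Article 53 documentation duties.
# All field names and values are illustrative, not an official schema.
@dataclass
class GPAIModelDocumentation:
    model_name: str
    provider: str
    capabilities_and_limitations: list[str]
    training_process: str                  # high-level training methodology
    evaluation_results: dict[str, float]   # benchmark and red-team scores
    copyright_policy_url: str              # Article 53(1)(c) policy
    training_data_summary_url: str         # Article 53(1)(d) public summary
    downstream_contact: str                # who integrators can ask for docs

doc = GPAIModelDocumentation(
    model_name="example-llm-7b",
    provider="Example AI B.V.",
    capabilities_and_limitations=["text generation", "hallucinates citations"],
    training_process="Pretraining on licensed and public text, then RLHF.",
    evaluation_results={"mmlu": 0.62, "adversarial_refusal_rate": 0.91},
    copyright_policy_url="https://example.com/copyright-policy",
    training_data_summary_url="https://example.com/training-data-summary",
    downstream_contact="compliance@example.com",
)
```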

Article 53(1)(c)-(d): Copyright and training data

These provisions specifically address the relationship between GPAI and intellectual property. Providers must put in place a copyright policy, be transparent about the data they use for training, and respect the rights of copyright holders. In practice, this means honoring machine-readable opt-outs such as the text and data mining (TDM) reservation under Article 4(3) of the DSM Directive (Directive (EU) 2019/790).
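
What does honoring an opt-out look like in a data collection pipeline? Below is a minimal sketch using Python's standard robots.txt parser, assuming a hypothetical crawler name. robots.txt is one common machine-readable reservation signal; real pipelines typically check additional opt-out mechanisms as well:

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_collect(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Check whether a site's robots.txt allows our (hypothetical) crawler."""
    origin = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{origin.scheme}://{origin.netloc}/robots.txt")
    parser.read()  # fetches and parses the site's robots.txt
    return parser.can_fetch(user_agent, url)

if may_collect("https://example.com/articles/some-page"):
    print("Allowed to fetch for training")
else:
    print("Opt-out detected: skip this source")
```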

Article 55: Systemic risk

GPAI models with systemic risk - think of the most powerful models with broad societal impact - face additional obligations:

  • Conducting model evaluations, including adversarial testing
  • Assessing and mitigating systemic risks
  • Tracking serious incidents and reporting to the AI Office
  • Ensuring adequate cybersecurity

Note: The threshold for "systemic risk" is determined partly by the computing power used for training: the Act presumes systemic risk when cumulative training compute exceeds 10^25 floating-point operations (FLOPs). The Commission can also designate models as systemic risk based on other criteria. This currently applies to a limited number of very large models, but the number may grow.
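
For a feel of the order of magnitude, here is a back-of-envelope sketch. It uses the common "6 x parameters x tokens" rule of thumb for dense transformer training compute, which is a rough heuristic rather than anything the AI Act prescribes; the model size and token count are hypothetical:

```python
# The AI Act presumes systemic risk above 10**25 FLOPs of cumulative
# training compute (Article 51). The 6*N*D estimate below is a common
# rule of thumb for dense transformers, not an official formula.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    return 6.0 * n_parameters * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")  # ~6.3e24
print("Presumed systemic risk" if flops > SYSTEMIC_RISK_FLOPS
      else "Below the presumption threshold")
```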

What does this mean for your organization?

The obligations don't just affect the big model developers. They have a cascading effect through the entire value chain.

If you provide a GPAI model (provider)

  1. Document thoroughly: ensure your technical documentation is comprehensive, including descriptions of training procedures, evaluation results, and known limitations.
  2. Publish a training data summary: this is mandatory and must contain enough detail to enable copyright holders to enforce their rights.
  3. Build compliance structures: don't wait until August 2026. Set up processes now for incident reporting, model evaluation, and risk assessment (a minimal incident-log sketch follows this list).
  4. Consider signing the Code of Practice: participation in the Taskforce gives you direct access to the latest interpretations and expectations from the AI Office.
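
As an illustration of the incident-reporting point above, here is a minimal sketch of an internal incident log. The fields, severity labels, and file format are assumptions made for illustration; the AI Office's actual reporting template and thresholds are what count:

```python
import json
from datetime import datetime, timezone

# Minimal append-only incident log. Article 55 requires systemic-risk
# providers to track serious incidents and report them to the AI Office;
# the fields below are illustrative, not the official reporting template.
def log_incident(path: str, model: str, description: str, severity: str,
                 mitigations: list[str]) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "description": description,
        "severity": severity,          # e.g. "low" / "serious" (assumed labels)
        "mitigations": mitigations,
        "flag_for_ai_office_report": severity == "serious",
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # JSONL keeps an audit trail

log_incident("incidents.jsonl", "example-llm-7b",
             "Model produced instructions facilitating a cyberattack",
             "serious", ["blocked prompt pattern", "updated safety filter"])
```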

If you use GPAI models (deployer)

  1. Know your supplier: actively request technical documentation and compliance status for the GPAI models you deploy. The AI Act requires providers to supply this information to downstream providers.
  2. Check downstream obligations: if you integrate a GPAI model into your own product or service, you may become a provider of an AI system yourself, with corresponding obligations.
  3. Assess copyright risks: if you generate content with GPAI tools, it's wise to understand how the underlying model was trained and whether there are intellectual property risks.
  4. Follow Taskforce output: the practical interpretations emerging from the Taskforce will signal what regulators will expect.

The bigger picture

The Signatory Taskforce follows a model the EU has used before with the Code of Practice on Disinformation, since converted into a Code of Conduct under the Digital Services Act. The pattern is recognizable: voluntary cooperation first, then binding regulation, with voluntary instruments serving as the bridge.

For organizations working with AI - whether as developers or users - the message is clear: the time for waiting is over. The Code of Practice and the Taskforce offer a concrete starting point for compliance, well ahead of the enforcement deadline.

Practical first step: Map out which GPAI models your organization uses, who provides them, and what documentation your supplier makes available. This overview is the foundation for any further compliance action.
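
A spreadsheet works fine for this, but if you prefer something scriptable, here is a minimal sketch of such an inventory with an automatic gap check. All model names, providers, and URLs are made up for illustration:

```python
# Hypothetical GPAI model inventory; the point is to flag documentation gaps.
inventory = [
    {"model": "example-llm-7b", "provider": "Example AI B.V.",
     "signed_code_of_practice": True,
     "documentation_url": "https://example.com/model-docs"},
    {"model": "vision-model-x", "provider": "Acme Models Inc.",
     "signed_code_of_practice": False,
     "documentation_url": None},  # missing: follow up with the supplier
]

for entry in inventory:
    gaps = []
    if not entry["documentation_url"]:
        gaps.append("no technical documentation on file")
    if not entry["signed_code_of_practice"]:
        gaps.append("provider has not signed the Code of Practice")
    status = "; ".join(gaps) if gaps else "OK"
    print(f'{entry["model"]} ({entry["provider"]}): {status}')
```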

The contours of GPAI enforcement will sharpen in the coming months. Organizations that invest in understanding and preparation now won't face surprises later.