Important date: On July 10, 2025, the European Commission published the definitive version of the Code of Practice for general-purpose AI models; the AI Act obligations it supports apply from August 2, 2025.
Why this code of practice is crucial
On July 10, 2025, the European Commission published the definitive version of the Code of Practice for general-purpose AI models. This code of conduct was developed to help providers comply with the AI Act obligations for general-purpose AI models, which apply from August 2, 2025.
Although the Code is voluntary, in practice it will function as the reference point for supervision, compliance and governance. Signing offers not only legal advantages but also reputational gains in a market where trust is becoming crucial.
The code of conduct specifically targets general-purpose AI models (GPAI): AI systems that can be deployed for various purposes, such as large language models and multimodal AI systems. These models form the backbone of many modern AI applications and therefore require specific governance.
Transparency obligations: document, share and update
The code of conduct requires providers to transparently document their models. This documentation includes technical characteristics such as architecture, input and output modalities, training data, energy consumption and intended applications.
The Model Documentation Form
Mandatory documentation elements
Providers must complete a standardized Model Documentation Form that contains the following information:
- Technical architecture and specifications
- Input and output modalities
- Training data (origin, scope, representativeness)
- Energy consumption and environmental impact
- Intended applications and limitations
- Known risks and mitigation measures
The form aligns with Article 53 of the AI Act. The information must be accessible to downstream developers and, upon request, to the AI Office. The goal is to facilitate responsible integration of models into third-party AI systems.
Particularly noteworthy is that information about the data used (origin, scope and degree of representativeness) must also be documented. This gives transparency about bias and data quality a central place in the compliance process.
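To make this concrete, here is a minimal Python sketch of how the mandatory fields could be captured internally. The field names and example values are illustrative assumptions; the official Model Documentation Form defines the exact items.

```python
from dataclasses import dataclass

# Illustrative sketch of the Model Documentation Form's mandatory fields.
# Field names and example values are assumptions; the official form
# published with the Code defines the exact items.
@dataclass
class ModelDocumentation:
    model_name: str
    architecture: str                  # technical architecture and specifications
    input_modalities: list[str]        # e.g. ["text", "image"]
    output_modalities: list[str]
    training_data_sources: list[str]   # origin and scope of training data
    data_representativeness: str       # coverage notes, known bias
    energy_consumption_kwh: float      # training energy use
    intended_uses: list[str]
    known_limitations: list[str]
    known_risks: list[str]
    mitigations: list[str]
    last_updated: str                  # documentation must be kept current

doc = ModelDocumentation(
    model_name="example-model-1",
    architecture="decoder-only transformer",
    input_modalities=["text"],
    output_modalities=["text"],
    training_data_sources=["licensed corpora", "public web crawl"],
    data_representativeness="skewed towards English-language sources",
    energy_consumption_kwh=1.2e6,
    intended_uses=["text generation", "summarisation"],
    known_limitations=["hallucinates on factual queries"],
    known_risks=["biased output"],
    mitigations=["output filtering", "bias evaluation suite"],
    last_updated="2025-08-02",
)
```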
Exception for open source: Open-source models are in principle exempt from the documentation requirements, unless they pose systemic risks.
Copyright: respect digital rights in web scraping and model training
The second chapter of the Code establishes how providers must deal with copyrighted content. This is a crucial aspect that directly impacts how AI models are trained.
Legitimate access to training data
- Respect technical access restrictions
It is forbidden to circumvent technical access restrictions such as paywalls. AI models may only be trained on legitimately accessible data.
- Follow the Robot Exclusion Protocol
The use of crawlers must comply with robots.txt files and other technical guidelines for web scraping.
- Recognize rights reservations
Providers are required to technically recognize and respect rights reservations, such as those expressed in robots.txt or via metadata.
This aligns with Article 4 of the DSM Directive on text and data mining. Additionally, the Code requires providers to take technical measures that limit the risk of infringing output.
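As a concrete illustration of the crawling obligations above, here is a minimal Python sketch that consults robots.txt before fetching a URL, using the standard library's robotparser. The user agent name is a hypothetical assumption, and a production pipeline would also need to honour metadata-based rights reservations and cache robots.txt per host.

```python
from urllib import robotparser
from urllib.parse import urlparse

def may_crawl(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Check robots.txt before fetching; "ExampleAIBot" is a hypothetical crawler name."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # if robots.txt cannot be retrieved, err on the side of not crawling
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_crawl("https://example.com/articles/some-page"))
```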
Complaint procedure for rights holders
Rights holders must also be able to submit complaints about possible unlawful use in an accessible way; a minimal tracking sketch follows the list below. The Code requires providers to:
- Provide a clear contact option for copyright complaints
- Respond to complaints within a reasonable time
- Communicate transparently about measures taken
- Have an escalation procedure for complex cases
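To illustrate what such a procedure could look like internally, here is a minimal sketch of complaint tracking with a response deadline and an escalation path. The 30-day deadline is an assumption for illustration, not a figure from the Code.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    IN_REVIEW = "in_review"
    ESCALATED = "escalated"   # complex cases go to the escalation procedure
    RESOLVED = "resolved"

@dataclass
class CopyrightComplaint:
    complaint_id: str
    rights_holder: str
    claimed_work: str          # the content the complaint concerns
    received_on: date
    status: Status = Status.RECEIVED

    def response_deadline(self, sla_days: int = 30) -> date:
        # 30 days is an illustrative SLA, not a deadline set by the Code
        return self.received_on + timedelta(days=sla_days)

complaint = CopyrightComplaint("C-001", "Example Publisher", "news article", date(2025, 8, 2))
print("Respond by:", complaint.response_deadline())
```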
Practical tip: Implement an automated system for recognizing copyright metadata and robots.txt instructions to ensure compliance.
Safety and systemic risks: only for the heaviest models
Not all AI models fall under the systemic risk regime. Only the most advanced foundation models, for example those with self-improving capabilities or risks to public safety, fall under these heavier obligations.
When does the systemic risk regime apply?
| Criterion | Threshold | Implication |
|---|---|---|
| Computing power | ≥ 10²⁵ FLOPs | Automatic classification as systemic risk |
| Self-improvement | Autonomous code generation | Risk assessment required |
| Public safety | Critical infrastructure | Extensive evaluation needed |
For these models, the Code requires a comprehensive risk management process: continuous evaluation, mitigation, monitoring and reporting to the AI Office.
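Whether a model approaches the compute threshold in the table can be estimated before training starts. Below is a back-of-envelope sketch using the widely used 6 · N · D approximation for dense transformer training compute (parameters times training tokens); the model figures are illustrative assumptions.

```python
# Rough estimate of training compute against the AI Act's 10^25 FLOPs
# threshold, using the common approximation FLOPs ≈ 6 * N * D for dense
# transformers (N = parameters, D = training tokens).
SYSTEMIC_RISK_THRESHOLD = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

# Illustrative example: a hypothetical 70B-parameter model on 15T tokens
flops = training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above systemic-risk threshold:", flops >= SYSTEMIC_RISK_THRESHOLD)
```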
Obligations for systemic models
Models classified as systemic must comply with strict standards:
Mandatory measures for systemic risks
- External audits by independent parties
- Access for independent evaluators
- Red-teaming for identifying vulnerabilities
- Responsible incident management and reporting
- Executive responsibility at C-level
- Transparent communication about risks
The management of the AI company must also be explicitly responsible for managing these risks. These obligations are based on Article 55 of the AI Act and ensure that risks remain manageable not only in theory, but also in practice.
From voluntary framework to normative standard
Although the Code of Practice is not a legal obligation, it is growing into a de facto standard. This has important implications for all stakeholders in the AI value chain.
Why voluntary compliance is strategic
Benefits of early adoption
Organizations that implement the Code now benefit from:
- Supervisory advantage: Supervisors will use the Code as a reference
- Market advantage: Customers see compliance as a quality mark
- Risk reduction: Early compliance prevents future problems
- Reputation advantage: Proactive attitude strengthens trust among stakeholders
For AI providers, this means that voluntary compliance today is a strategic investment in tomorrow.
Implementation in practice
For a successful implementation of the Code, we recommend the following steps (a starting sketch for a gap analysis follows the list):
- Start with a gap analysis to determine where your organization stands relative to the Code requirements
- Develop an implementation plan with clear milestones and responsibilities
- Invest in tooling for automated compliance monitoring
- Train your teams in the new requirements and procedures
- Monitor developments as the Code will likely continue to evolve
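A gap analysis can start as simply as mapping the Code's three chapters to concrete requirements and tracking their status. A minimal sketch; the entries and statuses below are illustrative assumptions.

```python
# Illustrative starting point for a gap analysis: map the Code's chapters
# (transparency, copyright, safety and security) to requirements and track
# their status. Entries and statuses are example assumptions.
gap_analysis = {
    "transparency": {
        "model_documentation_form": "in_progress",
        "downstream_access_to_docs": "missing",
    },
    "copyright": {
        "robots_txt_compliance": "done",
        "complaint_procedure": "missing",
    },
    "safety_and_security": {
        "systemic_risk_assessment": "not_applicable",  # below compute threshold
    },
}

for chapter, items in gap_analysis.items():
    open_items = [name for name, status in items.items() if status == "missing"]
    if open_items:
        print(f"{chapter}: open items -> {', '.join(open_items)}")
```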
Practical tip: Start with the transparency requirements ā these are the most concrete and provide a good basis for further compliance efforts.
Conclusion: structure in a complex landscape
The code of conduct provides structure, clarity and direction in a landscape where regulation, societal expectations and technological innovation are increasingly intertwined. For AI providers, this is the moment to act proactively.
The Code of Practice for general-purpose AI marks an important step in the maturation of AI governance. By combining transparency, copyright respect and safety measures, it creates a framework that enables innovation within clear boundaries.
Organizations that now invest in compliance with the Code position themselves not only for compliance, but also for competitive advantage in a market that increasingly values responsible AI development.
Want to know more about implementing the Code of Practice in your organization? Contact us for a personal consultation.