EU AI regulation code of practice: tech giants sign, Meta refuses

9 min read
Dutch version not available

Major AI developers choose different paths with European regulation

Important development: The European Commission has officially confirmed that the voluntary code of practice for general AI models serves as a legitimate compliance instrument under the AI regulation. With major tech companies such as OpenAI, Google and Anthropic signing, while Meta refuses, the tech industry shows a divided approach to European AI regulation.

What is the EU AI regulation code of practice?

The code of practice for general AI models represents a collaborative effort involving more than 1,000 stakeholders, including model providers, SMEs, academics, AI safety experts, rights holders and civil society organizations. Developed by 13 independent experts, this voluntary framework serves as an official compliance path for companies operating general AI models under the EU AI regulation.

The code is structured around three main sections:

  1. Transparency: Applicable to all GPAI model providers
  2. Copyright: Also applicable to all GPAI model providers
  3. Safety and security: Only applicable to providers of GPAI models with systemic risk (above the 10^25 computational threshold)

Companies that sign the code commit to various core obligations, including providing updated documentation about their AI tools and services, avoiding training AI on illegally copied content, and complying with requests from content owners to exclude their works from training datasets.

The main players: who joins and who doesn't

The signatories

Several major AI companies have embraced the code of practice:

  • OpenAI: Was among the first to announce their intention to sign, showing early support for the framework
  • Anthropic: The company stated: "We believe the code promotes the principles of transparency, safety and responsibility - values long championed by Anthropic for frontier AI development"
  • Google: Confirmed their commitment to sign the European general AI code of practice
  • xAI: Signed specifically for the safety and security chapter, indicating a targeted approach to compliance
  • Mistral: Joined as an early signatory alongside OpenAI

Other major tech companies including Microsoft, IBM and Amazon are also listed among the initial signatories.

Meta's divergent position

Meta's decision to refuse participation has drawn much attention. Joel Kaplan, Meta's Chief Global Affairs Officer, clearly articulated the company's position: "We have carefully reviewed the European Commission's code of practice for general AI models and Meta will not sign it."

Kaplan went further and stated that "Europe is going the wrong way with AI," and criticized the code because it "introduces legal uncertainties for model developers, as well as measures that go far beyond the scope of the AI regulation."

This places Meta in a unique position as one of the few major AI companies choosing not to participate in the voluntary framework.

Benefits of signing

Companies that choose to sign the code of practice receive various benefits:

  • Reduced administrative burden: Standardized compliance path
  • More legal certainty: Clear route to regulatory compliance
  • Less regulatory oversight: Expected reduction in supervision
  • Potential fine reduction: Smaller penalties for violations

Geopolitical tensions and American criticism

The code of practice has become more than just a regulatory instrument - it has evolved into a point of geopolitical tension. The US government has criticized the European Commission's approach and accuses it of forcing American companies to sign the agreement.

This criticism reflects broader concerns about the EU's regulatory reach and its impact on American tech companies active in European markets. The AI regulation itself has been characterized as "a pawn in a geopolitical battle," emphasizing the intersection of technology regulation and international relations.

Geopolitical dimension: The code has become a symbol in the broader discussion about technological sovereignty between the EU and US. American companies find themselves caught between European compliance requirements and American political pressure.

Implementation timeline and compliance requirements

The rules for general artificial intelligence came into force on August 2, 2025, making compliance urgent for affected companies. The European Commission published the list of initial signatories on August 1, just one day before the rules took effect.

It is important to note that companies that choose not to sign the code must still comply with the requirements of the AI regulation. The code serves as a voluntary compliance path, not as an exemption from regulation. Non-signatories will need to demonstrate compliance through alternative means, possibly with more regulatory oversight and administrative complexity.

Technical requirements and practical implementation

Transparency requirements

All signatories must provide extensive documentation about:

  • Model architecture and training methodologies
  • Datasets used and their sources
  • Known limitations and risks
  • Evaluation procedures and performance metrics

Copyright protection

The code requires companies to:

  • Not use copyright-protected material without permission
  • Implement mechanisms to respect opt-out requests
  • Provide transparency about data sources and licenses
  • Establish procedures for handling IP claims

Safety and security measures

For models above the systemic risk threshold, additional requirements apply:

  • Robust red-team evaluations
  • Incident response procedures
  • Cybersecurity measures
  • Monitoring of downstream applications
CompanyStatusSpecific approach
OpenAISignatoryFull code
AnthropicSignatoryFull code
GoogleSignatoryFull code
xAISignatorySafety & security only
MetaRefusalAlternative compliance

Implications for the AI industry

Competition and market distribution

The divided response to the code of practice could lead to strategic advantages for signatories who benefit from reduced regulatory oversight, while non-signatories such as Meta may face more compliance costs and complexity.

Innovation vs. regulation

The tension between Meta's arguments about innovation delay and the EU's focus on safety and transparency illustrates the broader debate about the right balance between technological progress and regulation.

Precedent for other jurisdictions

The EU's approach could serve as a model for other regions developing their own AI governance, with potential harmonization or fragmentation of global AI standards as a result.

Practical steps for companies

For organizations considering participation:

  1. Assess applicability: Does your model fall under the GPAI definition?
  2. Evaluate compliance costs: Compare costs of code vs. alternative compliance
  3. Analyze competitive advantages: What are the strategic implications?
  4. Plan implementation: Which processes need to be adapted?
  5. Monitor developments: How does the regulatory landscape evolve?

Looking ahead: the future of AI regulation

Monitoring and enforcement

The real test of the code of practice lies in implementation and enforcement. Regulators will closely monitor whether signatories fulfill their obligations and whether the promised benefits materialize.

Evolution of the code

As a living document, the code of practice can be adapted based on practical experiences, technological developments and stakeholder feedback. This flexibility is crucial in the rapidly evolving AI landscape.

Global harmonization

The question remains whether other jurisdictions will adopt similar frameworks or whether we will see a fragmented landscape of AI governance, with different standards in different regions.

Strategic insight: Companies that invest early in robust AI governance position themselves not only for European compliance, but also for future global standards that are likely to be based on similar principles.

Final thoughts

The EU AI regulation code of practice marks a crucial moment in the evolution of AI governance. While the division between companies such as Anthropic, which sees the code as promoting "transparency, safety and responsibility," and Meta, which considers it regulatory overreach, reflects broader debates about the proper scope and methods of AI regulation.

The coming months will be crucial to see how enforcement unfolds and whether the promised benefits of participation materialize. The success or failure of this approach could influence how other jurisdictions structure their own AI regulatory frameworks.

Europe's determination to formalize this voluntary framework as an official compliance instrument, despite resistance from some major players and criticism from other governments, shows the EU's determination to lead in AI governance. Whether this approach ultimately promotes innovation while ensuring safety and transparency remains to be seen, but it undoubtedly marks a new chapter in the global governance of artificial intelligence.


The AI regulation rules for general AI models came into force on August 2, 2025, with major implications for how AI companies operate in the European market. As this regulatory landscape continues to evolve, staying up to date with compliance requirements and industry reactions will be crucial for anyone involved in AI development or deployment.