Responsible AI Platform
βœ… GPAI Rules Now In Effect

General Purpose AI

ChatGPT, Claude, Gemini under the EU AI Act

GPAI rules have been in effect since August 2, 2025. The Code of Practice was published on July 10, 2025 and signed by most major AI companies. Here is what you need to know.

βœ“
Now in effect
10²⁡
FLOPs threshold
3
Code of Practice chapters
~12 min read Β· Last updated: January 2026
πŸ€–

What is General Purpose AI?

The models you encounter everywhere

General Purpose AI (GPAI) refers to AI models that can be used for many different tasks without substantial modification. Think of large language models (LLMs) such as ChatGPT, Claude, Gemini, Llama and Mistral. These models can generate text, write code, answer questions, create images and much more. The EU AI Act sets specific requirements for providers of these models and for organizations using them. Image generators like DALL-E, Midjourney and Stable Diffusion also fall under GPAI if they are broadly deployable.

πŸ€–

ChatGPT / GPT-4o

OpenAI

🧠

Claude 3.5

Anthropic

✨

Gemini 2.0

Google

πŸ¦™

Llama 3.3

Meta (open weights)

βœ…

Status: Rules Are Now In Effect

What does this mean for you in January 2026?

Since August 2, 2025, the GPAI rules have been in effect. The definitive Code of Practice was published on July 10, 2025 and signed by Google, Microsoft, OpenAI, Anthropic, Amazon and other major players; Meta notably declined to sign. Providers must now maintain technical documentation, respect copyright and publish training data summaries. Formal enforcement with fines starts in August 2026, but supervisory authorities can already inform and warn. For deployers (organizations using GPAI), the AI literacy obligation (Article 4) and the transparency obligation (Article 50) already apply.

βœ…

July 10, 2025

Code of Practice published

βœ…

Aug 2, 2025

GPAI rules in effect

⏳

Aug 2, 2026

Formal enforcement starts

βš–οΈ

Aug 2, 2027

Full enforcement


πŸ“Š

Two Categories of GPAI

Standard vs. Systemic Risk

The EU AI Act distinguishes two levels of GPAI. Standard GPAI models must maintain technical documentation via a Model Documentation Form, provide information to downstream providers, respect EU copyright (including robots.txt), and publish a summary of training data. GPAI models with systemic risk (trained with more than 10²⁡ FLOPs; think GPT-4, Claude 3 Opus, Gemini Ultra) have additional obligations: external audits, red teaming, systemic risk assessments, incident reporting to the European Commission, and extensive cybersecurity measures. A back-of-envelope way to estimate training compute against the 10²⁡ FLOPs threshold is sketched after the overview below.

πŸ”΅

Standard GPAI

Documentation, transparency, copyright

πŸ”΄

Systemic Risk

>10²⁡ FLOPs training compute

πŸ“

Model Documentation Form

Standard documentation format

πŸ›‘οΈ

Extra obligations

Audits, red teaming, incident reporting
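
To get a feel for the 10²⁡ FLOPs threshold, a common rule of thumb estimates total training compute as roughly 6 Γ— parameters Γ— training tokens. The sketch below applies that approximation to two illustrative model sizes; the figures are rough estimates, not official classifications, and the Act's threshold refers to actual cumulative training compute.

```python
# Rough training-compute estimate using the common "6 N D" rule of thumb:
# total FLOPs ~ 6 x parameter count x training tokens. Illustrative only;
# the Act's threshold concerns actual cumulative training compute, which
# providers must assess themselves.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs (Art. 51 EU AI Act)

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs (6 * N * D)."""
    return 6 * params * tokens

examples = [
    ("70B params, 15T tokens", 70e9, 15e12),    # ~6.3e24 -> standard GPAI
    ("405B params, 15T tokens", 405e9, 15e12),  # ~3.6e25 -> systemic risk
]
for name, params, tokens in examples:
    flops = estimate_training_flops(params, tokens)
    label = "systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD else "standard"
    print(f"{name}: ~{flops:.1e} FLOPs -> {label}")
```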

πŸ“‹

Code of Practice

The code of conduct for GPAI (published July 10, 2025)

The Code of Practice provides a "presumption of compliance": if you follow the code, you are presumed to comply with the law. This is crucial for legal certainty. The code was developed in a multi-stakeholder process and contains three chapters: 1) Transparency: documentation of architecture, training data, capabilities and limitations via the Model Documentation Form; 2) Copyright: respect for copyrights, following robots.txt and opt-out procedures for rights holders (a minimal robots.txt check is sketched after the overview below); 3) Safety and Security: specifically for systemic-risk models, covering internal governance with C-level ownership, red teaming, cybersecurity and incident management.

1️⃣

Transparency

Model Documentation Form

2️⃣

Copyright

Robots.txt & opt-out

3️⃣

Safety & Security

Red teaming & incident reporting
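
Because the Copyright chapter hinges on machine-readable opt-outs, here is a minimal sketch, using only the Python standard library, of how a crawler operator could check a rights holder's robots.txt against AI crawler user agents. GPTBot, Google-Extended and CCBot are publicly documented crawler names; verify them against each provider's current documentation before relying on this.

```python
# Check a publisher's robots.txt opt-out against known AI crawler
# user agents, using only the Python standard library.
from urllib.robotparser import RobotFileParser

# Example robots.txt a rights holder might publish to opt out of AI training
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("GPTBot", "Google-Extended", "CCBot", "SomeOtherBot"):
    allowed = parser.can_fetch(agent, "https://example.com/articles/")
    print(f"{agent}: {'may crawl' if allowed else 'opted out'}")
```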


πŸ‘οΈ

Transparency & AI Content Labeling

Article 50 and the draft Code for AI content

In addition to the GPAI Code of Practice, the Commission is working on a separate code of practice for transparency around AI-generated content (Article 50). Providers must ensure AI content is markable and detectable via watermarking, metadata and fingerprinting; deployers must label AI-generated content, especially deepfakes and content of public interest. A distinction will be made between "fully AI-generated" and "AI-assisted" content, and detection must become available as a service, free or at low cost. A simplified metadata sketch follows the overview below.

πŸ’§

Watermarking

Invisible marking in content

πŸ“Š

Metadata

Include provenance data

🏷️

Labeling

Clear labels for end users

πŸ”

Detection API

Verification as service
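
As an illustration of the metadata approach, the sketch below embeds a simple provenance record in a PNG's text chunks using Pillow. This is a simplified stand-in with made-up field names; production systems would more likely use a standard such as C2PA content credentials rather than ad-hoc keys.

```python
# Embed a simple provenance record in PNG text chunks with Pillow
# (pip install Pillow). Simplified illustration of the "metadata"
# approach from Article 50; field names are hypothetical, real
# deployments would typically use a standard such as C2PA.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

provenance = {
    "ai_generated": True,          # fully AI-generated vs. AI-assisted
    "generator": "example-model",  # hypothetical model name
    "created": "2026-01-15T10:00:00Z",
}

image = Image.new("RGB", (256, 256), color="white")  # stand-in for model output
metadata = PngInfo()
metadata.add_text("ai_provenance", json.dumps(provenance))
image.save("output.png", pnginfo=metadata)

# Read the label back, e.g. in a downstream detection step
with Image.open("output.png") as img:
    print(img.text.get("ai_provenance"))
```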


πŸ‘₯

What Does This Mean for You as a User?

Using GPAI in your organization (January 2026)

If you use GPAI tools (such as ChatGPT, Copilot or Claude) in your organization, concrete obligations already apply. AI literacy (Article 4): since February 2025 you must train employees and document that training (a minimal record-keeping sketch follows the overview below). Transparency (Article 50): inform users and customers when they are dealing with AI. Understand limitations: know the hallucination risks and limitations of your tools. Policy: establish an internal AI policy with clear rules. Watch for provider risk: if you integrate a GPAI model into your own product and market it, you may become the provider of the combined system, with all associated obligations!

πŸ“š

AI Literacy

Now mandatory - document!

πŸ‘οΈ

Transparency

Inform about AI use

πŸ“‹

Policy

Establish internal AI policy

⚠️

Provider risk

Integration = possibly provider!
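
Because Article 4 compliance hinges on being able to show what training took place, structured records help. Below is a minimal sketch of such a record; the field names are illustrative, not prescribed by the Act.

```python
# Minimal structured record for documenting AI literacy training under
# Article 4. Field names are illustrative; the AI Act prescribes no
# specific format, only that deployers ensure (and can show) adequate
# AI literacy among staff working with AI systems.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class TrainingRecord:
    employee: str
    role: str                     # relevant: literacy must fit the role
    session: str                  # e.g. "GPAI basics & hallucination risks"
    completed_on: date
    tools_covered: list[str] = field(default_factory=list)

records = [
    TrainingRecord(
        employee="J. Doe",
        role="Marketing analyst",
        session="Prompting, limitations and data handling in ChatGPT",
        completed_on=date(2026, 1, 12),
        tools_covered=["ChatGPT", "Copilot"],
    ),
]

# Export an audit trail, e.g. as an annex to your internal AI policy
print(json.dumps([asdict(r) for r in records], default=str, indent=2))
```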


Frequently Asked Questions

Answers to the most common questions about the EU AI Act

Ready to get started?

Discover how we can help your organization with EU AI Act compliance.

500+
Professionals trained
50+
Organizations helped