General Purpose AI
ChatGPT, Claude, Gemini under the EU AI Act
GPAI rules have been in effect since August 2, 2025. The Code of Practice was published on July 10, 2025 and has been signed by most major AI companies (Meta notably declined). Here is what you need to know.
What is General Purpose AI?
The models you encounter everywhere
General-purpose AI (GPAI) models are AI models that can be used for many different tasks without substantial modification. Think of large language models (LLMs) such as ChatGPT, Claude, Gemini, Llama and Mistral. These models can generate text, write code, answer questions, create images and much more. The EU AI Act sets specific requirements both for providers of these models and for organizations using them. Image generators such as DALL-E, Midjourney and Stable Diffusion also count as GPAI when they are broadly deployable.
ChatGPT / GPT-4o
OpenAI
Claude 3.5
Anthropic
Gemini 2.0
Google
Llama 3.3
Meta (open-source)
Status: Rules Are Now In Effect
What does this mean for you in January 2026?
The GPAI rules have been in effect since August 2, 2025. The final Code of Practice was published on July 10, 2025 and has been signed by Google, Microsoft, OpenAI, Anthropic, Amazon, Mistral and other major players, although Meta declined to sign. Providers must now maintain technical documentation, respect copyright and publish training-data summaries. Formal enforcement with fines starts in August 2026, but supervisory authorities can already inform and warn. For deployers (organizations using GPAI), the AI literacy obligation (Article 4) and the transparency obligation (Article 50) already apply.
July 10, 2025
Code of Practice published
Aug 2, 2025
GPAI rules in effect
Aug 2, 2026
Formal enforcement starts
Aug 2, 2027
Pre-existing models must comply
Two Categories of GPAI
Standard vs. Systemic Risk
The EU AI Act distinguishes two levels of GPAI. Standard GPAI models must maintain technical documentation via a Model Documentation Form, provide information to downstream providers, respect EU copyright law (including machine-readable opt-outs such as robots.txt), and publish a summary of their training data. GPAI models with systemic risk (trained with more than 10²⁵ FLOPs of compute; think GPT-4, Claude 3 Opus, Gemini Ultra) carry additional obligations: external audits, red teaming, systemic risk assessments, incident reporting to the European Commission, and extensive cybersecurity measures. A rough way to gauge the compute threshold is sketched after the overview below.
Standard GPAI
Documentation, transparency, copyright
Systemic Risk
>10Β²β΅ FLOPs training compute
Model Documentation Form
Standard documentation format
Extra obligations
Audits, red teaming, incident reporting
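To make the 10²⁵ FLOPs threshold concrete, here is a back-of-the-envelope sketch in Python. It uses the common approximation that training compute is roughly 6 × parameters × training tokens; that heuristic and the model sizes are illustrative assumptions, not the Act's prescribed measurement method.

```python
# Back-of-the-envelope check against the 10^25 FLOPs systemic-risk threshold.
# Uses the common approximation: training FLOPs ~= 6 * parameters * tokens.
# This heuristic is an assumption for illustration, not the Act's official method.

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption of systemic risk under the AI Act

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOPs."""
    return 6 * parameters * tokens

def classify(parameters: float, tokens: float) -> str:
    flops = estimate_training_flops(parameters, tokens)
    category = ("GPAI with systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD
                else "standard GPAI")
    return f"{flops:.2e} FLOPs -> {category}"

# Hypothetical model sizes, for illustration only.
print(classify(parameters=7e9, tokens=2e12))     # ~8.4e22 FLOPs -> standard GPAI
print(classify(parameters=1.8e12, tokens=13e12)) # ~1.4e26 FLOPs -> systemic risk
```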
Code of Practice
The code of conduct for GPAI (published July 10, 2025)
The Code of Practice provides a "presumption of compliance": if you follow the code, you are presumed to comply with the law, which is crucial for legal certainty. The code was developed in a multi-stakeholder process and covers four areas: 1) Governance: internal structure, responsibilities and C-level ownership; 2) Transparency: documentation of architecture, training data, capabilities and limitations; 3) Copyright: respect for copyrights, honoring robots.txt and opt-out procedures for rights holders; 4) Safety: red teaming, cybersecurity and incident management, specifically for systemic-risk models. A sketch of a robots.txt check follows the overview below.
Governance
C-level responsibility
Transparency
Model Documentation Form
Copyright
Robots.txt & opt-out
Safety
Red teaming & incident reporting
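In practice, the copyright area means honoring machine-readable opt-outs before collecting training data. Below is a minimal sketch using Python's standard-library robots.txt parser; the crawler name "ExampleTrainingBot" is hypothetical.

```python
# Minimal sketch: honoring a site's robots.txt before collecting a page
# for a training corpus. "ExampleTrainingBot" is a hypothetical crawler name.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleTrainingBot"

def may_fetch(url: str) -> bool:
    """Return True if robots.txt allows this user agent to fetch the URL."""
    parsed = urlparse(url)
    parser = RobotFileParser(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    parser.read()  # downloads and parses the site's robots.txt
    return parser.can_fetch(USER_AGENT, url)

url = "https://example.com/articles/some-page"
if may_fetch(url):
    print("Allowed: fetch, and record provenance for the training-data summary")
else:
    print("Disallowed: respect the opt-out and skip this URL")
```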
Transparency & AI Content Labeling
Article 50 and the draft Code for AI content
In addition to the GPAI Code of Practice, the Commission is working on a separate code of conduct for transparency around AI-generated content (Article 50). Providers must ensure that AI content is marked in a machine-readable way and detectable, via watermarking, metadata and fingerprinting. Deployers must label AI-generated content, especially deepfakes and content on matters of public interest. A distinction will be drawn between "fully AI-generated" and "AI-assisted" content, and detection is expected to become available as a service, free or at low cost. A metadata sketch follows the overview below.
Watermarking
Invisible marking in content
Metadata
Include provenance data
Labeling
Clear labels for end users
Detection API
Verification as a service
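As a concrete illustration of the metadata approach, the sketch below embeds simple provenance tags in a generated PNG and reads them back. Real deployments would more likely use a standard such as C2PA; the tag names here are hypothetical.

```python
# Minimal sketch: embedding provenance metadata in a generated PNG.
# Real systems would use a standard such as C2PA; the tag names below
# are hypothetical, for illustration only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_provenance(image: Image.Image, path: str, model_name: str) -> None:
    """Attach simple AI-provenance tags as PNG text chunks."""
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("generator", model_name)
    image.save(path, pnginfo=metadata)

def read_provenance(path: str) -> dict:
    """Read the text chunks back to check whether content is AI-labeled."""
    with Image.open(path) as img:
        return dict(img.text)

img = Image.new("RGB", (64, 64), "white")  # stand-in for actual model output
save_with_provenance(img, "output.png", model_name="example-image-model-1")
print(read_provenance("output.png"))  # {'ai-generated': 'true', 'generator': ...}
```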
What Does This Mean for You as User?
Using GPAI in your organization (January 2026)
If you use GPAI tools (such as ChatGPT, Copilot or Claude) in your organization, concrete obligations already apply. AI literacy (Article 4): since February 2025 you must train your employees, and you should document that training. Transparency (Article 50): inform users and customers when they are dealing with AI. Understand limitations: know the hallucination risks and limits of your tools. Policy: establish an internal AI policy with clear rules. Watch for provider risk: if you integrate a GPAI model into your own product and place it on the market, you may become the provider of the combined system, with all the associated obligations. A minimal disclosure sketch follows the overview below.
AI Literacy
Now mandatory: document it!
Transparency
Inform about AI use
Policy
Establish internal AI policy
Provider risk
Integration can make you the provider
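For the Article 50 transparency obligation, the disclosure itself can be straightforward. Here is a minimal sketch of a chat integration that tells users they are talking to an AI on the first turn; call_model and the disclosure wording are placeholders for your actual GPAI client and legally reviewed text.

```python
# Minimal sketch of an Article 50-style disclosure in a chat integration.
# The disclosure text and call_model are hypothetical placeholders;
# substitute your real GPAI client and legally reviewed wording.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Answers may contain errors; verify important information."
)

def call_model(prompt: str) -> str:
    """Placeholder for a real GPAI API call (e.g., an LLM chat endpoint)."""
    return f"(model answer to: {prompt})"

def handle_chat(prompt: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    answer = call_model(prompt)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(handle_chat("What are our opening hours?", first_turn=True))
```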
Ready to get started?
Discover how we can help your organization with EU AI Act compliance.