EU AI Act
Everything you need to know about the world's first AI legislation
From risk classifications to compliance deadlines, from fines to best practices. The definitive resource for AI professionals.
What is the EU AI Act?
An introduction to the world's first AI legislation
The EU AI Act is the world's first comprehensive legislation regulating artificial intelligence. The law entered into force on August 1, 2024 and introduces a risk-based framework that classifies AI systems into four categories: unacceptable risk (prohibited), high risk (strictly regulated), limited risk (transparency requirements) and minimal risk (free to use). The goal is to stimulate innovation while protecting the fundamental rights and safety of EU citizens.
World's first
The EU leads with the world's first comprehensive AI regulation.
Risk-based
Proportionate requirements based on the risk that AI systems pose.
Protection
Protects fundamental rights while stimulating innovation.
High fines
Up to €35 million or 7% of global annual turnover.
Core Concepts
The fundamental concepts you need to know
To understand the EU AI Act, you first need to know the core concepts. An AI system is defined as a machine-based system that, with varying degrees of autonomy, can generate outputs such as predictions, recommendations or decisions. The law distinguishes between providers (developers who place AI on the market) and deployers (organizations that use AI). There are also specific definitions for GPAI (general-purpose AI) like ChatGPT.
AI system definition
Machine-based system that can generate output with autonomy.
Provider
Developer who places the AI system on the market.
Deployer
Organization that uses the AI system professionally.
GPAI
General-purpose AI like large language models.
Risk Classification
The four risk levels of AI systems
The EU AI Act uses a risk-based approach with four categories. Unacceptable risk: prohibited applications like social scoring and manipulative AI. High risk: strictly regulated systems in critical sectors like HR, education and healthcare. Limited risk: systems with transparency requirements like chatbots. Minimal risk: most AI applications, free to use. The classification determines which obligations apply.
Unacceptable risk
Prohibited: social scoring, manipulative AI, biometric categorization.
High risk
Strict requirements for 8 sectors: HR, education, healthcare, etc.
Limited risk
Transparency requirements for chatbots and deepfakes.
Minimal risk
Free to use: spam filters, AI games, etc.
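The four-tier framework above can be sketched as a small lookup table. This is an illustrative summary only: the example use cases and one-line obligation descriptions are simplifications, and the `RISK_TIERS` structure and `obligation_for` helper are our own naming, not anything defined by the Act.

```python
# A minimal sketch of the AI Act's four risk tiers; examples and
# obligation summaries are illustrative, not exhaustive.
RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring", "manipulative AI"],
                     "obligation": "prohibited"},
    "high": {"examples": ["CV screening (HR)", "exam scoring (education)"],
             "obligation": "strict requirements (documentation, oversight)"},
    "limited": {"examples": ["chatbots", "deepfakes"],
                "obligation": "transparency requirements"},
    "minimal": {"examples": ["spam filters", "AI in games"],
                "obligation": "no specific obligations"},
}

def obligation_for(tier: str) -> str:
    """Return the one-line obligation summary for a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("limited"))  # → transparency requirements
```

In practice the hard part is the classification step itself (does a given system fall under an Annex III high-risk use case?), which requires legal analysis rather than a dictionary lookup.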
Compliance & Governance
How do you meet the requirements?
Compliance with the EU AI Act requires a systematic approach. For high-risk AI, you need to implement a risk management system, prepare technical documentation, ensure data governance and arrange human oversight. There are also specific obligations such as the FRIA (Fundamental Rights Impact Assessment) and conformity assessments. A good AI governance structure within your organization is essential.
Documentation
Extensive technical documentation is mandatory for high-risk AI.
Risk management
A continuous risk management system throughout the lifecycle.
Human oversight
Meaningful human control over AI decisions.
FRIA & DPIA
Impact assessments for fundamental rights and privacy.
Sector-specific Impact
What does the AI Act mean for your sector?
The EU AI Act has different impacts per sector. The financial sector faces overlap with EBA guidelines. The public sector must pay extra attention to fundamental rights. Startups and SMEs get proportionate requirements. For each sector: inventory your AI systems, determine the risk class and start with compliance preparation.
Financial sector
Extra guidelines via EBA for credit and insurance decisions.
Public sector
Special attention to fundamental rights and transparency.
Startups & SMEs
Proportionate requirements and access to regulatory sandboxes.
Healthcare & Education
Strict requirements for AI in diagnosis and assessment.
Supervision & Enforcement
Who supervises and what are the fines?
Enforcement of the EU AI Act is coordinated by the European AI Office at EU level and national supervisors in each member state. In the Netherlands this is the Algorithm Supervisor. Fines are significant: up to €35 million or 7% of global annual turnover for prohibited practices, up to €15 million or 3% for other violations. Incident reporting is mandatory for serious malfunctions.
AI Office
The European AI Office coordinates enforcement at EU level.
National supervisor
Each member state designates a national supervisor.
Fines
Up to €35 million or 7% of global turnover.
Incident reporting
Reporting obligation within 15 days for serious malfunctions.
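The two fine tiers above can be expressed as a small calculation. One detail not spelled out in the summary: under Article 99 of the Act, the applicable maximum for most organizations is the higher of the fixed amount and the turnover percentage. The `max_fine` function below is an illustrative sketch under that assumption, not a legal tool.

```python
def max_fine(turnover_eur: int, prohibited: bool) -> int:
    """Illustrative upper bound of an EU AI Act fine: the higher of the
    fixed cap and the percentage of global annual turnover (Art. 99)."""
    fixed, pct = (35_000_000, 7) if prohibited else (15_000_000, 3)
    return max(fixed, turnover_eur * pct // 100)

# A company with €1 billion global turnover, prohibited practice:
# 7% of €1,000,000,000 = €70,000,000, which exceeds the €35M fixed cap.
print(max_fine(1_000_000_000, prohibited=True))  # → 70000000
```

For smaller companies the fixed cap dominates: the same prohibited practice at €100 million turnover (7% = €7 million) is still capped at €35 million.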
Related articles
Supervision: The agencies enforcing the AI Act
Who are the supervisors and what is their role?
Enforcement readiness for organizations
How do you prepare for audits and inspections?
Incident reporting: The consultation explained
When and how should you report incidents?
AI Literacy
The mandatory knowledge requirement from February 2025
Since February 2, 2025, all organizations using AI are required to ensure their staff is sufficiently AI literate. This applies to everyone working with AI, regardless of the risk class of the AI system. AI literacy includes the ability to understand, critically evaluate and responsibly use AI. Organizations must develop policies and offer training.
Already in effect
The obligation applies since February 2, 2025.
Broad scope
Applies to all organizations using AI, regardless of size.
Training required
Organizations must adequately train personnel.
Policy needed
Develop a testable AI literacy policy.
General-Purpose AI
Rules for ChatGPT and other large models
GPAI (general-purpose AI) models like ChatGPT, Claude and Gemini are subject to specific rules from August 2025. Providers must be transparent about training data, respect copyright and provide technical documentation. Extra requirements apply to models with systemic risk (more than 10²⁵ FLOPs of training compute). The EU is working on a Code of Practice that providers can follow to demonstrate compliance.
August 2025
GPAI rules take effect on August 2, 2025.
Documentation
Technical documentation and training data transparency required.
Copyright
Respect for copyrights in training data.
Systemic risk
Extra requirements for models >10²⁵ FLOPs.
Key Deadlines
All implementation dates at a glance
August 1, 2024: EU AI Act enters into force
The law officially enters into force.
February 2, 2025: Prohibited AI & AI Literacy
The first obligations take effect.
August 2, 2025: GPAI rules
Rules for general-purpose AI apply.
August 2, 2026: High-risk obligations
The main high-risk AI requirements apply.
August 2, 2027: Full compliance
The transition period ends.
Frequently Asked Questions
Answers to the most common questions about the EU AI Act
Ready to get started?
Discover how we can help your organization with EU AI Act compliance.