Responsible AI Platform
Complete Guide 2025

EU AI Act

Everything you need to know about the world's first comprehensive AI legislation

From risk classifications to compliance deadlines, from fines to best practices. The definitive resource for AI professionals.

€35M
Maximum fine
8
High-risk sectors
2026
Key deadline
~20 min read · Last updated: December 2025
📋

What is the EU AI Act?

An introduction to the world's first comprehensive AI legislation

The EU AI Act is the world's first comprehensive legislation regulating artificial intelligence. The law entered into force on August 1, 2024 and introduces a risk-based framework with four categories: unacceptable risk (prohibited), high risk (strictly regulated), limited risk (transparency requirements), and minimal risk (free to use). The goal is to stimulate innovation while protecting the fundamental rights and safety of EU citizens.

🌍

World's first

The EU leads with the world's first comprehensive AI regulation.

📊

Risk-based

Proportionate requirements based on the risk that AI systems pose.

🛡️

Protection

Protects fundamental rights while stimulating innovation.

💶

High fines

Up to €35 million or 7% of global annual turnover.


📖

Core Concepts

The fundamental concepts you need to know

To understand the EU AI Act, you first need to know the core concepts. An AI system is defined as a machine-based system that, with varying degrees of autonomy, can generate outputs such as predictions, recommendations or decisions. The law distinguishes between providers (developers who place AI on the market) and deployers (organizations that use AI). There are also specific definitions for GPAI (general-purpose AI) like ChatGPT.

🔧

AI system definition

Machine-based system that can generate output with autonomy.

👤

Provider

Developer who places the AI system on the market.

🏭

Deployer

Organization that uses the AI system professionally.

🌐

GPAI

General-purpose AI like large language models.


⚠️

Risk Classification

The four risk levels of AI systems

The EU AI Act uses a risk-based approach with four categories. Unacceptable risk: prohibited applications like social scoring and manipulative AI. High risk: strictly regulated systems in critical sectors like HR, education and healthcare. Limited risk: systems with transparency requirements like chatbots. Minimal risk: most AI applications, free to use. The classification determines which obligations apply.

🚫

Unacceptable risk

Prohibited: social scoring, manipulative AI, biometric categorization based on sensitive characteristics.

🔴

High risk

Strict requirements for 8 sectors: HR, education, healthcare, etc.

🟡

Limited risk

Transparency requirements for chatbots and deepfakes.

🟢

Minimal risk

Free to use: spam filters, AI games, etc.
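As an illustrative sketch, the four tiers above can be modeled as a simple lookup. The example use cases and the `risk_tier` helper below are hypothetical; real classification requires legal analysis of the Act's annexes, not keyword matching.

```python
# Hypothetical, simplified mapping of example use cases to the Act's
# four risk tiers; not a substitute for legal assessment.
RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative AI"},
    "high": {"CV screening", "exam scoring", "medical diagnosis"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "game AI"},
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"
```

The tier returned determines which obligations apply: a "high" result triggers the full compliance regime described later, while "minimal" systems remain free to use.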


✅

Compliance & Governance

How do you meet the requirements?

Compliance with the EU AI Act requires a systematic approach. For high-risk AI, you need to implement a risk management system, prepare technical documentation, ensure data governance and arrange human oversight. There are also specific obligations such as the FRIA (Fundamental Rights Impact Assessment) and conformity assessments. A good AI governance structure within your organization is essential.

📝

Documentation

Extensive technical documentation is mandatory for high-risk AI.

⚙️

Risk management

A continuous risk management system throughout the lifecycle.

👥

Human oversight

Meaningful human control over AI decisions.

📊

FRIA & DPIA

Impact assessments for fundamental rights and privacy.
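A systematic approach lends itself to a simple tracking structure. The sketch below is hypothetical: the item names paraphrase the obligations mentioned in this guide, not the Act's article headings.

```python
# Hypothetical tracker for the high-risk obligations named in the text.
HIGH_RISK_OBLIGATIONS = (
    "risk management system",
    "technical documentation",
    "data governance",
    "human oversight",
    "fundamental rights impact assessment (FRIA)",
    "conformity assessment",
)

def open_items(completed: set) -> list:
    """Return obligations not yet marked complete."""
    return [item for item in HIGH_RISK_OBLIGATIONS if item not in completed]
```

In practice each item would carry owners, evidence, and review dates; the point is that high-risk compliance is a checklist to be worked through continuously, not a one-off exercise.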


đŸĸ

Sector-specific Impact

What does the AI Act mean for your sector?

The EU AI Act has different impacts per sector. The financial sector faces overlap with EBA guidelines. The public sector must pay extra attention to fundamental rights. Startups and SMEs get proportionate requirements. For each sector: inventory your AI systems, determine the risk class and start with compliance preparation.

💰

Financial sector

Extra guidelines via EBA for credit and insurance decisions.

đŸ›ī¸

Public sector

Special attention to fundamental rights and transparency.

🚀

Startups & SMEs

Proportionate requirements and access to regulatory sandboxes.

đŸĨ

Healthcare & Education

Strict requirements for AI in diagnosis and assessment.


đŸ‘ī¸

Supervision & Enforcement

Who supervises and what are the fines?

Enforcement of the EU AI Act is coordinated by the European AI Office at EU level and by national supervisors in each member state. In the Netherlands this is the Algorithm Supervisor. Fines are significant: up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for other violations. Incident reporting is mandatory for serious malfunctions.

🇪🇺

AI Office

The European AI Office coordinates enforcement at EU level.

🇳🇱

National supervisor

Each member state designates a national supervisor.

💸

Fines

Up to €35 million or 7% of global turnover.

📢

Incident reporting

Reporting obligation within 15 days for serious malfunctions.
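The fine ceilings follow a "whichever is higher" rule: the fixed cap or the turnover percentage. A minimal sketch of that arithmetic, using the figures from this section (the `max_fine` helper is illustrative, not from the Act):

```python
# Fine ceilings from the text: €35M or 7% of global annual turnover for
# prohibited practices, €15M or 3% for other violations — whichever is higher.
def max_fine(global_turnover_eur: float, prohibited: bool) -> float:
    cap, pct = (35_000_000, 0.07) if prohibited else (15_000_000, 0.03)
    return max(cap, pct * global_turnover_eur)

# A firm with €1 billion turnover committing a prohibited practice:
# max(€35M, 7% of €1B = €70M) → €70M
```

For large companies the percentage dominates, which is why the headline "€35 million" figure understates the exposure of multinationals.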


🎓

AI Literacy

The mandatory knowledge requirement from February 2025

Since February 2, 2025, all organizations using AI are required to ensure their staff is sufficiently AI literate. This applies to everyone working with AI, regardless of the risk class of the AI system. AI literacy includes the ability to understand, critically evaluate and responsibly use AI. Organizations must develop policies and offer training.

📅

Already in effect

The obligation applies since February 2, 2025.

👥

Broad scope

Applies to all organizations using AI, regardless of size.

📚

Training required

Organizations must adequately train personnel.

📋

Policy needed

Develop a demonstrable AI literacy policy.


🤖

General-Purpose AI

Rules for ChatGPT and other large models

GPAI (General-Purpose AI) models like ChatGPT, Claude and Gemini are subject to specific rules from August 2025. Providers must be transparent about training data, respect copyright and provide technical documentation. For models with systemic risk (>10²⁵ FLOPs), extra requirements apply. The EU is working on a Code of Practice that providers must follow.

📅

August 2025

GPAI rules take effect on August 2, 2025.

📄

Documentation

Technical documentation and training data transparency required.

©️

Copyright

Respect for copyrights in training data.

⚡

Systemic risk

Extra requirements for models >10²⁵ FLOPs.
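The systemic-risk test is a single compute threshold, which can be stated in two lines. The helper name below is illustrative; only the 10²⁵ FLOPs figure comes from the text.

```python
# Threshold from the text: models trained with more than 10^25
# floating-point operations are presumed to pose systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def has_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS
```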


Key Deadlines

All implementation dates at a glance

Done

EU AI Act enters force

The law officially enters into force on August 1, 2024.

Done

Prohibited AI & AI Literacy

First obligations take effect on February 2, 2025.

Now

GPAI rules

Rules for general-purpose AI, effective August 2, 2025.

Coming

High-risk obligations

Main high-risk AI requirements apply from August 2, 2026.

Coming

Full compliance

Transition period ends on August 2, 2027.

Frequently Asked Questions

Answers to the most common questions about the EU AI Act

Ready to get started?

Discover how we can help your organization with EU AI Act compliance.

500+
Professionals trained
50+
Organizations helped