
Who Enforces the EU AI Act? Meet the Key Players


Important: The EU AI Act is not a paper tiger. With a completely new supervisory system becoming operational in the coming years, it's crucial that organizations understand which agencies and authorities they'll deal with and what their specific powers are.

The EU AI Act will gradually come into force over the coming years and introduces a completely new supervisory system. For providers and users of AI, a practical question arises: who will actually enforce it, who will you be dealing with, and which authority handles which type of AI application?

In this blog, I'll guide you through the most important agencies and authorities surrounding the EU AI Act. Not from the perspective of legal detail per article, but from the question: who does what, and what does that mean for your organization.

1. The European AI Office: The New Nerve Center

At the heart of the system is the European AI Office, a new service within the European Commission. The AI Office is positioned as the knowledge center for AI in Europe and as the foundation for a uniform governance system across all member states.

Core Tasks of the AI Office

The European AI Office functions as the central coordination point for AI governance in Europe. With specific focus on general-purpose AI models (GPAI) and systemic risks, the office plays a crucial role in ensuring uniform compliance with the AI Act across all 27 member states.

What Does This Office Do Concretely?

GPAI Model Supervision

Monitoring general-purpose AI models, including large foundation models like GPT, Claude, and Gemini

AI Safety & Systemic Risks

European approach to AI safety, including monitoring systemic risks that could threaten fundamental rights

Coordination & Compliance

Coordination of national authorities, collecting notifications, incidents, and reports

Guidelines & Implementation

Supporting the development of guidelines, model codes, and implementing acts under the AI Act

The AI Office is internally divided into several thematic units, such as Regulation and Compliance, AI Safety, Excellence in AI and Robotics, and AI for Societal Good. This shows that it's not just legal supervision, but also policy, innovation, and technical expertise.

What Does This Mean for Companies?

The AI Office plays a leading role with GPAI models and the associated Code of Practice. In July 2025, a voluntary code for general-purpose AI was presented, serving as a stepping stone toward full compliance with the AI Act.

Practical implication: If your organization develops or integrates large models, the line to Brussels will increasingly run through this AI Office, for example when submitting notifications about systemic risks, participating in sandboxes, and applying technical standards.

2. The AI Board, Advisory Forum, and Scientific Panel: The "Administrative Triangle"

In addition to the AI Office, the AI Act introduces a set of European coordination bodies: the AI Board, the Advisory Forum, and the Scientific Panel of Independent Experts. Together they form the administrative triangle around the AI Office.

| Body | Composition | Primary Role |
|---|---|---|
| AI Board | Representatives from member states | Coordination of enforcement & interpretation |
| Advisory Forum | Businesses, SMEs, social partners, NGOs | Stakeholder input & practical feedback |
| Scientific Panel | Up to 60 independent AI experts | Technical expertise & risk assessment |

AI Board: Coordination Between Member States

The AI Board is a body with representatives from member states that:

  • Coordinates the implementation of the AI Act between member states
  • Discusses enforcement strategies and priorities
  • Advises the Commission and the AI Office on interpreting the regulation

Why the AI Board Matters

In practice, the Board becomes the platform where national supervisors share their enforcement experiences. Expect much of the "soft law", such as guidelines, best practices, and joint interpretations, to be prepared here before the AI Office or the Commission releases it officially.

Advisory Forum: The Voice of Practice

The Advisory Forum consists of representatives from businesses, SMEs, social partners, standardization bodies, and civil society organizations. This forum provides input on policy and implementation measures.

Scientific Panel: Technical Expertise

The Scientific Panel of Independent Experts is perhaps the most technically oriented body in the system. Their tasks focus heavily on general-purpose AI:

  • Developing assessment methods and tools
  • Advising on model classification and systemic risks
  • Formulating warnings about emerging risks
  • Supporting national authorities on technically complex matters

Crucial for GPAI providers: The way risks are measured, tested, and qualified will largely be established through this network of experts. This panel helps determine how "systemic risk" is operationalized in practice.

3. National Market Surveillance Authorities and Other Competent Authorities

The AI Act is European legislation, but day-to-day enforcement takes place largely at the national level. Each member state must designate at least two types of national bodies:

1. Market Surveillance Authorities (MSA): monitor compliance when AI systems are placed on the market or put into use.

2. Notifying Authorities: designate notified bodies and supervise conformity assessments.

Market Surveillance Authority: The Enforcer

Powers of Market Surveillance Authorities

  • Investigations following complaints or signals about non-compliance
  • Requesting technical documentation and declarations of conformity
  • Conducting inspections, audits, and tests on AI systems
  • Imposing measures or fines of up to €35 million or 7% of global annual turnover, whichever is higher
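The fine ceiling above is a "whichever is higher" rule. A minimal arithmetic sketch (illustrative only, not legal advice, with a hypothetical turnover figure):

```python
# Illustrative sketch of the AI Act's top fine tier: the HIGHER of a fixed
# amount (EUR 35 million) and 7% of worldwide annual turnover.
FIXED_CEILING_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine_ceiling(global_turnover_eur: float) -> float:
    """Upper bound of the top fine tier for a given annual turnover."""
    return max(FIXED_CEILING_EUR, TURNOVER_SHARE * global_turnover_eur)

# Hypothetical company with EUR 2 billion turnover: 7% is EUR 140 million,
# which exceeds the EUR 35 million floor.
print(round(max_fine_ceiling(2_000_000_000)))  # → 140000000
```

For smaller companies the fixed €35 million amount is the binding ceiling, since 7% of their turnover falls below it.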

In many member states, such a role will be fulfilled by existing bodies, such as a consumer authority, competition authority, or technical inspection service. In sectors with heavily regulated AI applications, such as financial services or medical devices, existing sectoral supervisors are likely to play a role alongside or in combination with the MSA.

Notifying Authority: The Certifier

In addition, each member state must designate a notifying authority. This authority is responsible for:

  • Designating and supervising notified bodies
  • Communicating information about those notified bodies to the Commission and other member states

For organizations, this means: you're not only dealing with Brussels, but especially with a national contact point that can request your AI systems, assess them, and, in extreme cases, have them removed from the market.

4. Notified Bodies and Conformity Assessment: The Inspection Bodies

For certain categories of high-risk AI systems, self-assessment is sufficient. For others, external conformity assessment is mandatory. This is where notified bodies come in.

What Do Notified Bodies Do?

Notified bodies are independent institutions designated by the notifying authority to conduct conformity assessments. They function as "inspection bodies" for high-risk AI systems where the EU deems an external review necessary.

Tasks of Notified Bodies

Quality Systems

Assess whether the provider's quality management system complies with the AI Act

Documentation

Evaluate technical documentation and risk management measures

Audits & Tests

Can conduct audits and tests on the AI system in production environments

Certification

Issue certificates or reports necessary to place the product on the market

This structure is already familiar from other product legislation, such as medical devices or machinery safety, and the AI Act builds on the same model for those high-risk AI systems that require an external review.

Practical implication for providers: In addition to internal compliance work, you must account for an external assessment process, with associated costs (often €10,000-€100,000+), lead times (3-12 months), and potential findings that require adjustments.

5. Data Protection Authorities and the EDPB: GDPR and AI Act Overlap

Because many AI systems process personal data, the role of data protection authorities is essential. The AI Act explicitly confirms that the GDPR remains fully applicable to AI systems that process personal data.

EDPB Statement July 2024

The European Data Protection Board adopted an important statement in July 2024 on the role of DPAs within the AI Act framework. Key point: DPAs should be designated as market surveillance authorities for high-risk AI systems in law enforcement, border control, justice, and democratic processes.

Why DPAs Are Crucial for AI Supervision

| Aspect | GDPR | AI Act | Supervisor |
|---|---|---|---|
| Lawfulness of processing | ✓ | - | DPA |
| Transparency & information duty | ✓ | ✓ | DPA + MSA |
| AI system risk management | - | ✓ | MSA |
| Automated decision-making | ✓ | ✓ | DPA + MSA |
| Bias & discrimination | ✓ (indirect) | ✓ | DPA + MSA |
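The overlap in the table can be read as a simple lookup from compliance aspect to the supervisor(s) likely involved. A minimal sketch, where the aspect names mirror the table and are illustrative rather than an exhaustive legal mapping:

```python
# Sketch of the GDPR / AI Act supervisory overlap as a lookup table.
# Aspect names follow the table above; this is illustrative, not legal advice.
SUPERVISOR_MAP = {
    "lawfulness of processing": {"DPA"},
    "transparency & information duty": {"DPA", "MSA"},
    "ai system risk management": {"MSA"},
    "automated decision-making": {"DPA", "MSA"},
    "bias & discrimination": {"DPA", "MSA"},
}

def supervisors_for(aspect: str) -> set[str]:
    """Return the supervisor(s) likely involved for a given compliance aspect."""
    return SUPERVISOR_MAP.get(aspect.lower(), set())

print(sorted(supervisors_for("Automated decision-making")))  # → ['DPA', 'MSA']
```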

In other words: for many AI applications that process personal data, there's a good chance that your familiar privacy supervisor (such as the Dutch Autoriteit Persoonsgegevens) will also have a role under the AI Act.

EDPB Coordination: The EDPB itself takes a coordinating position. The existence of EDPB task forces around major AI providers already shows how privacy supervision and AI supervision intersect, for example in joint investigations into transparency and data use of popular chatbots.

6. What Does This Look Like for Your Organization in Practice?

All these names and bodies can remain abstract until you translate them into concrete situations. Some typical scenarios:

Scenario 1: An AI-Based HR Screening Tool

Context: A provider of an AI system for candidate selection

Relevant supervisors:

  • → National MSA: assesses the tool as high-risk AI (impact on access to employment)
  • → National DPA: examines legal basis, transparency, data subject rights, and bias
  • → AI Office: indirectly, via guidance on the underlying GPAI model

For employers: labor law + privacy law + AI Act obligations converge

Scenario 2: An Industrial Producer with Predictive Maintenance AI

Context: A manufacturer using AI to monitor machines and predict failures

Relevant supervisors:

  • → Notified body: for conformity assessments with an external assessment requirement
  • → Technical market supervisor: product safety
  • → Sectoral supervisors: when deployed in critical infrastructure

Emphasis on safety, reliability, and robustness of AI

Scenario 3: A Public Organization with Citizen-Facing AI Services

Context: A municipality using AI for decision-making on benefits or permits

Relevant supervisors:

  • → National AI market supervisor: high-risk AI rules
  • → National DPA: profiling, automated decision-making, transparency
  • → Sectoral supervisors: depending on the domain (e.g., social security)

Here it becomes clear why AI Board and AI Office are crucial for consistency

7. How Can You Prepare Now?

Although many provisions will only come fully into force in 2026 and beyond, you can already take several steps as an organization to prepare for this supervisory landscape:

Step 1: Map Out Supervisors

Identify which authorities will be relevant for your AI applications: privacy, product safety, sectoral, and soon the national AI authority.
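One way to start is a lightweight inventory that records, per AI application, the supervisors you expect to be relevant. A minimal sketch, where all system names and mappings are hypothetical examples rather than an official taxonomy:

```python
# Hypothetical AI-system inventory mapping each application to the supervisors
# expected to be relevant. Names and mappings are illustrative examples only.
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    name: str
    risk_category: str                      # e.g. "high-risk", "limited-risk", "GPAI"
    supervisors: list[str] = field(default_factory=list)

inventory = [
    AISystemEntry("HR screening tool", "high-risk",
                  ["National MSA", "National DPA"]),
    AISystemEntry("Predictive maintenance", "high-risk",
                  ["Notified body", "Technical market supervisor"]),
]

# Quick overview: which supervisors appear across the portfolio?
all_supervisors = sorted({s for entry in inventory for s in entry.supervisors})
print(all_supervisors)
```

Even a simple list like this makes it visible early which authorities you need a relationship with, and where obligations from different regimes converge on the same system.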

๐Ÿ‘๏ธ

Step 2: Follow the AI Office

Monitor publications on GPAI, codes of practice, and sandboxes. This is where the first concrete implementations emerge.

Step 3: Monitor National Legislation

Watch for designations of market surveillance authority and notifying authority. This determines who will come knocking.

Step 4: Multidisciplinary Collaboration

Ensure DPO, CISO, and legal/compliance work together. AI supervision is multidisciplinary, not an exclusive privacy matter.

Step 5: Prepare for Audits

Reserve capacity for audits, information requests, and conformity assessments by notified bodies.

Proactive Preparation Pays Off

Organizations that now start mapping relevant supervisors and building relationships experience less stress when enforcement actually starts. Moreover, they can contribute their practical experiences in early-stage consultations (such as the GPAI Code of Practice) and help shape workable compliance frameworks.

Conclusion

The EU AI Act introduces a layered, European governance architecture. The European AI Office, AI Board, and Scientific Panel form the top layer. National authorities, notified bodies, and data protection authorities ensure implementation and enforcement close to practice.

The core: The better you understand this playing field, the easier it becomes to set up AI projects so that you not only comply with the letter of the law but are also prepared for questions from various supervisors.

The message is clear: those who now understand which agencies and authorities will be at the controls can proactively build compliance and trust. And that's what it's ultimately about: AI that people can trust, supported by supervision that works.
