Responsible AI Platform

Ireland becomes first EU member state to publish national AI Act legislation: lessons for the rest of Europe


On 4 February 2026, Ireland published the General Scheme of the Regulation of Artificial Intelligence Bill 2026. With this move, it became the first EU member state to put forward concrete national legislation for implementing the AI Act. While most countries are still working out how to structure their supervision landscape, Dublin has made its choices - and those choices deserve attention.

What Ireland decided

The AI Act is an EU regulation with direct legal effect in all member states. But it deliberately leaves room for national choices on a critical point: supervision and enforcement. Article 70 requires each member state to designate at least one national competent authority and establish a single point of contact for the European Commission and other member states.

Ireland opted for what it calls a distributed model. Existing sectoral regulators receive AI supervision powers within their own domains. The financial regulator handles AI in banking, the telecom regulator covers AI in communications, and so on.

On top of this sectoral layer sits a new body: the AI Office of Ireland (Oifig Intleachta Shaorga na hÉireann). This will be an independent statutory body under the Department of Enterprise, Tourism and Employment. The AI Office gets three core functions:

  1. Single Point of Contact for the EU and other member states
  2. Central coordination between the various sectoral regulators
  3. Enforcement where no sectoral regulator has jurisdiction, plus rules on penalties

Core of the Irish approach: no entirely new supervisory apparatus, but smart use of existing sectoral expertise with a central coordination point. The AI Office functions as the hub, not as an all-powerful super-regulator.
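The hub-and-spoke logic described above can be sketched as a lookup with a fallback. This is an illustrative sketch only: the sector names and regulator labels are hypothetical placeholders, not the bill's actual jurisdictional mapping.

```python
# Illustrative sketch of a distributed supervision model with a central hub.
# Sector names and regulator labels are hypothetical placeholders, not the
# actual jurisdictional assignments in the Irish bill.

SECTORAL_REGULATORS = {
    "banking": "financial regulator",
    "communications": "telecom regulator",
}

def competent_authority(sector: str) -> str:
    """Route to the sectoral regulator if one has jurisdiction;
    otherwise fall back to the central AI Office."""
    return SECTORAL_REGULATORS.get(sector, "AI Office of Ireland")
```

In this sketch, `competent_authority("banking")` resolves to the sectoral regulator, while an unmapped sector falls through to the AI Office, mirroring its role as enforcer of last resort where no sectoral regulator has jurisdiction.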

How the Netherlands compares

The Netherlands also chose a distributed supervision model, but the specifics differ in important ways. Two main supervisory bodies have been designated:

  • The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) as coordinator and supervisor for AI systems touching fundamental rights and personal data
  • The Authority for Consumers and Markets (ACM) for AI systems in the marketplace, focusing on fair competition and consumer protection

Additional sectoral regulators like the financial markets authority (AFM), the central bank (DNB), and the Healthcare Inspectorate play roles within their own domains. The Dutch model is broadly comparable to the Irish one, but with a key difference: the Netherlands has not (yet) established a separate AI office as a standalone body. The coordination role sits with the AP, which simultaneously serves as a substantive supervisor.

Three lessons from Dublin

1. A dedicated coordination body prevents confusion

Ireland's choice to create a standalone AI Office as coordinator - separate from the substantive supervisors - has clear advantages. A dedicated body focused entirely on coordination can operate neutrally. It does not need to balance its own enforcement interests against the coordination role.

In the Netherlands, the coordination role and the supervisory role converge at the AP. That is more efficient, but it creates a potential tension. What happens when the AP as coordinator sets a policy line that affects its own enforcement practice? In practice, that dual role will require careful navigation.

2. Speed matters in implementation

Ireland demonstrates that it is possible to present a concrete legislative proposal roughly eighteen months after the AI Act entered into force (August 2024). That speed matters, because the first enforcement moments are approaching fast. The ban on prohibited AI practices has been in effect since February 2025. Broader enforcement of high-risk systems starts in August 2026.

The Netherlands has designated its supervisory authorities, but a comparable legislative proposal for the procedural and organizational underpinning of that supervision has not yet materialized. This means the AP and ACM are operating based on the regulation's direct effect, without additional national legislation specifying their precise powers, procedures, and sanctioning options.

3. The sanctions regime needs clarity

The Irish bill contains explicit provisions on penalties for infringements. The AI Act sets maximum fines in Article 99 (up to 35 million euros or 7% of global annual turnover), but leaves the precise design of the sanctions regime to member states. Ireland is now addressing that legislatively.

The Netherlands will need to make similar choices. Which fine categories apply? How does the AI Act sanctions regime relate to existing fining powers of the AP (under the GDPR) and the ACM (under competition law)? That clarity matters not just for supervisors, but especially for organizations that need to know where they stand.
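For the top fine category, Article 99 applies a "whichever is higher" rule: the ceiling is 35 million euros or 7% of worldwide annual turnover, whichever is greater. A minimal sketch of that arithmetic (the function name is ours, not the regulation's):

```python
def max_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements
    (prohibited AI practices) under Article 99 of the AI Act:
    EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140 million,
# which exceeds the EUR 35 million floor.
print(max_fine_ceiling(2_000_000_000))  # 140000000.0
```

For smaller companies the fixed amount dominates: at 100 million euros turnover, 7% is only 7 million, so the ceiling stays at 35 million.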

Mind the deadline: broad enforcement for high-risk AI systems begins in August 2026. Organizations operating across multiple EU member states must account for different national supervisory structures. The Irish AI Office may operate quite differently from the Dutch AP or the French CNIL.

What this means for organizations

For companies and institutions that develop or deploy AI systems, Ireland's move has direct relevance:

  • Multinationals with operations in Ireland (and there are many, given Ireland's strong tech presence) now have visibility into the concrete supervisory landscape. The AI Office becomes their primary point of contact.
  • Dutch organizations would do well to follow developments in Ireland as a reference point. The choices Dublin makes around sanctions and jurisdictional divisions may preview what The Hague ultimately decides.
  • Pan-European compliance becomes more complex as member states build divergent supervisory structures. An AI system assessed by the AI Office in Ireland might fall under the AP in the Netherlands and yet another body in Germany.

Looking ahead

Ireland has taken the first step. Other member states will follow, and the diversity of national approaches will become visible over the coming months. For the AI Act as a whole, this is a test: can a European regulation function effectively when each member state designs its own supervisory architecture?

The coming months will reveal whether the Dutch model - coordination at an existing supervisor - works as effectively as the Irish model with a dedicated AI Office. What is certain: organizations can no longer wait for complete national clarity. The AI Act applies now, and enforcement has begun.