
AGI and the EU AI Act: What Are We Actually Talking About?

7 min read

Artificial General Intelligence is not a magic endpoint, but a spectrum. The EU AI Act doesn't regulate it as a label, but does cover the building blocks that form an AGI-like system in practice.

Key point: The EU AI Act doesn't mention "AGI" as a separate category. But future AGI-like systems do fall under the rules for general-purpose AI models (GPAI) and risk-based requirements for specific applications.

AGI: What Are We Actually Talking About?

Artificial General Intelligence (AGI) is the label often applied to an AI system that isn't just good at one task, but can reason broadly, and learn and generalize across many different domains. It is not yet a clearly defined concept: even major players and researchers use different definitions, which immediately explains why "regulating AGI" is harder than it sounds.

AGI as a spectrum

It's useful to see AGI not as one magical endpoint, but as a spectrum: systems become more broadly deployable, more autonomous, better at multi-step tasks, and therefore also harder to predict in new contexts. Precisely that combination — broad deployability plus unpredictability at the edges of use — is where governance and legislation become relevant.

Is AGI Regulated in the EU AI Act?

The EU AI Act doesn't mention "AGI" as a separate category. But that doesn't mean a future AGI-like system falls outside its scope. The AI Act fundamentally regulates:

  1. AI systems based on risk
  2. General-purpose AI models (GPAI), with additional requirements for the most powerful models that can cause "systemic risk"

The practical translation: if an organization offers or integrates a very capable, broadly deployable model, the discussion will usually run through the GPAI rules and through the question of whether a specific application is high-risk. The question is not "is this AGI?" but "what can this model do?", "how is it deployed?", and "what harm can occur at scale?". (EC Digital Strategy)

Timeline: The AI Act entered into force on August 1, 2024; prohibited practices and AI literacy obligations apply from February 2, 2025; governance and GPAI obligations apply from August 2, 2025. In July 2025, the General-Purpose AI Code of Practice was published. (EC Digital Strategy)

Where Would AGI "Land" in the AI Act?

1) AGI as a General-Purpose AI Model or System

An AGI-like model is in practice almost by definition general-purpose: deployable for many tasks and integrable into many systems. The Dutch government guidance explains this clearly: an AI model is a component, an AI system requires additional elements (such as an interface), and general-purpose models and systems are subject to their own requirements. (Government.nl AI Act Guide)

For providers of general-purpose AI models, there are four core obligations:

  • Technical documentation
  • Information for downstream integrators
  • A copyright policy for training
  • A summary of training data

2) AGI as "Systemic Risk" GPAI

If a model is so large and capable that it can cause risks at scale, additional obligations come into play:

  • Model evaluations: systematic evaluation of capabilities and risks
  • Risk mitigation: measures to limit systemic risks
  • Incident reporting: reporting serious incidents to the AI Office
  • Cybersecurity: appropriate security measures

The Commission explains in its Q&A that "systemic risks" can involve large-scale harm, such as lowering thresholds for CBRN misuse or control problems with autonomous models. (EC GPAI Q&A)

3) AGI Deployed in High-Risk Context

Even if the model is general-purpose, the application can still be high-risk, depending on the domain and purpose. Think of recruitment and selection, creditworthiness assessment, access to education, or critical infrastructure.

Note: The Dutch guidance explicitly warns that as a deployer you can become a provider when you deploy a general-purpose system for a high-risk purpose, and that compliance can then be difficult. (Government.nl AI Act Guide)

Responsible Implementation: Seven Steps

If you want to implement AGI responsibly, you need an approach that is simultaneously legal, technical and organizational.

The seven steps

1) Define the boundaries of the system. Document which tasks are allowed, which are not, and what autonomy you permit.
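
One way to keep this documentation enforceable rather than a shelf document is to make it machine-readable, so boundaries can be checked at runtime. A minimal sketch in Python; the names (Autonomy, TaskBoundary, SYSTEM_BOUNDARIES) and the example entries are illustrative assumptions, not terms from the AI Act:

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = "suggest_only"        # human writes the final output
    DRAFT_WITH_REVIEW = "draft_review"   # assistant drafts, a human approves
    AUTONOMOUS = "autonomous"            # assistant acts without per-action approval

@dataclass(frozen=True)
class TaskBoundary:
    task: str
    allowed: bool
    max_autonomy: Autonomy
    rationale: str

# Illustrative entries; task names and rationales are assumptions.
SYSTEM_BOUNDARIES = [
    TaskBoundary("summarize_policy", True, Autonomy.AUTONOMOUS, "low impact, internal use"),
    TaskBoundary("draft_decision_memo", True, Autonomy.DRAFT_WITH_REVIEW, "feeds into decisions"),
    TaskBoundary("screen_job_applicants", False, Autonomy.SUGGEST_ONLY, "high-risk domain, not permitted"),
]
```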

2) Create a role and chain map. Who is the provider, the deployer, the integrator? This determines which obligations you bear.

3) Classify per use-case, not per model name. Stop discussions like "is this AGI?". Assess per application: does it fall under prohibited practices, high-risk, transparency obligations, or GPAI?
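
As a sketch of what this can look like in practice: a lookup from use-case to risk tier, where unknown use-cases are blocked by default until they have been assessed. The tier names and example assignments are simplified illustrations, not legal classifications:

```python
# Simplified, illustrative tiers; real classification needs legal review.
RISK_TIERS = {
    "emotion_recognition_at_work": "prohibited",   # Art. 5-style practice
    "cv_screening": "high_risk",                   # employment context
    "credit_scoring": "high_risk",                 # essential services
    "customer_chatbot": "transparency",            # disclose the AI interaction
    "internal_summarizer": "gpai_baseline",
}

def classify_use_case(use_case: str) -> str:
    """Classify by intended use, never by the model's name or size."""
    # Anything not yet assessed is blocked until a risk assessment runs.
    return RISK_TIERS.get(use_case, "unclassified_blocked")
```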

4) Conduct systematic model evaluations. Red teaming, misuse scenarios, jailbreak tests, and evaluation of reliability, bias and privacy leakage. Do this cyclically, not as a one-off.
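
A minimal sketch of what "cyclically" can mean in engineering terms: a scenario-replay check that runs on a schedule or in CI. Here run_model(), the scenario file format and the refusal marker are assumptions for illustration:

```python
import json

def run_model(prompt: str) -> str:
    raise NotImplementedError("call your model endpoint here")

def misuse_refusal_rate(scenario_path: str, marker: str = "[REFUSED]") -> float:
    """Replay misuse/jailbreak prompts and measure how often the model
    behaves as expected (refusing when it should, answering when it may)."""
    with open(scenario_path) as f:
        scenarios = json.load(f)  # e.g. [{"prompt": "...", "should_refuse": true}]
    correct = sum(
        (marker in run_model(case["prompt"])) == case["should_refuse"]
        for case in scenarios
    )
    return correct / len(scenarios)

# Schedule this and fail the pipeline below a threshold, for example:
# assert misuse_refusal_rate("misuse_scenarios.json") >= 0.98
```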

5) Build safety layers. Access control, sandboxing, logging, rate limits, monitoring, circuit breakers, escalation to human oversight.
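
As a concrete sketch of one such layer, a circuit breaker around model calls: repeated failures open the circuit, and while it is open, requests are refused and escalated to human oversight. The thresholds and names are illustrative assumptions:

```python
import time

class CircuitBreaker:
    """Stops calling the model after repeated failures; escalates instead."""

    def __init__(self, max_failures: int = 5, cooldown_s: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown_s:
                # Circuit is open: refuse and hand off to human oversight.
                raise RuntimeError("circuit open: escalate to a human operator")
            self.opened_at = None  # cooldown over: try again (half-open)
        try:
            result = fn(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
```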

6) Organize governance as a management system. Use ISO/IEC 42001 for AI management systems and the NIST AI RMF as a risk framework. (ISO 42001, NIST AI RMF)

7) Arrange transparency and human contact. Inform users that they are interacting with AI and provide a human point of contact when there is an impact on rights.

A Concrete Example: An "AGI Assistant" in an Organization

Suppose you build an internal assistant that can explain policy, write drafts, prepare decision memos and analyze data. Initially that seems low-risk. But as soon as the same assistant is connected to HR workflows (selection, assessment) or to finance workflows (credit decisions, fraud detection), the application can shift toward high-risk.

That's why it's smart to build in "use-case gates" from day one. The assistant can be broad, but access to high-risk processes requires the following (a minimal gate sketch follows this list):

  • A separate risk assessment
  • Additional tests
  • Stricter monitoring
  • Explicit decision responsibility with people
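
A minimal sketch of such a gate, assuming a simple registry of approved risk assessments; HIGH_RISK_WORKFLOWS and the workflow names are hypothetical examples:

```python
HIGH_RISK_WORKFLOWS = {"hr_candidate_screening", "credit_decisioning"}

class RiskAssessmentRegistry:
    """Tracks which workflows have a current, approved risk assessment."""

    def __init__(self):
        self._approved = set()

    def approve(self, workflow: str) -> None:
        self._approved.add(workflow)

    def is_allowed(self, workflow: str) -> bool:
        # Low-risk workflows pass by default; high-risk ones need sign-off.
        if workflow in HIGH_RISK_WORKFLOWS:
            return workflow in self._approved
        return True

registry = RiskAssessmentRegistry()
assert registry.is_allowed("summarize_policy")            # broad use stays open
assert not registry.is_allowed("hr_candidate_screening")  # blocked until approved
```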

Practical lesson: The EU AI Act doesn't regulate AGI as a label, but does cover the building blocks that form an AGI-like system. If you're already working with very capable models today, organize your governance so you can scale up in strictness per use-case.


Sources


European Commission: AI Act - Regulatory Framework (2024)
European Commission: The General-Purpose AI Code of Practice (2025)
Government.nl: AI Act Guide (September 2025)
European Commission: General-Purpose AI Models - Q&A (2025)
OECD: OECD AI Principles (2024)


🎯 More on Responsible AI: Check out the Responsible AI Implementation Guide for practical frameworks and best practices.