Responsible AI Platform

Article 6: Classification rules for high-risk AI systems

Chapter III: High-Risk AI Systems
Applies from: 2 August 2026

📅 Implementation Timeline

1 Aug 2024
AI Act entered into force
2 Feb 2025
Prohibited practices apply (context for classification)
2 Feb 2026
EC guidelines on classification (MISSED)
2 Aug 2026
High-risk obligations apply — Art. 6(2) (Annex III) systems
2 Aug 2027
Art. 6(1) systems (safety components) — full application

Official text


Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.


📘 Official Guidance

⚠️ Deadline missed

Guidelines on classification of high-risk AI systems

The European Commission has missed the 2 February 2026 deadline for publishing these guidelines. The guidelines should clarify when an AI system is classified as high-risk under Article 6, including concrete examples per application area and a standard template for post-market monitoring.

Key points

  • ⚠️ 2 February 2026 deadline MISSED by European Commission
  • Should contain comprehensive list of use cases per Annex III category
  • Should clarify when the Art. 6(3) exception applies
  • Standard template for post-market monitoring plan to be included
  • Organisations should prepare based on the legal text itself pending the guidelines

🏛️ Governance — Who does what?

🏭 Provider

  • Classifies the AI system as high-risk or not
  • Performs conformity assessment (Art. 43)
  • Registers in EU database (Art. 49)
  • Can claim Art. 6(3) exception with documentation

🏢 Deployer

  • Verifies provider's classification
  • Uses AI system within intended purpose
  • Deviating use: may be treated as a provider, requiring reclassification

🏛️ Supervisory Authority

  • Market surveillance on correct classification
  • Can challenge Art. 6(3) exception
  • Access to EU database

🇪🇺 European Commission

  • Publishes guidelines on classification (⚠️ deadline missed)
  • Can update Annex III via delegated acts
  • Establishes standard templates

🎯 What does this mean for you?

🏭 Provider
Determine whether your AI system falls under Annex I (safety component in regulated product) or Annex III (standalone high-risk application). When in doubt, use our AI Decision Tree. Classification determines which obligations apply.
🏢 Deployer
Request the conformity declaration and risk classification from your provider. As a deployer, you are responsible for correct use within the intended purpose. Deviating use may reclassify you as a provider.
🏪 SME / Startup
Many SMEs use AI as deployers, not providers. Your key action: inventory which AI you use and ask suppliers for their classification. Use our AI Readiness Score to assess your situation.
🏛️ Public Sector
Government organisations using AI for benefits, permits, or enforcement almost always fall under high-risk (Annex III, point 5). Start inventorying now via the Algorithm Register.

⚖️ Overlap with other legislation

GDPR · Art. 35 (DPIA)
Complementary

High-risk classification under the AI Act often also requires a DPIA under the GDPR. The risk assessment for high-risk AI systems overlaps with the DPIA obligation, but they are not identical — the AI Act focuses on AI-specific risks, the GDPR on privacy risks.

NIS2 Directive · Art. 21 (Cybersecurity)
Complementary

High-risk AI systems must meet cybersecurity requirements (Art. 15 AI Act). If the AI system is part of an essential or important entity under NIS2, additional security obligations apply. Measures can be combined.

Product Liability Directive (EU) 2024/2853 · Art. 6 (Defective product)
Reinforcing

AI systems placed on the market as products also fall under product liability. The new PLD (2024/2853) explicitly names software as a product. A high-risk AI system that does not comply with the AI Act may be considered a 'defective product'. The previously proposed AI Liability Directive has been withdrawn — the PLD is now the primary route for AI damage claims.

Machinery Regulation (EU) 2023/1230 · Art. 6 (High-risk machinery)
Integrated

AI systems that are safety components of machinery (Annex I, section A) automatically fall under high-risk (Art. 6(1)). The conformity assessment of the machinery and the AI system must be aligned.

Digital Omnibus (EU) 2025 · Amendments to Art. 6, 49, deadlines
Modifying

The Digital Omnibus (19 November 2025) proposes significant amendments to the AI Act. For Art. 6 specifically: the deadline for high-risk systems is delayed by 16 months (to December 2027), the registration requirement under Art. 6(4) is removed, and SME exemptions are expanded. This is a simplification proposal, not an expansion of obligations. Status: proposal, not yet adopted.

Sectoral legislation (Annex I) · Various
Integrated

Art. 6(1) explicitly refers to Annex I — 21 pieces of EU harmonisation legislation. If an AI system is a safety component of a product covered by this legislation AND requires third-party conformity assessment, it is automatically high-risk.

⚖️ Related Enforcement

  • CNIL fines Amazon €32 million for AI employee monitoring (CNIL · Dec 2023)
  • Hungarian bank fined for AI-based customer profiling (NAIH · Aug 2021)

Frequently asked questions

How do I determine if my AI system is high-risk?
Article 6 defines two categories of high-risk AI: (1) AI systems used as safety components in products under EU harmonisation legislation (Annex I), and (2) AI systems in specific application areas such as biometrics, critical infrastructure, education, employment and law enforcement (Annex III).
What is the difference between Annex I and Annex III high-risk AI?
Annex I concerns AI in regulated products (medical devices, machinery, toys). Annex III concerns standalone AI applications in sensitive domains such as credit scoring, recruitment and law enforcement.
When do the rules for high-risk AI apply?
The classification rules and obligations for high-risk AI systems apply from 2 August 2026 for Annex III systems. For Art. 6(1) systems (safety components of Annex I products), the transition period runs until 2 August 2027.
Do SMEs also need to comply with Article 6 of the AI Act?
Article 6 of the AI Act does not provide a general exemption for SMEs. However, the AI Act includes supportive measures and potentially lighter obligations for small and medium-sized enterprises, depending on their role in the AI value chain.
How does Article 6 of the AI Act relate to the GDPR?
Article 6 of the AI Act complements the GDPR. While the GDPR protects personal data, the AI Act focuses on the safety and trustworthiness of AI systems. Organisations must comply with both regulations when their AI system processes personal data.
What are the deadlines for Article 6 of the AI Act?
The AI Act follows a phased implementation. Prohibited AI practices apply from February 2025, obligations for high-risk AI systems from August 2026, and other provisions take effect gradually. The specific deadline for Article 6 depends on the category of the obligation.
Does Article 6 of the AI Act also apply to AI systems I purchase?
Yes, Article 6 of the AI Act may also be relevant when you purchase AI systems. As a deployer, you have your own obligations under the AI Act, regardless of whether you developed the system yourself or purchased it from a provider.
What is the difference between provider and deployer under Article 6 of the AI Act?
Under Article 6 of the AI Act, the provider is the entity that develops or places the AI system on the market, while the deployer is the entity that uses the system under its own authority. Both roles carry different obligations.
Can my AI system still be not high-risk even if it's listed in Annex III?
Yes, Article 6(3) contains exceptions. An Annex III AI system is not high-risk if it performs a narrow procedural task, improves the result of a previously completed human activity, detects patterns without replacing human assessment, or performs only a preparatory task.
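The two-step logic described in the answers above can be sketched as a decision function: Art. 6(1) for Annex I safety components, Annex III high-risk by default, and the Art. 6(3) exceptions. This is an illustrative sketch, not a legal test — the field names are hypothetical, and the profiling rule reflects Art. 6(3)'s final subparagraph, under which Annex III systems that profile natural persons remain high-risk regardless of the exceptions.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical fields for illustration; not an official schema.
    annex_i_safety_component: bool       # safety component of an Annex I product
    third_party_assessment: bool         # Annex I product requires third-party conformity assessment
    annex_iii_area: bool                 # falls in an Annex III application area
    profiles_natural_persons: bool       # performs profiling of natural persons
    art_6_3_exception: bool              # one of the four Art. 6(3) grounds applies

def is_high_risk(s: AISystem) -> bool:
    # Art. 6(1): safety component under Annex I legislation AND subject
    # to third-party conformity assessment -> high-risk.
    if s.annex_i_safety_component and s.third_party_assessment:
        return True
    # Art. 6(2): Annex III systems are high-risk by default...
    if s.annex_iii_area:
        # ...and profiling of natural persons is always high-risk
        # (Art. 6(3), last subparagraph).
        if s.profiles_natural_persons:
            return True
        # Art. 6(3): exception for narrow procedural tasks, improving a
        # completed human activity, pattern detection, or preparatory tasks.
        return not s.art_6_3_exception
    return False
```

For example, an Annex III recruitment tool that only performs a narrow preparatory task and does not profile people would come out not high-risk, while the same tool with profiling would remain high-risk.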
Do I need to document why my AI system is not high-risk?
Yes, if you conclude your Annex III AI system is not high-risk based on Article 6(3), you must document this before placing the system on the market and make this documentation available to authorities. You must also register the system in the EU database with an explanation.
What are the practical consequences of a high-risk classification?
A high-risk classification means you must comply with extensive requirements: establish a risk management system (Art. 9), ensure data governance (Art. 10), prepare technical documentation (Annex IV), implement logging (Art. 12), provide transparency (Art. 13), arrange human oversight (Art. 14), and undergo a conformity assessment.
Will the European Commission publish a list of examples of high-risk and not high-risk AI?
Article 6(5) required the Commission to publish guidelines by 2 February 2026 with practical examples of AI systems that are and are not classified as high-risk. The Commission missed this deadline, so the guidelines are still pending; once published, they will help with interpreting the classification rules.
How much does compliance with the high-risk AI requirements cost?
According to CEPS estimates, setting up a quality management system for one high-risk AI product costs between €193,000 and €330,000, with approximately €71,400 per year in maintenance costs. Exact costs vary significantly depending on complexity, existing processes and sector.
