Article 14: Human oversight
EU Official
⏳ Applies from 2 August 2026
Chapter III: High-Risk AI Systems
Official text
Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.
✅ Compliance Checklist
- ☐ Human oversight possible during use
- ☐ User can understand and interpret output
- ☐ User can disregard or correct decisions
- ☐ Emergency stop procedure available
- ☐ Supervisors trained in system use
⚖️ Related Enforcement
No enforcement actions for this article yet. Follow developments via the Enforcement Tracker.
Frequently asked questions
What does human oversight of AI entail?
Article 14 requires high-risk AI systems to be designed to enable effective oversight by natural persons. This includes the ability to understand, disregard or correct the output.
Do SMEs also need to comply with Article 14 of the AI Act?
Article 14 of the AI Act does not provide a general exemption for SMEs. However, the AI Act includes supportive measures and potentially lighter obligations for small and medium-sized enterprises, depending on their role in the AI value chain.
How does Article 14 of the AI Act relate to the GDPR?
Article 14 of the AI Act complements the GDPR. While the GDPR protects personal data, the AI Act focuses on the safety and trustworthiness of AI systems. Organisations must comply with both regulations when their AI system processes personal data.
What are the deadlines for Article 14 of the AI Act?
The AI Act follows a phased implementation: prohibited AI practices apply from 2 February 2025, obligations for high-risk AI systems from 2 August 2026, and other provisions take effect gradually. Because Article 14 concerns high-risk AI systems, its requirements apply from 2 August 2026.
Does Article 14 of the AI Act also apply to AI systems I purchase?
Yes, Article 14 of the AI Act may also be relevant when you purchase AI systems. As a deployer, you have your own obligations under the AI Act, regardless of whether you developed the system yourself or purchased it from a provider.
What is the difference between provider and deployer under Article 14 of the AI Act?
Under Article 14 of the AI Act, the provider is the entity that develops or places the AI system on the market, while the deployer is the entity that uses the system under its own authority. Both roles carry different obligations.
What documentation does Article 14 of the AI Act require?
Article 14 of the AI Act requires that relevant documentation is maintained as part of the compliance process. This may include technical documentation, instructions for use, logs or declarations of conformity, depending on the classification of the AI system.
How do I document compliance with Article 14 of the AI Act?
You document compliance with Article 14 of the AI Act by establishing a risk management system, maintaining technical documentation, and conducting internal audits. Keep all relevant documents for the period prescribed by the AI Act.
What is the difference between human-in-the-loop and human-on-the-loop?
In human-in-the-loop, a human actively participates in every decision of the AI system. In human-on-the-loop, a human oversees the system and can intervene when needed, but doesn't need to approve every individual decision. Article 14 requires the type of oversight to be proportional to the risks and degree of autonomy of the system.
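The contrast can be sketched in code. The following is a hypothetical illustration only (the `Decision` class, the reviewer callback and the 0.8 alert threshold are invented for this sketch, not taken from the Act): in-the-loop routes every decision through a human, while on-the-loop lets the system act autonomously and pulls a human in only when a risk signal crosses a threshold.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    ai_score: float          # hypothetical model output, a risk score in [0, 1]
    approved: bool = False

def human_in_the_loop(decision: Decision, reviewer_approves) -> Decision:
    """Every single decision waits for explicit human approval."""
    decision.approved = reviewer_approves(decision)
    return decision

def human_on_the_loop(decision: Decision, reviewer_approves,
                      alert_threshold: float = 0.8) -> Decision:
    """Decisions proceed automatically; a human is alerted and can
    intervene only when the risk score crosses the threshold."""
    if decision.ai_score >= alert_threshold:
        decision.approved = reviewer_approves(decision)  # human intervenes
    else:
        decision.approved = True  # system acts autonomously
    return decision

# Same (hypothetical) review policy, two oversight models.
reviewer = lambda d: d.ai_score < 0.9
d1 = human_in_the_loop(Decision("loan-001", 0.3), reviewer)   # human reviewed
d2 = human_on_the_loop(Decision("loan-002", 0.3), reviewer)   # auto-approved
```

Which of the two Article 14 expects depends, as noted above, on the risks and the degree of autonomy of the system; neither pattern is mandated by the text itself.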
Does the person overseeing AI need special training?
Yes, Article 14(4) requires persons assigned to human oversight to be able to correctly interpret the output, understand the system's operation, and be aware of automation bias. Article 26(2) requires deployers to assign competent, trained and authorised personnel.
What is automation bias and how must I deal with it under the AI Act?
Automation bias is the tendency of humans to uncritically trust AI output and override their own judgment. Article 14(4)(b) explicitly requires overseers to be aware of this tendency. In practice this means training, encouraging critical thinking and system design that prevents blind reliance on AI output.
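One design pattern sometimes used against automation bias is a "blind first pass": the reviewer records an independent judgment before the AI output is revealed, so disagreement surfaces instead of being anchored away. A minimal sketch under that assumption (function and field names are hypothetical, not from the Act):

```python
def review_with_blind_first_pass(case, human_judgment, ai_prediction):
    """Collect the human's judgment *before* showing the AI output;
    flag disagreements for a documented second look."""
    own_view = human_judgment(case)   # recorded first, before the AI view is shown
    ai_view = ai_prediction(case)
    return {
        "case": case,
        "human": own_view,
        "ai": ai_view,
        # Disagreement triggers escalation instead of silent deference.
        "escalate": own_view != ai_view,
    }

result = review_with_blind_first_pass(
    "claim-42",
    human_judgment=lambda c: "reject",   # hypothetical reviewer
    ai_prediction=lambda c: "approve",   # hypothetical model
)
```

Here the human and the model disagree, so `escalate` is set rather than the reviewer simply adopting the AI's answer.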
Do I need to build a kill switch into my high-risk AI system?
Article 14(4)(e) requires the overseer to be able to stop the AI system or disregard the output. This doesn't need to be a physical kill switch, but there must be an effective mechanism allowing a human to interrupt the system, override the output or take the system out of operation.
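In software terms, that "effective mechanism" can be as simple as a shutdown flag the processing loop checks before each decision. The sketch below is one possible shape, not a prescribed implementation (the `OverridableSystem` class is invented for illustration):

```python
import threading

class OverridableSystem:
    """A processing loop a human overseer can halt at any time via stop()."""

    def __init__(self):
        self._stop = threading.Event()   # thread-safe, so any UI/thread can set it
        self.processed = []

    def stop(self):
        """The human 'intervene' action: request an immediate halt."""
        self._stop.set()

    def run(self, inputs):
        for item in inputs:
            if self._stop.is_set():      # checked before every decision
                break                    # take the system out of operation
            self.processed.append(item)

system = OverridableSystem()
system.stop()            # overseer halts the system
system.run([1, 2, 3])    # the loop exits before processing anything
```

The same idea scales up to out-of-band controls (a feature flag, a circuit breaker in front of the model API), as long as a human can actually trigger it and the system honours it promptly.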