
Article 14: Human oversight

Status: Upcoming (applies from 2 August 2026)
Chapter III: High-Risk AI Systems

Article 14 is the core human oversight requirement for high-risk AI systems under Regulation (EU) 2024/1689. It makes oversight operational: people must understand limitations, recognise automation bias, override output and, where needed, interrupt the system.

Official text


Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.

Download AI Act (PDF)

🎯 What does this mean for you?

Provider
Design the high-risk AI system so oversight works during use: clear human-machine interfaces, monitoring for anomalies, explanations around output, and technical options to disregard, override or stop the system (see the sketch after this role overview). Explain in the instructions for use which oversight measures the deployer must implement.
Deployer
Assign competent, trained and authorised people who can monitor operation, interpret output and intervene in time. Turn this into operating instructions: when can AI output be followed, when must it be challenged, and how is intervention logged?
🏪 SME / Startup
If you buy a high-risk AI system, ask explicitly for the Article 14 controls: limitations, override and stop options, logs and training requirements. Start with a short AI inventory and record who owns human oversight.
Public Sector
For public sector use, human oversight is not a paper checkpoint. Combine Article 14 with Articles 26 and 27: competent staff, clear decision routes, FRIA/IAMA evidence, logging and complaint or appeal routes for citizens.
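
The disregard, override, stop and logging options above are outcome requirements; the AI Act does not prescribe any particular implementation. Purely as an illustration, the Python sketch below shows one way a review gate could work: each AI output goes to a named reviewer who records an accept, disregard, override or stop decision, and every decision is written to a log the deployer can later point to. All names here (AiOutput, human_review, the case fields) are hypothetical and not taken from the Act or from any specific product.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")


@dataclass
class AiOutput:
    """One recommendation from the (hypothetical) high-risk AI system."""
    case_id: str
    recommendation: str
    confidence: float  # model-reported score between 0.0 and 1.0


def human_review(output: AiOutput, reviewer: str, decision: str, reason: str) -> str:
    """Record a reviewer's decision on a single AI output.

    decision is one of: "accept", "disregard", "override", "stop".
    Every decision is logged so the deployer can show that oversight happened.
    """
    allowed = {"accept", "disregard", "override", "stop"}
    if decision not in allowed:
        raise ValueError(f"decision must be one of {allowed}")

    log.info(
        "case=%s reviewer=%s decision=%s reason=%s confidence=%.2f time=%s",
        output.case_id, reviewer, decision, reason, output.confidence,
        datetime.now(timezone.utc).isoformat(),
    )
    if decision == "stop":
        # Hook for a stop procedure: halt further automated processing
        # for this workflow until a human re-enables it.
        raise RuntimeError(f"Processing stopped by {reviewer} on case {output.case_id}")
    return decision


# Example: a reviewer overrides a low-confidence recommendation.
human_review(
    AiOutput(case_id="2026-0142", recommendation="reject application", confidence=0.54),
    reviewer="j.devries",
    decision="override",
    reason="recommendation conflicts with documents supplied by the applicant",
)
```

In a real deployment such a log could feed the same record-keeping used for Article 12 logging, and the stop branch would trigger whatever interruption mechanism the provider has built in.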

Compliance checklist

  • Oversight measures are proportionate to risk, autonomy and context of use
  • Human overseers understand the system's capabilities and limitations
  • Automation bias is covered in training and operating instructions
  • Output can be disregarded, overridden or reversed
  • Stop or interruption procedure is available and tested
  • Deployer has assigned competent, trained and authorised overseers

Related enforcement

No enforcement actions for this article yet. Follow developments via the Enforcement Tracker.

Frequently asked questions

What does Article 14 of Regulation (EU) 2024/1689 require?
Article 14 requires high-risk AI systems to be designed and developed so natural persons can effectively oversee them during use. Oversight must aim to prevent or minimise risks to health, safety and fundamental rights.
Who is responsible for human oversight: provider or deployer?
The provider must identify the oversight measures before the system is placed on the market or put into service and, where technically feasible, build them into the system or specify measures for the deployer to implement. The deployer must use the system according to the instructions and, under Article 26(2), assign competent, trained and authorised people.
What must human overseers be able to do?
Human overseers should be able, as appropriate and proportionate, to understand the system's capabilities and limitations, monitor operation, recognise automation bias, interpret output, decide not to use output, override or reverse it, and intervene or interrupt through a stop procedure or similar measure.
Is a kill switch mandatory under Article 14?
Article 14(4)(e) requires, where appropriate and proportionate, that the overseer can interrupt the system through a stop button or similar procedure. The AI Act does not require one identical physical kill switch for every system.
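
The Act leaves the form of that stop mechanism open. Purely as an illustrative sketch (none of these names come from the regulation or any specific product), a software "stop button" can be as simple as a shared flag that the overseer sets and the system checks before each unit of work:

```python
import threading
import time

# Shared stop signal: set by the human overseer, checked by the processing loop.
stop_event = threading.Event()


def operator_stop() -> None:
    """Called from the oversight interface when the overseer presses stop."""
    stop_event.set()


def process_queue(items) -> None:
    for item in items:
        if stop_event.is_set():
            print("Interrupted by human overseer; remaining items left unprocessed.")
            break
        time.sleep(0.1)  # placeholder for the actual model inference on one item
        print(f"processed {item}")


# Example: an overseer stops the run shortly after it starts.
threading.Timer(0.25, operator_stop).start()
process_queue(["case-1", "case-2", "case-3", "case-4"])
```

Checking before each item rather than only at start-up is what lets the overseer interrupt a run that is already in progress.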
What is automation bias?
Automation bias is over-reliance on AI output, especially when a system provides information or recommendations for decisions by natural persons. Article 14(4)(b) explicitly requires human overseers to remain aware of this risk.
How does Article 14 connect to Articles 13 and 26?
Article 13 requires instructions for use to include information on human oversight measures and output interpretation. Article 26 requires deployers to use the system according to those instructions and assign competent, trained and authorised people for oversight.
When does Article 14 of the AI Act apply?
Article 14 is part of the requirements for high-risk AI systems. For most high-risk obligations, the practical application date is 2 August 2026, with specific transitional rules depending on the system and category.
Does Article 14 also apply to AI systems I purchase?
Yes. If a purchased system is high-risk and used in the EU context, Article 14 remains relevant. Ask the supplier for the system's oversight measures, limitations, override and stop options, logs and training requirements.
Are there extra rules for biometric identification?
Yes. For certain biometric identification systems under Annex III point 1(a), an action or decision based on the identification generally requires separate verification by at least two competent, trained and authorised natural persons, unless Union or national law in the areas of law enforcement, migration, border control or asylum considers that requirement disproportionate.