Definition
The requirement that high-risk AI systems be designed and developed so that they can be effectively overseen by natural persons while in use. The aim is to prevent or minimise the risk of the system causing harm when operating autonomously. Oversight includes the ability of the human overseer to intervene in the system's operation and to override or halt it.