Article 9 EU AI Act: risk management system guide

Most organizations think risk management starts when something goes wrong. Under the EU AI Act, Article 9 starts much earlier. It requires providers of high-risk AI systems to build a continuous risk management system before the system goes live, during development, and throughout its lifecycle.

That makes Article 9 one of the core structural provisions of the AI Act.

If Article 10 is about data governance, Article 13 about transparency, and Article 14 about human oversight, Article 9 is the layer that forces all of those measures into one disciplined process. It is not a policy memo. It is not a one-off risk register. It is an iterative compliance system that has to keep functioning as the AI system evolves.

What Article 9 actually requires

Article 9(1) states that a risk management system shall be established, implemented, documented, and maintained for high-risk AI systems.

That wording matters. The obligation is not just to think about risk. It is to set up a system that exists in practice, is documented, and remains active over time. In other words, this is not a pre-launch checklist. It is an operating model.

Article 9(2) then defines the risk management system as a continuous iterative process running throughout the entire lifecycle of the high-risk AI system and subject to regular review and updating.

The AI Act is deliberately pushing providers away from a static compliance mindset. A high-risk AI system can change because the model changes, the data changes, the deployment context changes, or user behavior changes. A risk management system that is only built at launch will become obsolete quickly.

The four core steps of Article 9(2)

Article 9(2) breaks the process into four steps.

1. Identify and analyze known and reasonably foreseeable risks

Under Article 9(2)(a), providers must identify and analyze both known risks and reasonably foreseeable risks that the system can pose to health, safety, or fundamental rights when used for its intended purpose.

This means providers cannot limit themselves to obvious technical failure. They must also consider discrimination, unfair exclusion, privacy harm, loss of access to essential services, or downstream effects on human autonomy. In practice, this is where the category of high-risk AI systems becomes concrete rather than abstract.

A recruitment screening system, for example, does not only pose a risk of incorrect sorting. It may also create discrimination risks if proxies for gender, disability, age, or migration background influence the ranking logic. A medical diagnostic system does not only pose safety risk if it misses tumors. It may also pose fundamental rights risk if performance is materially worse for underrepresented patient populations.

2. Estimate and evaluate risks, including reasonably foreseeable misuse

Article 9(2)(b) requires estimation and evaluation of risks not only under intended use, but also under conditions of reasonably foreseeable misuse.

This is one of the most strategically important phrases in the provision. It means providers cannot defend themselves by saying, "That is not how the system was meant to be used," if that form of misuse was predictable.

If a provider knows that customers are likely to reuse a scoring model in contexts beyond those validated in development, or to over-rely on outputs in ways that exceed the system's design assumptions, that risk needs to be part of the assessment. The AI Act expects providers to anticipate how systems are actually used, not how they are described in marketing decks.

3. Evaluate risks identified through post-market monitoring

Article 9(2)(c) links the risk management system to the post-market monitoring system in Article 72. That means risk management does not stop at launch. Once the system is in use, data from real-world operation must feed back into the risk evaluation.

This is a crucial bridge between pre-market compliance and operational governance. If a provider receives signals that certain outputs are unstable, that certain populations experience worse outcomes, or that users are systematically misunderstanding the system, those findings must feed back into the Article 9 process.
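To illustrate how that feedback loop can be made operational, here is a minimal Python sketch. It assumes the provider has stored pre-market baselines and periodically receives operational metrics; the metric names, baselines, and tolerance are hypothetical, not values taken from the Act.

```python
# Hypothetical post-market feedback hook: if an operational metric drifts
# materially from its pre-market baseline, the associated hazard is flagged
# so that the Article 9 risk evaluation is reopened.
PRE_MARKET_BASELINE = {"recall": 0.91, "score_gap_between_groups": 0.05}
TOLERANCE = 0.03  # illustrative margin before a review is triggered


def signals_needing_review(observed: dict[str, float]) -> list[str]:
    """Return the monitored signals that drifted beyond the tolerance."""
    flagged = []
    for name, baseline in PRE_MARKET_BASELINE.items():
        if abs(observed.get(name, baseline) - baseline) > TOLERANCE:
            flagged.append(name)
    return flagged


# Example: monthly figures from the post-market monitoring system
observed = {"recall": 0.85, "score_gap_between_groups": 0.06}
for name in signals_needing_review(observed):
    print(f"re-open the risk evaluation for: {name}")
```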

4. Adopt targeted risk management measures

Article 9(2)(d) requires appropriate and targeted risk management measures for the risks identified.

The wording "targeted" matters. Generic statements like "human review will be used" or "the model has been tested" are not enough. The measures must correspond to the concrete hazards identified in the risk analysis.

If the risk is automation bias, the measure may include interface changes, mandatory review procedures, and deployer training. If the risk is bias against underrepresented groups, the measure may include dataset redesign, additional testing, threshold adjustment, or limitations on intended use.

Article 9 is narrower than many providers think

Article 9(3) draws an important boundary. The risks covered by this article are only those that may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or through the provision of adequate technical information.

That means providers are not responsible for every imaginable downstream risk in the world. They are responsible for the risks that can be influenced through system design, development choices, documentation, and technical communication.

This is an important distinction because it keeps Article 9 operational. The AI Act does not ask providers to solve every governance problem created by every deployer. It asks them to control the risks that they can realistically influence through their own product decisions.

Residual risk must be acceptable

Article 9(5) introduces one of the most demanding concepts in the whole chapter: the relevant residual risk associated with each hazard, and the overall residual risk of the high-risk AI system, must be judged acceptable.

That sounds abstract until you unpack it. Residual risk is what remains after mitigation. The AI Act does not assume risk can always be eliminated entirely. But it does require a judgment that what remains is acceptable in light of the system's intended use.

This requires providers to move beyond a binary compliance mindset. The question is not just: did we add safeguards? The question is: after those safeguards, what risk remains, for whom, in what situations, and is that residual risk acceptable?

Article 9(5) also creates an order of operations:

  • first eliminate or reduce risks as far as technically feasible through design and development,
  • then implement mitigation and control measures for risks that cannot be eliminated,
  • then provide the information required under Article 13 and, where appropriate, training to deployers.

This hierarchy matters because documentation is not a substitute for better design. Providers cannot leave avoidable harms in place and attempt to solve them only through warnings in the manual.

Risk management is not separate from the rest of Chapter III

Article 9(4) requires providers to consider the combined effects of the requirements in the same section of the AI Act.

That is a subtle but very important instruction. It means risk management must integrate the other technical and governance obligations in Chapter III instead of treating them as separate compliance silos.

For example:

  • the data governance work required by Article 10 determines which data-quality and bias risks are plausible in the first place,
  • the transparency and instructions for use required by Article 13 are one of the channels through which residual risks are communicated to deployers,
  • the human oversight measures required by Article 14 are often the targeted mitigation for automation bias and over-reliance.

In practice, Article 9 is the coordination provision. It is the article that forces providers to connect these threads into one compliance architecture.

Testing is part of risk management, not a separate afterthought

Articles 9(6), 9(7), and 9(8) make testing a formal part of the risk management system.

High-risk AI systems must be tested to identify the most appropriate and targeted risk management measures. Testing must also ensure that the system performs consistently for its intended purpose and complies with the requirements of the section.

The AI Act goes further: testing should take place, as appropriate, at any time throughout development, and in any event before the system is placed on the market or put into service. Testing must be carried out against pre-defined metrics and probabilistic thresholds appropriate to the intended purpose.

This matters because many providers still test for performance in narrow technical terms only, such as accuracy or recall, while ignoring fairness, robustness, interpretability, or context drift. Article 9 expects testing to support risk management, not just product validation.
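To make "pre-defined metrics and probabilistic thresholds" concrete, here is a minimal Python sketch of a pre-market release gate. The evaluation harness, metric names, and threshold values are illustrative assumptions, not figures prescribed by the Act.

```python
# Minimal sketch of testing against pre-defined metrics and thresholds:
# the system may only be released if every check meets its threshold.
from dataclasses import dataclass


@dataclass
class MetricCheck:
    name: str                      # e.g. "recall", "robustness_under_noise"
    value: float                   # value measured on the test set
    threshold: float               # pre-defined acceptance threshold
    higher_is_better: bool = True

    def passes(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold


def release_gate(checks: list[MetricCheck]) -> bool:
    """Pass only if every pre-defined metric meets its threshold."""
    failures = [c for c in checks if not c.passes()]
    for c in failures:
        print(f"FAIL {c.name}: {c.value:.3f} vs threshold {c.threshold:.3f}")
    return not failures


if __name__ == "__main__":
    checks = [
        MetricCheck("recall", 0.93, 0.90),
        MetricCheck("robustness_under_noise", 0.86, 0.85),
        MetricCheck("calibration_error", 0.04, 0.05, higher_is_better=False),
    ]
    print("release gate passed" if release_gate(checks) else "release gate blocked")
```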

Where appropriate, testing may also include real-world conditions under Article 60. For certain high-risk systems, lab testing alone will not reveal the actual risks that emerge in operational settings.

Vulnerable groups are explicitly part of the analysis

Article 9(9) requires providers to consider whether, in view of the intended purpose, the high-risk AI system is likely to have an adverse impact on persons under 18 and, where appropriate, other vulnerable groups.

This is not a decorative recital-style reference. It is an operational instruction.

A system used in education, healthcare, welfare, recruitment, insurance, or public services may affect people whose vulnerability is directly relevant to the risk profile. If a provider ignores that dimension, the risk management system is incomplete. In many of these contexts, the use case will also fall within Annex III or trigger a downstream fundamental rights impact assessment (FRIA).

This point often matters in public sector and HR use cases. A system may appear statistically adequate overall while still having materially worse outcomes for younger users, low-literacy groups, people with disabilities, or people in precarious socio-economic positions. Article 9 requires providers to at least ask that question and incorporate it where relevant.
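A minimal sketch of that subgroup check, assuming predictions and group labels are available for a hold-out set; the numbers below are a deliberately exaggerated toy example, not real data.

```python
# Toy illustration: the overall error rate looks tolerable while one group
# is served much worse. Arrays are fabricated purely for the example.
import numpy as np


def error_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(y_true != y_pred))


y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "b", "a", "a", "a", "b", "b", "a"])

print(f"overall error: {error_rate(y_true, y_pred):.2f}")  # 0.30
for g in np.unique(group):
    mask = group == g
    print(f"group {g} error: {error_rate(y_true[mask], y_pred[mask]):.2f}")
```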

Sector law can be integrated, but not ignored

Article 9(10) recognizes that some providers of high-risk AI systems are already subject to internal risk management rules under other Union law. In those cases, the Article 9 requirements may be part of or combined with those existing procedures.

This is especially relevant in financial services, medical technology, and certain regulated infrastructure sectors. But the word "combined" should not be read as "automatically satisfied." Providers still need to show that their existing risk management processes actually cover the Article 9 elements.

If a bank or medical device company wants to rely on existing governance structures, it should be able to map them clearly against Article 9(1)-(10). If it cannot do that, integration is not enough.

What organizations get wrong most often

The first common mistake is treating Article 9 like a risk register. A risk register can be one artifact within the system, but it is not the system itself. Article 9 requires a continuous process, evidence of review, links to testing, and a logic for how residual risk is judged acceptable.

The second common mistake is limiting the analysis to technical failure. The article explicitly covers risks to health, safety, and fundamental rights. That means organizations must assess harms that are legal, social, and institutional, not just engineering defects.

The third common mistake is assuming deployer training or manual warnings can compensate for weak design. Article 9(5) sets a clear hierarchy: first reduce risk through design, then through controls, then through information and training. Documentation is the last line, not the first.

The fourth common mistake is launching the system and only then attempting to retrofit risk management. That reverses the structure of the provision. Article 9 expects the system to be designed with risk management in mind from the beginning.

A practical Article 9 framework for providers

If you are building or placing a high-risk AI system on the market, a practical Article 9 implementation framework usually includes five building blocks.

1. Hazard mapping. Define what kinds of harm the system may create for health, safety, and fundamental rights. Do this not only for ideal intended use but also for foreseeable misuse.

2. Evidence design. Define what metrics, thresholds, and tests will tell you whether those risks are being controlled. This includes performance testing, robustness testing, subgroup testing, and operational scenario testing.

3. Mitigation planning. Decide which risks can be reduced through design, which require operational controls, and which require deployer information or training.

4. Residual risk judgment. Document how you determine whether the remaining risk is acceptable, by whom that judgment is made, and what triggers re-evaluation.

5. Lifecycle review. Connect the system to post-market monitoring, incident handling, logging, and periodic review so that the Article 9 process stays live after launch.
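One way to keep those five building blocks connected is to maintain a single record per identified hazard. The sketch below is one possible shape for such a record, not an official AI Act schema; the field names and the example content are illustrative assumptions.

```python
# Illustrative hazard record tying the five building blocks together.
from dataclasses import dataclass, field


@dataclass
class HazardRecord:
    hazard: str                 # 1. hazard mapping: the harm being tracked
    affected_groups: list[str]  #    who is exposed, incl. vulnerable groups
    evidence: list[str]         # 2. evidence design: metrics, thresholds, tests
    mitigations: list[str]      # 3. mitigation planning: design, controls, information
    residual_risk: str          # 4. residual risk judgment and its rationale
    judged_by: str              #    who is accountable for that judgment
    review_triggers: list[str] = field(default_factory=list)  # 5. lifecycle review


example = HazardRecord(
    hazard="Materially lower ranking quality for one applicant subgroup",
    affected_groups=["applicants from underrepresented backgrounds"],
    evidence=["subgroup recall >= 0.88 on hold-out data", "quarterly drift report"],
    mitigations=["rebalanced training data", "mandatory human review of borderline scores"],
    residual_risk="acceptable given mandatory human review",
    judged_by="provider risk board",
    review_triggers=["post-market monitoring shows subgroup recall below 0.85"],
)
print(example.hazard, "->", example.residual_risk)
```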

If that framework sounds familiar, it should. It resembles mature product governance in other regulated sectors. The AI Act is not inventing governance from scratch. It is forcing AI providers to act with the same discipline that is already expected in other high-impact domains.
