Summary: Article 26 EU AI Act contains 12 paragraphs with obligations for deployers of high-risk AI systems. The essentials: use the system according to the instructions for use, assign competent persons for human oversight, monitor operation, report serious incidents, conduct a FRIA where required, inform affected persons, retain logs for at least 6 months, and register usage if you are a public body. Penalties for non-compliance: up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
Why Article 26 matters more than most organizations realize
Most organizations working with AI are not developers of AI systems. They procure AI, integrate it into their processes and use it to make or support decisions. Think of a bank using an AI credit scoring model, a hospital deploying diagnostic AI, or an HR department using AI-powered recruitment software.
In the terminology of the EU AI Act, these are deployers. And for them, Article 26 is the single most relevant article in the entire regulation. It sets out the core obligations that apply once you put a high-risk AI system into service.
Yet Article 26 is often overlooked in practice. Organizations focus on provider (developer) obligations, assuming the vendor handles everything. That is a dangerous assumption. The AI Act explicitly imposes independent obligations on deployers, and these obligations cannot be contractually transferred.
Who is a deployer?
Article 3(4) of the AI Act defines a deployer as:
"a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity."
The crucial element is "under its authority." The moment your organization uses an AI system for professional purposes, you are a deployer, regardless of whether you built the system yourself.
Deployer vs. provider: the difference
The role distribution in the AI Act value chain is clear:
- Provider: develops the AI system or has it developed and places it on the market under its own name
- Deployer: uses the AI system within its own organization
- Importer: brings an AI system from a third country onto the EU market
- Distributor: makes the system available on the market without modifying it
In practice, the same organization can be both provider and deployer. For most businesses, however, the situation is simple: you procure an AI system and are therefore a deployer. Not sure which role applies to you? Use our provider-vs-deployer tool to find out.
Practical examples
Bank with credit scoring AI: The bank procures an AI system that generates creditworthiness assessments. The software vendor is the provider. The bank is the deployer, as it uses the system under its own authority to make credit decisions.
Hospital with diagnostic AI: The hospital uses AI software that assists radiologists in evaluating scans. The software manufacturer is the provider. The hospital is the deployer.
HR department with recruitment AI: The company uses an AI tool that automatically screens job applications. The tool vendor is the provider. The company deploying the tool for recruitment is the deployer.
The 12 paragraphs of Article 26
Article 26 is structured across twelve paragraphs. Below, I walk through each one based on the official text.
Paragraph 1: Use according to instructions for use
Deployers shall take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. This sounds straightforward, but it presupposes that you have actually received, read and understood those instructions.
Ask your provider explicitly for the instructions for use. Without that documentation, compliance with this obligation is impossible.
Paragraph 2: Human oversight by competent persons
Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. This is not something you do on the side. It requires designating staff who understand the system, can interpret its output and have the authority to intervene.
This is closely related to the AI literacy obligations under Article 4 of the AI Act.
Paragraph 3: Without prejudice to other obligations
The obligations set out in paragraphs 1 and 2 are without prejudice to other deployer obligations under Union or national law and to the deployer's freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
In practice this means: the AI Act does not replace your existing obligations under, for example, the GDPR, sectoral legislation or labour law. It comes on top of them.
Paragraph 4: Input data relevance
To the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. This applies particularly when you feed data into the system yourself or configure which data the system processes.
Paragraph 5: Monitoring, risk signaling and incident reporting
Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72.
Where deployers have reason to consider that use may result in the AI system presenting a risk within the meaning of Article 79(1), they shall without undue delay inform the provider or distributor and the relevant market surveillance authority, and shall suspend use.
Where deployers have identified a serious incident, they shall immediately inform first the provider, then the importer or distributor and the relevant market surveillance authorities.
Financial institutions: For deployers that are financial institutions subject to internal governance requirements under EU financial services law, the monitoring obligation is deemed fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms under the relevant financial service law.
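Paragraph 5 effectively defines two escalation paths: suspend use and notify when the use may present an Article 79(1) risk, and a fixed notification sequence when a serious incident is identified. As a rough illustration of how a deployer might encode this routing internally, here is a minimal Python sketch; the class, field and function names are hypothetical and not prescribed by the AI Act.

```python
from dataclasses import dataclass

@dataclass
class MonitoringFinding:
    """Illustrative record of a monitoring finding (all fields are hypothetical)."""
    system_id: str
    description: str
    article_79_risk: bool = False   # use may present a risk within the meaning of Art. 79(1)
    serious_incident: bool = False  # a serious incident has been identified

def required_actions(finding: MonitoringFinding) -> list[str]:
    """Map a finding to the actions paragraph 5 calls for."""
    actions: list[str] = []
    if finding.article_79_risk:
        actions += [
            f"suspend use of {finding.system_id}",
            "inform the provider or distributor without undue delay",
            "inform the relevant market surveillance authority",
        ]
    if finding.serious_incident:
        actions += [
            "immediately inform the provider",
            "then inform the importer or distributor",
            "then inform the relevant market surveillance authorities",
        ]
    return actions

# Example: a serious incident identified during operation of a credit scoring system.
print(required_actions(MonitoringFinding("credit-scoring", "harmful erroneous decision", serious_incident=True)))
```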
Paragraph 6: Log retention, minimum 6 months
Deployers shall keep the logs automatically generated by the high-risk AI system, to the extent such logs are under their control, for a period appropriate to the intended purpose, of at least six months, unless provided otherwise in applicable Union or national law, in particular EU law on the protection of personal data.
Financial institutions shall maintain the logs as part of the documentation kept pursuant to the relevant EU financial service law.
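To make the retention floor concrete, the sketch below guards a hypothetical log purge job so that nothing under the deployer's control is deleted before six months have passed. The constant and function names are illustrative assumptions; a longer period may be appropriate for the system's intended purpose, and other Union or national law may require something different.

```python
from datetime import datetime, timedelta, timezone

# Six months as a conservative lower bound (Article 26(6) sets "at least six months").
MIN_RETENTION = timedelta(days=183)

def may_delete(log_timestamp: datetime, retention: timedelta = MIN_RETENTION) -> bool:
    """Return True only when a timezone-aware log entry is older than the retention period."""
    if retention < MIN_RETENTION:
        raise ValueError("retention period must be at least six months")
    return datetime.now(timezone.utc) - log_timestamp >= retention

# Example: a log entry written yesterday must not be purged yet.
print(may_delete(datetime.now(timezone.utc) - timedelta(days=1)))  # False
```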
Paragraph 7: Informing workers before workplace deployment
Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers' representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.
Paragraph 8: Registration for public sector deployers
Deployers that are public authorities, or EU institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor.
Read more about the registration obligation in our article on registering AI systems.
Paragraph 9: DPIA obligations
Where applicable, deployers shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment (DPIA) under Article 35 GDPR or Article 27 of Directive (EU) 2016/680.
In practice, this means you will often need to conduct the DPIA and the FRIA in parallel.
Paragraph 10: Authorization for post-remote biometric identification
In the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request authorization, ex ante or without undue delay and no later than 48 hours, from a judicial authority or an administrative authority whose decision is binding.
Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. If authorization is rejected, use must be stopped immediately and personal data deleted.
Absolute prohibition: Such high-risk AI systems shall in no case be used for law enforcement purposes in an untargeted way, without any link to a criminal offence or criminal proceeding. Each use must be documented in the relevant police file.
Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems.
Paragraph 11: Informing affected persons
Deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform those natural persons that they are subject to the use of the high-risk AI system. This is without prejudice to Article 50, which imposes specific transparency obligations for certain AI systems.
Transparency is the key word here. People have the right to know that AI is being used in decisions that affect them.
Paragraph 12: Cooperation with authorities
Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation. This includes providing information and access when requested.
FRIA: the additional obligation under Article 27
In addition to the Article 26 obligations, Article 27 requires certain deployers to conduct a fundamental rights impact assessment (FRIA) before deployment: deployers that are bodies governed by public law or private entities providing public services, and deployers of high-risk AI systems listed in Annex III, points 5(b) and 5(c). The latter category covers AI systems used for creditworthiness assessment and credit scoring, and for risk assessment and pricing in life and health insurance.
The FRIA is not optional guidance. It is a legal obligation that must be carried out in a structured manner. Results must be communicated to the relevant market surveillance authority. A detailed comparison between DPIA and FRIA can be found in our DPIA vs FRIA article.
Use the FRIA generator on the Responsible AI Platform to walk through the process step by step.
Practical implementation checklist
Here is a concrete approach for organizations to comply with Article 26:
- Inventory all AI systems in use. Map out which AI systems your organization deploys, including systems procured as "tools" or "software" that are in fact AI systems. Use the AI Act Decision Tree to determine which ones fall under the AI Act. A minimal register sketch follows this checklist.
- Determine your role per system. Are you the provider, deployer, or both? Use the provider-vs-deployer tool for quick classification.
- Request instructions for use from providers. Ensure you have the instructions for use for every high-risk AI system. Without this documentation, compliance is impossible.
- Assign human oversight roles. Designate one or more persons per system who are responsible for oversight. Ensure they are trained and have the authority to stop the system or override its output.
- Set up monitoring and incident reporting. Establish processes for ongoing monitoring of the system's operation. Define what constitutes a "serious incident," who reports it, to whom, and within what timeframe.
- Conduct a FRIA where required. For bodies governed by public law, private entities providing public services, and deployers of systems in Annex III, points 5(b) and 5(c), a FRIA is mandatory before deployment. Use the FRIA generator.
- Conduct a DPIA for personal data processing. Where the AI system processes personal data and there is a high risk, a DPIA is mandatory under the GDPR.
- Inform affected persons. Ensure that individuals subject to AI-assisted decisions are informed. Update your privacy notices and information provisions accordingly.
- Retain logs for at least 6 months. Set up the technical infrastructure to store and maintain access to automatically generated logs.
- Inform employee representatives. If you deploy AI in the workplace, involve works councils or other employee representatives.
- Register as a public body. Public sector deployers must register their use in the EU database.
- Cooperate with authorities. Ensure your organization is prepared and able to cooperate with market surveillance authorities when they take action.
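For the inventory step at the top of this checklist, here is a minimal sketch of what an internal register entry could track per system. The class and its fields are purely illustrative assumptions, not a format required by the AI Act; the point is to record role, oversight, documentation and open actions in one place.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRegisterEntry:
    """One row in an internal AI system inventory (illustrative fields only)."""
    name: str
    vendor: str
    role: str                        # "deployer", "provider", or both
    high_risk: bool
    instructions_for_use: bool       # documentation received from the provider
    oversight_owner: str | None = None
    log_retention_days: int = 183
    fria_required: bool = False
    fria_completed: bool = False
    affected_persons_informed: bool = False
    open_actions: list[str] = field(default_factory=list)

# Example entry for a procured credit scoring system still missing documentation.
entry = AISystemRegisterEntry(
    name="credit-scoring-model",
    vendor="ExampleVendor",
    role="deployer",
    high_risk=True,
    instructions_for_use=False,
    fria_required=True,
    open_actions=["request instructions for use", "assign oversight owner"],
)
```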
Common mistakes and penalties
The three biggest mistakes
"The provider handles it." This is the most common misconception. The AI Act imposes independent obligations on deployers. The fact that your provider is compliant does not exempt you from your own obligations.
No human oversight established. Many organizations use AI systems without anyone specifically designated for oversight. Paragraph 2 requires competent persons with sufficient authority. A checkbox on a form is not enough.
No incident reporting process. If an AI system causes a serious incident and you have no reporting process in place, you are in violation of paragraph 5. The same applies if you fail to monitor the system's operation where the law requires you to.
Penalties
Fines for non-compliance with deployer obligations under the AI Act can reach up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher. Proportionality provisions for SMEs and startups apply under Article 99, but the obligation itself does not disappear.
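As a quick illustration of how the "whichever is higher" ceiling works, here is a small Python sketch. It only computes the upper bound of Article 99(4); it deliberately ignores the Article 99(6) proportionality rule for SMEs and start-ups, under which the lower of the two amounts applies.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 99(4) fine for deployer obligations:
    EUR 15 million or 3% of total worldwide annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * worldwide_annual_turnover_eur)

# Example: with EUR 800 million turnover, 3% is EUR 24 million,
# so the ceiling is EUR 24 million rather than EUR 15 million.
print(max_fine_eur(800_000_000))  # 24000000.0
```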
Use the fine calculator on the Responsible AI Platform to get an indication of the potential fines for your organization.
Conclusion
Article 26 is the compass for every organization deploying high-risk AI systems. The article makes clear that compliance is not just a matter for providers, but that deployers bear their own independent responsibility. From human oversight to log retention, from incident reporting to transparency toward affected persons: the obligations are concrete and enforceable.
Start today by inventorying your AI systems and setting up your compliance processes. The earlier you begin, the smoother the transition when enforcement ramps up.
Want to stay up to date with the latest AI Act developments? Subscribe to the AI Act Weekly.
Frequently Asked Questions about Article 26
What does Article 26 of the EU AI Act require? Article 26 contains 12 paragraphs with obligations for deployers of high-risk AI systems. These cover use according to instructions, human oversight, input data, monitoring, incident reporting, log retention, employee information, registration for public deployers, DPIA obligations, authorization for biometric identification, transparency toward affected persons, and cooperation with authorities.
Who is a deployer under the AI Act? A deployer is any natural or legal person, public authority, or other body that uses an AI system under its own authority for professional purposes. Personal, non-professional use is excluded. Most organizations that procure and deploy AI are deployers.
What is the difference between a provider and a deployer? A provider develops the AI system or has it developed and places it on the market. A deployer uses the system within its own organization. Both have independent obligations under the AI Act.
What are the fines for deployers who violate Article 26? Deployers that fail to comply with Article 26 obligations face fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher. Proportionality provisions apply for SMEs and startups, but the obligation itself remains.
Do deployers need to conduct a FRIA? It depends on the deployer and the type of high-risk AI system. Under Article 27, deployers that are bodies governed by public law or private entities providing public services, and deployers of systems listed in Annex III, points 5(b) and 5(c), such as creditworthiness assessment and risk assessment and pricing for life and health insurance, must conduct a fundamental rights impact assessment (FRIA) before deployment.