Article 26 EU AI Act: Deployer Obligations Guide

10 min read

Most organizations implementing AI are not building it. They are buying it, licensing it, and deploying it to make decisions about customers, employees, or patients. Under the EU AI Act, these organizations are called deployers and Article 26 is written specifically for them.

The common misconception is that compliance sits with the vendor. It does not. The AI Act explicitly places independent, non-transferable obligations on the deployer. Your contract with the provider does not shield you. Your organization's name is on the line.

Article 26 contains nine substantive obligations (with a tenth specific to law enforcement biometric use). Each one requires concrete action before you go live with a high-risk AI system. Here is what they mean in practice.

Who counts as a deployer?

Article 3(4) of the EU AI Act defines a deployer as any natural or legal person, public authority, agency or other body that uses an AI system under its own responsibility, except where the AI system is used in the course of a personal non-professional activity.

The key phrase is "under its own responsibility." The moment your organization deploys an AI system for professional purposes, you are a deployer, regardless of whether you built the system.

If your bank uses an AI credit scoring model built by a fintech vendor, the fintech is the provider. Your bank is the deployer. If your hospital uses diagnostic AI from a medical software company, the software company is the provider. Your hospital is the deployer. If your HR team uses an applicant screening tool, the tool vendor is the provider. Your organization is the deployer.

Not sure which role applies? The risk assessment tool can help you map your position in the AI Act value chain.

The 9 obligations of Article 26

1. Follow the instructions for use (Article 26(1))

Deployers must use high-risk AI systems in accordance with the instructions for use provided by the provider. This sounds straightforward. In practice, it requires that you have actually received, read, and operationalized that documentation.

The instructions for use come from Article 13, which requires providers to design their high-risk AI systems to be sufficiently transparent for deployers to interpret the system's output and use it appropriately. If your vendor has not provided this documentation, you cannot comply with Article 26(1). Request it explicitly before go-live.

In an HR context: if the recruitment AI is only approved for screening CVs in certain job categories, deploying it outside those categories is a violation. The instructions define the boundaries of lawful use.

2. Assign human oversight to competent persons (Article 26(2))

You must assign the responsibility for human oversight to natural persons who have the necessary competence, training, and authority to carry out the oversight role and to intervene when required.

This is not a formality. It means identifying specific individuals, ensuring they understand how the AI system works, training them on its limitations and failure modes, and giving them the actual authority to override or suspend the system's output.

A financial institution using an AI fraud detection system needs oversight staff who understand what the model flags, what it misses, and under what circumstances a human judgment should override the automated recommendation. Assigning this to a junior analyst with no real authority does not satisfy Article 26(2).

This obligation connects directly to the AI literacy requirements under Article 4 of the EU AI Act, which require deployers to ensure their staff have sufficient AI literacy.
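To make the assignment concrete, it helps to record it somewhere auditable. Below is a minimal sketch in Python of what such a record could capture; the OversightAssignment structure and its field names are illustrative assumptions, not a format prescribed by the AI Act.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical internal record for an Article 26(2) oversight assignment.
# Field names are illustrative, not prescribed by the AI Act.
@dataclass
class OversightAssignment:
    person: str                    # a named natural person, not a team alias
    system: str                    # the high-risk AI system covered
    trained_on_limitations: bool   # completed training on failure modes
    can_override_output: bool      # real authority to override or suspend
    training_date: date | None = None
    escalation_contact: str = ""   # who this person escalates to

    def is_effective(self) -> bool:
        """An assignment only counts if competence and authority are both in place."""
        return self.trained_on_limitations and self.can_override_output

assignment = OversightAssignment(
    person="J. Janssen",
    system="fraud-detection-v3",
    trained_on_limitations=True,
    can_override_output=True,
    training_date=date(2025, 3, 1),
    escalation_contact="model-risk-committee",
)
assert assignment.is_effective()
```

A record like this also doubles as evidence: if an authority asks who held oversight authority on a given date, you can answer from the registry instead of reconstructing it after the fact.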

3. Other obligations remain in force (Article 26(3))

The obligations under paragraphs 1 and 2 are without prejudice to other obligations of the deployer under Union or national law, and without prejudice to the deployer's freedom to organize its own resources and activities to implement the human oversight measures indicated by the provider.

In practice: Article 26 does not replace your GDPR obligations, sector-specific regulations, or employment law. It adds to them. A public sector deployer using AI in social benefits decisions must comply with both the AI Act and public law procedural requirements. A healthcare deployer must comply with both the AI Act and medical device regulation.

4. Ensure data quality when you control input data (Article 26(4))

Where the deployer exercises control over the input data, they must ensure that the data is relevant and sufficiently representative for the intended purpose.

This obligation only applies when you, as deployer, are responsible for the data fed into the AI system. If you are using a software-as-a-service product where the provider manages the data pipeline, this may not apply. But if you are feeding your own datasets into a licensed model, you bear responsibility for data quality.

This is particularly relevant in public sector applications. A municipality using an AI system to allocate social housing must ensure the training and input data actually represents the population it is meant to serve. Biased or unrepresentative data produces discriminatory outcomes, and under Article 26(4) the deployer is accountable.
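As a rough illustration, a representativeness check can be as simple as comparing group shares in your input data against a reference population. The sketch below assumes hypothetical district groupings and a five percentage point tolerance; both are placeholders for whatever your deployment context actually requires.

```python
# Sketch of an Article 26(4) representativeness check: compare each group's
# share in the input data against a reference population share. The groups,
# reference shares, and tolerance below are illustrative assumptions.

def representativeness_gaps(input_counts: dict[str, int],
                            population_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose input-data share deviates from the population
    share by more than `tolerance` (positive = over-represented)."""
    total = sum(input_counts.values())
    gaps = {}
    for group, reference in population_shares.items():
        observed = input_counts.get(group, 0) / total
        if abs(observed - reference) > tolerance:
            gaps[group] = observed - reference
    return gaps

# Example: applicant data fed into a licensed housing-allocation model.
counts = {"district_A": 700, "district_B": 250, "district_C": 50}
shares = {"district_A": 0.50, "district_B": 0.30, "district_C": 0.20}
print(representativeness_gaps(counts, shares))
# Flags district_A (over-represented) and district_C (under-represented).
```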

5. Monitor, report, and suspend when necessary (Article 26(5))

Deployers must monitor the operation of the high-risk AI system on the basis of the instructions for use. Where deployers have reason to consider that use of the system in accordance with the instructions may result in a risk within the meaning of Article 79(1), they must without undue delay inform the provider or distributor and the relevant market surveillance authority, and suspend use of that system. Where a serious incident is identified, the deployer must immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities.

This is an active, ongoing obligation, not a one-time check. It requires a monitoring framework, clear escalation paths, and someone responsible for deciding when to pull the plug.

The reporting obligation for serious incidents mirrors the logic of GDPR data breach notification: when something goes wrong, the authority needs to know quickly. Establish your incident response procedure before you go live.
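Here is a minimal sketch of such an escalation path, assuming three severity levels modelled loosely on the article's wording. The level names and the suspend-pending-investigation step for serious incidents are assumptions to adapt, not requirements copied from the regulation.

```python
from enum import Enum

# Sketch of an Article 26(5) escalation path. Severity levels and the
# suspend-pending-investigation step are assumptions layered on the
# article's wording; adapt them to your own incident-response procedure.

class Severity(Enum):
    ROUTINE = "routine"
    ARTICLE_79_RISK = "risk"       # compliant use may still present a risk
    SERIOUS_INCIDENT = "serious"

def escalation_steps(severity: Severity, system: str) -> list[str]:
    """Return the ordered actions for a monitoring finding."""
    if severity is Severity.ROUTINE:
        return [f"log the finding for {system}"]
    if severity is Severity.ARTICLE_79_RISK:
        return [
            f"inform the provider or distributor of {system} without undue delay",
            "inform the relevant market surveillance authority",
            f"suspend use of {system}",
        ]
    # Serious incident: the provider is informed first.
    return [
        f"immediately inform the provider of {system}",
        "inform the importer or distributor",
        "inform the relevant market surveillance authorities",
        f"suspend use of {system} pending investigation",  # assumed internal policy
    ]

for step in escalation_steps(Severity.SERIOUS_INCIDENT, "credit-scoring-v2"):
    print(step)
```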

6. Keep logs for at least 6 months (Article 26(6))

Deployers must retain the logs automatically generated by the high-risk AI system for at least six months, unless Union or national law requires a different retention period or the logs contain personal data with a shorter retention requirement under GDPR.

Logs are your audit trail. They demonstrate that the system was used correctly, that oversight was exercised, and that the system's outputs can be reviewed after the fact. Without logs, you cannot prove compliance and you cannot investigate incidents.

Check whether your vendor's system generates logs by default, where those logs are stored, whether you have access to them, and whether six months of retention is configured.
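A hedged sketch of what that verification might look like, assuming retention is expressed in days and six months is approximated as 183 days. How you read the configured retention and the oldest log timestamp depends entirely on your vendor's system.

```python
from datetime import datetime, timedelta, timezone

# Sketch of an Article 26(6) retention check. Six months is approximated
# here as 183 days; the inputs are stand-ins for whatever your vendor's
# logging system actually exposes.

MIN_RETENTION = timedelta(days=183)

def retention_ok(configured_days: int, oldest_accessible_log: datetime) -> bool:
    configured = timedelta(days=configured_days)
    if configured < MIN_RETENTION:
        return False  # retention window configured too short
    # Sanity check: the oldest log we can actually read falls inside the window.
    return datetime.now(timezone.utc) - oldest_accessible_log <= configured

now = datetime.now(timezone.utc)
print(retention_ok(90, now - timedelta(days=60)))    # False: 90 days < 6 months
print(retention_ok(365, now - timedelta(days=200)))  # True
```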

7. Inform workers and their representatives (Article 26(7))

Before deploying a high-risk AI system that will be used at the workplace, deployers who are employers must inform the workers' representatives and the affected workers. This obligation applies specifically to deployers in the role of employer, not to every deployer category.

This obligation is frequently overlooked. Organizations focus on technical compliance and forget the human dimension. The AI Act explicitly requires worker notification, not just as a procedural courtesy but as a compliance requirement.

In a manufacturing context: before deploying AI-powered quality control systems that monitor worker performance, employees and works councils must be informed. In an office environment: before deploying AI tools that assess employee productivity, the same notification requirement applies.

The scope of "high-risk AI systems at the workplace" is broader than many expect. Systems that affect employment decisions, task allocation, or performance monitoring can fall within the high-risk category. Check Annex III of the EU AI Act for the full list.

Works councils, trade unions, and employee representatives should be involved early. Do not treat this as a post-decision formality.

8. Public authorities must register before use (Article 26(8))

Where deployers are public authorities, agencies, or bodies, they must comply with the registration obligations under Article 49. The registration itself takes place in the EU database referred to in Article 71. If a system intended for use has not been registered in that database, the public authority must not use it and shall inform the provider or distributor.

This is a hard stop for government deployers. No registration, no use. The EU AI Act database is the transparency mechanism for government use of high-risk AI. Procurement of AI systems in the public sector should include registration as a pre-go-live step.

9. Use Article 13 information for DPIA compliance (Article 26(9))

Deployers must use the information provided under Article 13 (transparency and information provision) to comply with their DPIA obligations under GDPR Article 35 or the Law Enforcement Directive Article 27.

This is the direct bridge between the AI Act and GDPR. Article 13 requires providers to supply detailed technical documentation about their AI system, including its capabilities, limitations, and risks. Deployers must use this documentation when conducting Data Protection Impact Assessments.

If you are deploying a high-risk AI system that processes personal data, a DPIA is almost certainly required under GDPR. The AI Act now explicitly instructs you to use the provider's Article 13 documentation as input for that DPIA. This means your DPIA cannot be completed without adequate provider documentation.

The FRIA generator on this site helps you structure the fundamental rights impact assessment that connects Article 26(9) obligations to your specific deployment context.

Two obligations that organizations get wrong most often

Paragraph 7 (worker notification) is the most commonly skipped obligation. It applies specifically to deployers who act as employers, that is, whenever your organization deploys AI for use in its own workplace. Organizations often treat it as an internal communication task rather than a legal requirement. The consequence of skipping it is not just a compliance gap but potential liability if AI-driven workplace decisions are later challenged by employees or union representatives.

Paragraph 9 (the DPIA bridge) is misunderstood because organizations treat AI Act compliance and GDPR compliance as separate tracks. They are not. The AI Act explicitly wires them together. Your DPIA and your AI Act compliance documentation should reference each other. If they do not, one of them is incomplete.

Article 26 Deployer Compliance Checklist

Before putting a high-risk AI system into use, verify all nine points (a minimal sketch encoding them as a go-live gate follows the list):

  • Received and reviewed the instructions for use from the provider (paragraph 1)
  • Named specific individuals responsible for human oversight with documented competence and authority (paragraph 2)
  • Mapped AI Act obligations against existing GDPR, sector, and employment law obligations (paragraph 3)
  • Assessed and documented input data quality and representativeness, if you control input data (paragraph 4)
  • Established monitoring procedure, incident escalation path, and criteria for suspension (paragraph 5)
  • Confirmed log generation and 6-month retention is configured and accessible (paragraph 6)
  • If acting as employer: notified workers' representatives and affected employees, with records of notification (paragraph 7)
  • Registered the system in the EU database if you are a public authority (paragraph 8)
  • Completed or updated DPIA using provider's Article 13 documentation (paragraph 9)
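As referenced above, here is a minimal sketch that encodes the nine points as a go-live gate. The paragraph keys and one-line descriptions are shorthand for this checklist, not an official compliance format.

```python
# Sketch of the nine Article 26 checks as a go-live gate. The keys and
# descriptions are shorthand, not an official compliance format.

CHECKS = {
    "26(1)": "instructions for use received, read, and operationalized",
    "26(2)": "competent oversight persons named, with authority to intervene",
    "26(3)": "mapped against GDPR, sector, and employment law obligations",
    "26(4)": "input data quality and representativeness documented (if controlled)",
    "26(5)": "monitoring, escalation, and suspension procedure in place",
    "26(6)": "log generation and six-month retention configured and accessible",
    "26(7)": "workers and representatives notified (if acting as employer)",
    "26(8)": "registered in the EU database (if a public authority)",
    "26(9)": "DPIA completed using the provider's Article 13 documentation",
}

def go_live_gate(status: dict[str, bool]) -> list[str]:
    """Return the open items; an empty list means the gate passes."""
    return [f"{ref}: {desc}" for ref, desc in CHECKS.items()
            if not status.get(ref, False)]

status = {ref: True for ref in CHECKS}
status["26(7)"] = False  # the most commonly skipped obligation
for item in go_live_gate(status):
    print("OPEN:", item)
```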

Where to go from here

Article 26 compliance requires more than a checklist. It requires documented processes, trained staff, and integration with your existing governance frameworks.

The FRIA generator helps you build a Fundamental Rights Impact Assessment that covers your Article 26(9) obligations and provides structured input for your DPIA.

For a full picture of how your organization's AI use maps against the EU AI Act risk categories, the risk assessment tool walks you through the classification logic.

The full legal text of Article 26 is available on this site if you need to verify the exact wording for your compliance documentation.

If you are deploying AI in HR, finance, or public services, the high-risk classification under Annex III almost certainly applies. Get the compliance framework in place before go-live, not after.
