Short answer: if you use an AI agent inside your organization, under your authority and for a professional purpose, you are likely already a deployer within the meaning of Article 3(4) of the AI Act. That does not automatically mean that every heavy obligation for high-risk AI immediately applies to you. But it does mean you cannot pretend that responsibility sits entirely with the vendor.
Most organizations are still asking the wrong question. They ask whether employees are allowed to use ChatGPT, Copilot or another AI agent. The better question is this: at what point are we actually using such a system under our own authority?
That difference may sound semantic, but it is not. The moment an organization uses an AI agent in recruitment, customer service, internal research, software development or decision support, the conversation shifts from experimentation to governance. And that is exactly where the EU AI Act enters the picture.
Why this question has suddenly become urgent
AI agents have moved from curiosity to everyday working tool in a remarkably short time. They are not only used by developers. Lawyers, HR teams, sales teams, compliance officers and support staff increasingly use them for summaries, analysis, communication, triage and automation.
That often happens without a major implementation plan. A team tests a tool, connects a mailbox, lets the agent search documents or draft responses, and before long there is a system in operation that has access to business information, supports real processes and generates output that people rely on. What starts as a pilot often turns into an actual workflow.
That is precisely why the role question matters. The EU AI Act does not only look at who builds a system. It also looks at who uses it.
The EU AI Act does not create a separate category for AI agents
The Regulation does not use the term "AI agent" as a standalone legal category. So the first question is not whether something is called an agent, but whether it qualifies as an AI system within the meaning of Article 3(1) AI Act. The European Commission published additional guidance on that point in 2025, which we discussed earlier in "What is an AI system? The European Commission gives an answer".
In plain language, if a system operates with a degree of autonomy and generates outputs such as recommendations, content, predictions or decisions based on input, it is already likely to fall within the scope of the AI Act. Many contemporary AI agents meet that profile without much difficulty.
So an agent is not legally interesting because it is called an agent, but because it is often an AI system performing concrete tasks inside an organization.
What is a deployer under the AI Act?
Article 3(4) AI Act defines a deployer as a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
That is a short definition with large consequences.
Its core has two elements:
- it concerns the use of an AI system
- that use takes place under your authority and not merely in a private context
So you do not need to be a provider, developer or model builder to have a legally relevant role under the AI Act. The moment your organization uses an AI system inside its own processes, you are no longer merely watching from the sidelines.
Provider and deployer are not the same thing
Confusion often arises because organizations assume the supplier handles everything. That is only partly true.
A provider develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark. A deployer then uses that system within the organization.
In practice, that means, for example:
- Microsoft, OpenAI or a specialized SaaS vendor may be the provider
- your organization may be the deployer once it uses the tool for recruitment, customer service, internal analysis or operational decision-making
That distinction is not cosmetic. For high-risk AI systems, Article 26 of the AI Act explicitly imposes obligations on deployers, such as use in line with the instructions for use, human oversight, monitoring, log retention and, in some cases, a fundamental rights impact assessment (FRIA) or data protection impact assessment (DPIA).
When are you likely to be a deployer of an AI agent?
There is no magical checkbox that suddenly flips the law on. But there are clear signals.
You use the agent in a real work process
Not as a purely casual demo, but for a task that is part of how your organization actually operates. Think of screening candidates, answering customer questions, analyzing files, reviewing contracts or generating code.
The agent operates under your organizational authority
The tool may run at an external provider, but you decide who uses it, for what purpose, with which data and inside which workflow. That is exactly the kind of use the deployer role is meant to capture.
People rely on the agent's output
The moment employees use recommendations, analyses or generated output in their work, the system gains real influence. Even if there is still a human in the loop, you have moved beyond casual exploration.
The agent touches personal data, rights or important decisions
The closer an agent comes to HR, finance, healthcare, public services or other sensitive contexts, the more relevant the deployer question becomes. Not because every agent automatically becomes high-risk, but because the potential impact grows.
When are you not, or not yet really, a deployer?
Nuance matters here as well.
An occasional private test by an employee at home, outside working time and without any organizational context, will in principle fall outside the deployer definition. The AI Act explicitly excludes personal, non-professional activity.
But organizations often make a mistake in the other direction. They treat a pilot or experiment as proof that no legal role exists yet. That is too simplistic. A pilot can still be professional use. If a team is testing in a real workflow, with real data and real operational impact, that is still use under organizational authority.
In other words, "we are only testing" is not a legal shield.
Not every deployer of an AI agent immediately falls under Article 26
This is an important distinction that is often missing from the debate.
You can be a deployer without every heavy obligation for high-risk AI already applying. Article 26 is specifically aimed at deployers of high-risk AI systems. So the deployer role comes first; whether the Article 26 obligations then follow depends on how the system is classified.
In practice, that means:
- if you use an AI agent for internal notes or low-risk support, you may well be a deployer, but not necessarily the deployer of a high-risk AI system
- if you use an agent in HR, creditworthiness, education, law enforcement or other Annex III contexts, the conversation becomes much more serious much faster
That is exactly why role determination matters. Without that step, you cannot sensibly determine which obligations follow next.
A practical rule of thumb
Do not just ask: "Is this tool smart?" Ask instead: what are we using it for, under whose authority, with which data, and what happens if the output is wrong? Those are usually the questions that determine whether you already need to think like a deployer.
Four recognizable examples
1. An HR agent that pre-screens applicants
An organization uses an agent that summarizes CVs, ranks candidates and flags the "best matches" for recruiters. The system may not make the final decision itself, but it clearly influences the selection process.
In such a case, the organization is very likely a deployer. And depending on the precise functionality and impact, it may quickly approach a high-risk use case under Annex III.
2. A customer service agent with access to case files
A support agent that handles questions, drafts emails, retrieves data and suggests responses is not automatically high-risk. But once that agent becomes a regular part of your customer process and employees rely on it, you are clearly using it under your authority.
At that point, the deployer question is no longer theoretical. You need to think about instructions, logging, oversight, privacy and error handling.
3. An internal research agent for legal or compliance work
Think of an agent that summarizes policy, searches regulations, compares contracts or drafts notes for lawyers and compliance officers. That may look like an internal helper, and often its risk profile is lower than in HR or credit scoring. But the same logic applies: the system is being used in a professional context, under organizational authority, for real work.
So this, too, is not simply "a handy tool". It is an AI system inside your governance perimeter.
4. A coding agent in software development
A coding agent that generates code, runs tests or proposes changes will usually not fall directly into the high-risk category. But the organization that uses that agent in its development process is still likely a deployer. The largest risks here often lie less in Annex III and more in security, quality assurance, intellectual property and software supply chain management.
Why this role determination matters
The deployer role matters for three reasons.
First: compliance. For high-risk AI systems, explicit obligations come into view, as set out in Article 26. You cannot contract those away to the vendor.
Second: governance. Even outside high-risk settings, someone inside the organization must own the use case, the guardrails, the risk assessment and the monitoring.
Third: accountability. If an AI agent makes mistakes, discriminates, produces inaccurate output or uses sensitive information in an unintended way, the question will not only be who built the tool. It will also be who used it, why, and with which safeguards.
That is the real significance of being a deployer.
Three things organizations should do now
1. Create an inventory of all AI agents in use
Not only formally approved tools, but also shadow AI. Which teams use which agents? For what exactly? With which data? And through which integrations?
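To make this concrete, here is a minimal, purely illustrative sketch in Python of what one inventory entry could capture. The field names (agent, team, purpose, data_categories, integrations and so on) are assumptions for the example, not a prescribed format; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class AgentInventoryEntry:
    """One AI agent use case, recorded as it is actually used in the organization."""
    agent: str                   # e.g. "Copilot", "CV screening agent"
    team: str                    # who uses it
    purpose: str                 # what it is used for in practice
    data_categories: list[str]   # which data it touches, including personal data
    integrations: list[str]      # mailboxes, drives, CRM, ticketing systems, ...
    output_relied_on: bool       # do employees act on its output?
    possible_annex_iii: bool     # does the use case resemble an Annex III area?

# Example entry for a hypothetical recruitment use case
inventory = [
    AgentInventoryEntry(
        agent="CV screening agent",
        team="HR",
        purpose="Summarize CVs and rank candidates for recruiters",
        data_categories=["CVs", "contact details", "work history"],
        integrations=["applicant tracking system", "shared mailbox"],
        output_relied_on=True,
        possible_annex_iii=True,  # employment is an Annex III area
    ),
]
```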
2. Determine your role for each use case
Are you purely a deployer? Also a provider? Only a user of a low-risk application? Or are you shifting into another role through customization, fine-tuning or own-brand deployment? That analysis should be done per use case, not at an abstract organizational level.
3. Put minimum governance in place before scaling
Assign ownership. Define allowed use cases. Arrange meaningful human oversight. Think through logging, access rights, privacy impact and escalation paths. Not because every agent is immediately prohibited or high-risk, but because casual use almost always turns into governance chaos.
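As a purely illustrative sketch, the same idea can be expressed as a simple pre-scaling check: a use case does not move from pilot to production until each of these minimum governance questions has an answer. The item names below are assumptions for the example, not terms taken from the AI Act.

```python
# Minimum governance items to answer before scaling an AI agent use case
REQUIRED_BEFORE_SCALING = [
    "owner",              # a named person accountable for the use case
    "allowed_use_cases",  # what the agent may and may not be used for
    "human_oversight",    # how output is reviewed, overridden or stopped
    "logging",            # what is logged and how long it is retained
    "access_rights",      # who may use the agent, and with which data
    "privacy_impact",     # DPIA or equivalent assessment where relevant
    "escalation_path",    # who to alert when the agent misbehaves
]

def ready_to_scale(governance_record: dict) -> bool:
    """Return True only if every minimum governance item has been filled in."""
    missing = [item for item in REQUIRED_BEFORE_SCALING if not governance_record.get(item)]
    if missing:
        print("Not ready to scale, missing: " + ", ".join(missing))
        return False
    return True
```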
The real mistake is not the tool, but the way organizations think about it
The biggest mistake organizations make today is treating AI agents as isolated productivity tools. As if it makes no legal or organizational difference whether an employee uses a text box or a system that analyzes information, retrieves data, generates recommendations and influences real business processes.
That difference does matter.
The AI Act does not require organizations to panic every time a new AI instrument appears. But it does expect them to know which role they occupy. And for many AI agents, that starts with a simple recognition: you are not just a user in the everyday sense, but a deployer in the legal sense.
Organizations that skip that step will later struggle with classification, governance and accountability. Organizations that take it now stand a far better chance of scaling AI responsibly.
For organizations already using AI agents, the main lesson is not that every agent is immediately high-risk. The main lesson is that you enter a legal role much earlier than many teams assume. From that moment on, governance is no longer a nice-to-have. It is basic hygiene.