On February 12, 2026, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) published a warning about the security risks of autonomous AI agents. The regulator specifically targets open source platforms that give users full access to their computer, email, files and online services. It marks one of the first times a European privacy authority has spoken this explicitly about this category of AI systems.
The timing is no coincidence. AI agents are growing explosively in popularity, among consumers and within organizations alike. But security standards are not keeping pace. The AP labels autonomous AI agents a "Trojan Horse," and that deserves serious attention.
What are the concrete risks?
The AP draws on findings from security researchers worldwide. The key risks:
Malicious plug-ins. Approximately one-fifth of available plug-ins for this type of platform contain malware targeting login credentials or cryptocurrency assets. The plug-in ecosystem of AI agents resembles the early days of browser extensions: little oversight, significant abuse potential.
Indirect prompt injection. This is the most underestimated risk. Hidden commands can be embedded in websites, emails or chat messages. When an AI agent processes that content, the system can be manipulated into executing an attacker's instructions rather than the user's. The consequences: account takeovers of linked services (Google, Apple ID, social media), reading emails and files, and stealing API keys.
Remote code execution. Security researchers have found critical vulnerabilities allowing attackers to remotely, without physical access, take full control of a system through the AI agent.
Misconfiguration. Running locally does not automatically mean running securely. Incorrect installation or configuration can inadvertently make personal data publicly accessible.
Why this is more than a privacy issue
The AP approaches this from a GDPR perspective, and rightly so. But the implications reach further.
Autonomous AI agents operate in a gray area. They make decisions, execute actions and process data, often without the user explicitly approving each step. That touches not just privacy, but also cybersecurity, intellectual property and business continuity.
For organizations using or considering AI agents: the question is not whether you may use them, but under what conditions. The AP correctly states that organizations and users remain responsible for GDPR compliance, regardless of whether they use open source or commercial tools.
The AI Act dimension
At the European level, the AP advocates for clarification that autonomous AI agents fall under the AI Act. That is an important signal.
Under the current AI Act text, the classification of AI agents is not always straightforward. An AI agent that acts autonomously and impacts natural persons may be considered high-risk, depending on the application domain. Consider an agent that autonomously sends emails, makes financial decisions or accesses personnel records.
Relevant AI Act provisions for AI agents:
- Article 6 and Annex III determine when an AI system is classified as high-risk. AI agents deployed for credit scoring, HR decisions or law enforcement quickly fall under this classification.
- Article 9 requires a risk management system for high-risk AI, including cybersecurity measures.
- Article 14 mandates human oversight for high-risk systems. For fully autonomous agents, this is inherently a point of attention.
- Article 15 requires accuracy, robustness and cybersecurity. Prompt injection vulnerabilities are a direct violation of this requirement.
- Article 27 obligates deployers of high-risk AI to conduct a Fundamental Rights Impact Assessment (FRIA).
The overlap with GDPR is evident. A Data Protection Impact Assessment (DPIA) under GDPR and a FRIA under the AI Act cover partially the same ground. Organizations would do well to conduct these in combination.
What should organizations do now?
The AP warning is not a reason for panic, but it is a reason for action. Concrete steps:
1. Inventory AI agent usage. Many organizations lack a complete picture of which AI agents are being used, by whom, and with what permissions. Shadow AI, where employees independently install tools, is a real risk. Start with an inventory.
2. Assess permissions. What access do these agents have? Email, files, APIs, databases? The principle of least privilege applies here too. An AI agent does not need access to everything to be useful.
3. Evaluate plug-ins and integrations. The AP specifically points to the risk of malicious plug-ins. Establish an approval process for plug-ins, similar to how you manage software installations.
4. Test for prompt injection. Have your security team specifically test for indirect prompt injection. This is a relatively new attack vector that is still missing from many standard security assessments.
5. Establish policy. Define in your AI policy whether and how AI agents may be deployed. What data may they process? What actions may they perform autonomously? Where is human approval required?
6. Conduct a DPIA. If an AI agent processes personal data, a DPIA under GDPR is likely mandatory. Combine this with an AI Act risk assessment if the system may qualify as high-risk.
The broader trend
The AP warning fits a pattern. Regulators worldwide are struggling with the speed at which AI agents are being adopted. Technology races ahead, regulation follows behind.
That does not mean organizations should wait for perfect regulation. The principles are clear: know what you deploy, limit the risks, document your decisions and maintain human oversight. Whether you take GDPR, the AI Act or your own risk framework as a starting point, the conclusion is the same.
AI agents are powerful tools. But power without control is a security risk. The AP puts it diplomatically. We put it practically: if you cannot explain which AI agents are running in your organization and what they do, you have a problem bigger than compliance.