AI Geletterdheid Hub

AP summer 2025 report: emotion recognition under the AI Act is 'dubious and risky'


An in-depth analysis of the critical findings from the Dutch supervisory authority

Crucial insights: the Dutch Data Protection Authority (AP) has published a devastating analysis of AI systems for emotion recognition in its AI & Algorithms Netherlands Report (RAN) summer 2025, with direct implications for compliance under the EU AI Act.

The unrelenting conclusion of the AP

The Dutch Data Protection Authority (AP) has dedicated an in-depth and critical chapter to AI systems for emotion recognition in its semi-annual AI & Algorithms Netherlands Report (RAN) for summer 2025. The supervisory authority's conclusion is unrelenting: the technology is built on "disputed assumptions" and its deployment is both "dubious" and "risky".

For organizations preparing for the AI Regulation, this analysis provides crucial insights. The report not only exposes the fundamental weaknesses of the technology but also directly links these to the risks of discrimination, privacy violations, and the restriction of human autonomy.

The legal framework: emotion recognition in the AI Regulation

Before diving into the AP's analysis, it is essential to clearly define the playing field of the AI Regulation. Emotion recognition based on biometrics is viewed with great suspicion by the European legislator.

Three legal regimes for emotion recognition

The AI Regulation applies a differentiated approach depending on the context:

  • Absolute prohibition: in the workplace and educational institutions
  • High-risk classification: in all other contexts with biometric categorization
  • GDPR connection: strict requirements for special category personal data

Absolute prohibition in certain contexts

The deployment of AI systems that infer emotions or mental states from biometric data is strictly prohibited in the workplace and educational institutions. The legislator recognizes the unequal power relationship here, which makes free and informed consent from employees and students illusory.

High-risk classification

Outside these prohibited contexts, every AI system that uses biometric data to categorize persons (biometric categorization) is in principle classified as high-risk under Annex III of the Regulation. This explicitly applies to emotion recognition systems deployed in, for example, public spaces, marketing, or healthcare.

This classification triggers a whole regime of obligations, including risk management, data quality, transparency, human oversight, and robustness.
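To make this classification logic concrete, here is a minimal decision sketch in Python. It is an illustration only, not legal advice: the context labels and the helper function are hypothetical, and a real assessment must follow the text of the Regulation and supervisory guidance.

```python
# Illustrative sketch only: a simplified version of the classification logic
# described above. The context labels and this helper are hypothetical; a real
# assessment must be based on the text of the AI Regulation itself.

PROHIBITED_CONTEXTS = {"workplace", "education"}  # contexts covered by the Art. 5(1)(f) prohibition

def classify_emotion_recognition(context: str, uses_biometric_data: bool) -> str:
    """Return the (simplified) regime for an emotion recognition deployment."""
    if not uses_biometric_data:
        # Outside the biometric provisions; other rules (e.g. the GDPR) may still apply.
        return "outside biometric scope - assess separately"
    if context in PROHIBITED_CONTEXTS:
        return "prohibited (Art. 5(1)(f))"
    # All other biometric emotion recognition is in principle high-risk (Annex III),
    # triggering risk management, data quality, transparency, human oversight and robustness duties.
    return "high-risk (Annex III)"

print(classify_emotion_recognition("workplace", True))         # prohibited (Art. 5(1)(f))
print(classify_emotion_recognition("customer_service", True))  # high-risk (Annex III)
```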

Interconnection with the GDPR

The AP rightly emphasizes the connection with the GDPR. Biometric data processed for the purpose of unique identification constitutes special category personal data (Art. 9 GDPR). Processing very intimate and privacy-sensitive data, such as inferred emotions, requires a solid legal basis and strict compliance with data protection principles.

The AP's analysis: a fundamentally shaky foundation

The strict regulation is no coincidence. The AP concludes that the technology itself is built on a "shaky tower" of scientific assumptions. This is a crucial argument for organizations that must conduct a risk analysis or a Fundamental Rights Impact Assessment (FRIA).

Assumption of universality

Many systems assume that emotions are universal and expressed in the same way by everyone. Cultural, individual, and contextual differences are ignored.

The problematic proxy link

AI systems do not measure emotions; they measure physical signals such as heart rate or facial expressions. The link between such a signal and an underlying emotional state is highly unreliable.

The discrimination risk

The AP points to studies showing that systems assign more negative emotions to people with dark skin, a direct and unacceptable consequence of biased training data. A system trained on a limited, often Western dataset will inevitably lead to misinterpretations and discrimination.

The unreliability of proxies

As the AP chairman states: "A high heart rate is not always a sign of fear, and a loud voice is not always an expression of anger." This unreliability directly affects the requirements of robustness and accuracy that the AI Regulation places on high-risk systems.
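The quote can be turned into a deliberately naive toy example: a rule that treats a high heart rate as "fear" misfires as soon as the context changes. The threshold and labels below are invented purely for illustration.

```python
# Deliberately naive toy rule: map a physical signal (heart rate) to an emotion.
# The threshold and labels are invented; real systems are far more complex,
# but they share the same weakness: the signal does not reveal its cause.

def naive_emotion_from_heart_rate(bpm: int) -> str:
    return "fear" if bpm > 110 else "calm"

# The same reading of 125 bpm, three very different situations:
print(naive_emotion_from_heart_rate(125))  # watching a horror film      -> "fear"
print(naive_emotion_from_heart_rate(125))  # just climbed the stairs     -> "fear" (wrong)
print(naive_emotion_from_heart_rate(125))  # laughing hard with friends  -> "fear" (wrong)
```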

Risks in practice: findings from the AP's investigation

The AP has tested theory against practice by examining deployment in wearables, language models, and customer services. The findings have direct implications for compliance.

Wearables: lack of transparency

The practical test shows a significant lack of transparency. Users receive scores and graphs about their stress levels, but the underlying logic of the algorithm is a black box. This conflicts with the right to explanation under the AI Regulation (Art. 86-87) and the information obligation under the GDPR.

Compliance risk: presenting subjective and unreliable data as though it were objective measurement is misleading and can lead users to make poor decisions.

Language models (GPAI): inherent capabilities

The analysis of General-Purpose AI (GPAI) models is particularly relevant. These models can 'recognize' emotions as an emergent capability they were never explicitly trained for. They base their analyses on stereotypical characteristics and sometimes even refer to the disputed emotion theories in their training data.

For organizations integrating GPAI models into their own (high-risk) systems, this represents a significant compliance risk. The inherent and opaque capabilities of the underlying model must be included in their own risk assessment.

Customer service: insufficient justification

The investigation into customer services touches on the sore point of transparency and purpose limitation. The justification "quality and training purposes" is completely insufficient to legitimize the processing of (special category) personal data for emotion recognition. Customers are not explicitly and specifically informed, which prevents valid consent.

The double test: product safety versus societal desirability

One of the AP's most astute observations is the distinction between product regulation and the desirability question.

The AP's fireworks analogy

The AI Regulation functions primarily as product legislation. The goal is to ensure that a high-risk system that comes to market is safe and meets the established requirements. This is comparable to fireworks: legislation ensures that the product itself is safe, but the political consideration determines whether and where we want to set it off.

For emotion recognition, this means that a system can technically meet all requirements of the AI Regulation, but its deployment in a specific context may be socially undesirable.

This places a heavy ethical responsibility on organizations. Simply checking off the compliance requirements of the AI Regulation is not enough. A thorough Fundamental Rights Impact Assessment (FRIA), which will be mandatory for governments and certain private parties, must also address this desirability question.

Practical implications for organizations

The AP's report is more than non-binding advice: it is a clear indication of how the Dutch supervisory authority weighs the risks of this technology.

Action area | Concrete measures | Compliance impact
Classification | Verify if the application falls under the prohibition of Art. 5(1)(f) | High - can completely exclude use
Transparency | Explicit information about deployment, logic, and risks | High - required under Art. 86-87 AI Act
Impact assessment | DPIA and FRIA must address discrimination risks | Medium - documentation requirement
Suppliers | Critical requirements for training data and validation | Medium - contractual safeguards

Five concrete recommendations for organizations

1. Be extremely cautious

The fundamental flaws of the technology are so significant that its use is inherently risky. Consider whether there are alternatives that can achieve the goal in a less intrusive and more reliable way.

2. Know your classification

Verify whether your application falls under the prohibition of Article 5(1)(f). If not, assume it is a high-risk system under Annex III and align your compliance processes accordingly.

3. Ensure real transparency

The vague legal disclaimers that the AP flags in its report will not suffice under the AI Regulation. Be explicit about the system's deployment, logic, risks, and the rights of the data subject.

4. Conduct a thorough impact assessment

A DPIA (under GDPR) and FRIA (under AI Regulation) must explicitly address the fundamental scientific weaknesses and discrimination risks that the AP identifies.

5. Set critical requirements for suppliers

If you purchase a system from third parties, ask probing questions. Demand transparency about training data, validation, known limitations, and the degree of bias.
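One concrete question to put to a supplier is whether error rates have been measured per demographic group, in line with the discrimination risk the AP describes. The sketch below shows what such a check could look like; the group labels and evaluation records are assumptions for illustration only.

```python
from collections import defaultdict

# Sketch of a per-group error-rate check on hypothetical evaluation records.
# Each record: (demographic group, predicted emotion, human-annotated emotion).
records = [
    ("group_a", "angry", "neutral"),
    ("group_a", "happy", "happy"),
    ("group_b", "neutral", "neutral"),
    ("group_b", "happy", "happy"),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, annotated in records:
    counts[group][0] += predicted != annotated
    counts[group][1] += 1

for group, (wrong, total) in counts.items():
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
# A large gap between groups is a red flag for biased training data or validation.
```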

Practical tip: document why, despite the weaknesses identified by the AP, the deployment of the system is proportionate and necessary. Record this consideration explicitly in your compliance documentation, for example in a structured record like the sketch below.
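As one possible way to record that consideration in a structured form, the sketch below uses a simple Python data structure. The field names and example values are assumptions, not a prescribed template; a real record should follow your own DPIA and FRIA formats.

```python
from dataclasses import dataclass, field

# Illustrative record for the proportionality and necessity consideration.
# Field names and example values are assumptions, not a prescribed template.

@dataclass
class EmotionRecognitionAssessment:
    system_name: str
    purpose: str
    legal_regime: str                                   # e.g. "high-risk (Annex III)"
    less_intrusive_alternatives: list[str] = field(default_factory=list)
    why_necessary: str = ""
    known_weaknesses: list[str] = field(default_factory=list)
    decision: str = ""                                  # deploy / do not deploy, owner and date

record = EmotionRecognitionAssessment(
    system_name="call-centre sentiment module",
    purpose="quality monitoring",
    legal_regime="high-risk (Annex III)",
    less_intrusive_alternatives=["manual sampling of calls"],
    why_necessary="to be substantiated by the organization",
    known_weaknesses=["unreliable proxy between voice and emotion", "bias risk across groups"],
    decision="pending FRIA outcome",
)
print(record)
```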

Conclusion: a clear line in the sand

The AP has drawn a clear line in the sand with this report. The message to the market is clear: the time of uncritical and opaque deployment of emotion recognition is over. The AI Regulation formalizes this distrust, and the AP shows that it will closely monitor compliance.

For organizations, this means that emotion recognition can no longer be considered a neutral technology. It is a high-risk application that raises fundamental questions about discrimination, privacy, and human dignity. The time of experimenting without consequences is definitively over.