πŸ’Ό Employment

The Story of EmploTrack

When workplace monitoring hits an absolute prohibition β€” and how to change course

Fictional scenario β€” based on realistic situations

01

The Trigger

How it started

πŸ“§

EmploTrack had developed an innovative product: SentimentPulse analyzed facial expressions and voice tones to measure stress levels. HR departments were enthusiastic β€” finally objective data about employee wellbeing instead of subjective surveys.

Until the AI Act came into force. Article 5(1)(f) explicitly prohibits inferring emotions from biometric data in the workplace; the only carve-out, for medical or safety reasons, did not apply to wellbeing monitoring. EmploTrack's core product was suddenly illegal.

"We thought we were helping employees by preventing burnout. The regulator saw it differently: we were violating their fundamental rights."
02

The Questions

What did they need to find out?

Question 1

Why is emotion recognition in the workplace prohibited?

The team dove into the background of the prohibition. The legislator recognized something fundamental: in the workplace there is always a power imbalance. An employee cannot freely say "no" to monitoring by the employer.

πŸ’‘ The insight

Consent is illusory in an employment relationship. If your boss asks whether you want to participate in emotion recognition, you feel pressure to say yes β€” regardless of your actual comfort. That's why the legislator chose an absolute prohibition instead of consent as a legal basis.

🌍 Why this matters

The DPA confirmed this in its report: the unequal power relationship makes free and informed consent from employees illusory. This principle also applies to educational institutions, where the same power imbalance exists between student and institution.

Question 2

What makes biometric data so sensitive?

EmploTrack had thought of facial expressions as "ordinary" data. The legal analysis showed this was a misconception: biometric data used for identification or categorization are special category personal data under the GDPR.

πŸ’‘ The insight

Biometric data are uniquely identifying and immutable β€” you cannot reset your face like a password. Moreover, the inferred emotions are themselves "highly intimate and privacy-sensitive data." This triggers the strictest data protection regime.

🌍 Why this matters

The AI Act automatically classifies systems that use biometric data for categorization as high-risk. And in the workplace it goes further: emotion recognition is simply prohibited, regardless of how well your compliance processes are set up.

Question 3

Can the technology even reliably detect emotions?

Besides the legal objections, the team examined the scientific basis. The findings were sobering: the Data Protection Authority called the technology "dubious and risky," built on "contested assumptions."

πŸ’‘ The insight

The technology doesn't measure emotions β€” it measures physical signals and links them to assumed emotions. A high heart rate can be stress, but also excitement or coffee. A loud voice can be anger, but also enthusiasm. The proxy coupling is fundamentally unreliable.

🌍 Why this matters

Research shows that emotion recognition systems assign more negative emotions to people with darker skin β€” a direct discrimination risk. The idea that emotions are universal and expressed the same by everyone is scientifically contested. Cultural, individual and contextual differences are ignored.
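
To make the proxy problem concrete, below is a minimal sketch in Python (with entirely hypothetical signal names, thresholds and emotion labels) of how such a system maps physical measurements to emotion labels, and why any single label is arbitrary: the same readings are compatible with several unrelated explanations.

# Minimal sketch of proxy-based "emotion recognition" (hypothetical values).
# The point: the signals do not determine the emotion, so picking one label
# is a guess dressed up as a measurement.

signals = {"heart_rate_bpm": 110, "voice_volume_db": 78}  # one employee, one meeting

def infer_emotion(signals):
    """Naive rule-based inference: one label per threshold rule."""
    guesses = []
    if signals["heart_rate_bpm"] >= 100:
        guesses.append("stress")  # could equally be excitement, exercise or coffee
    if signals["voice_volume_db"] >= 75:
        guesses.append("anger")   # could equally be enthusiasm or a noisy room
    return guesses

print(infer_emotion(signals))  # ['stress', 'anger'] -- plausible-looking, not knowable

A real system is statistically more sophisticated than these threshold rules, but the underlying ambiguity of the proxy does not go away.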

Question 4

Is there a legal path to employee wellbeing monitoring?

The team had to redesign their product. Was there a way to monitor employee wellbeing without violating the prohibition?

πŸ’‘ The insight

Yes, but fundamentally different. Anonymous, aggregated surveys where employees voluntarily participate. No biometrics, no individual tracking. Focus on team-level insights instead of per-person stress scores. The value shifts from surveillance to listening.

🌍 Why this matters

The DPA points to the "fireworks analogy": the AI Act ensures that products are safe, but whether we want them as a society is a separate question. EmploTrack had to ask itself: even if this were technically possible, do we want to monitor employees this way?
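
As an illustration of what the redesigned approach can look like in practice, here is a minimal sketch (with hypothetical field names and an assumed reporting threshold) of team-level aggregation with small-group suppression: a team score is only reported when enough anonymous responses exist, so no individual can be read out of the result.

# Minimal sketch of anonymous, team-level wellbeing reporting (hypothetical data).
from statistics import mean

MIN_RESPONSES = 5  # assumed reporting threshold; the real value is a policy choice

# Anonymous, voluntary responses: (team, wellbeing score 1-5), no user identifiers.
responses = [
    ("support", 4), ("support", 2), ("support", 3),
    ("support", 5), ("support", 4), ("support", 3),
    ("engineering", 2), ("engineering", 4),   # too few responses to report
]

def team_report(responses, min_responses=MIN_RESPONSES):
    """Return the average wellbeing score per team, suppressing small groups."""
    by_team = {}
    for team, score in responses:
        by_team.setdefault(team, []).append(score)
    report = {}
    for team, scores in by_team.items():
        if len(scores) >= min_responses:
            report[team] = round(mean(scores), 2)
        else:
            report[team] = "suppressed (fewer than %d responses)" % min_responses
    return report

print(team_report(responses))
# {'support': 3.5, 'engineering': 'suppressed (fewer than 5 responses)'}

The threshold itself is a policy choice rather than a technical one; the sketch only shows the structural shift from per-person stress scores to suppressed, team-level aggregates.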

03

The Journey

Step by step to compliance

Step 1 of 6
πŸ“ˆ

Product-market success

SentimentPulse was adopted by multiple enterprises. HR departments were enthusiastic about "objective wellbeing data."

Step 2 of 6
β›”

The AI Act check

During AI Act compliance preparation, it turned out that emotion recognition in the workplace is explicitly prohibited under Article 5(1)(f); the narrow carve-out for medical or safety reasons did not apply.

Step 3 of 6
πŸ”¬

Scientific reflection

The team investigated the scientific basis. The findings: "dubious and risky," built on "contested assumptions" with discrimination risks.

Step 4 of 6
πŸ”„

Product transformation

From biometric surveillance to anonymous, voluntary wellbeing surveys. Focus on team-level insights, not individual scores.

Step 5 of 6
πŸ“’

Customer communication

Existing customers were informed about the legal change and migration to the new product. Some left; others valued the ethical course change.

Step 6 of 6
🎯

New positioning

EmploTrack positioned itself as "privacy-first wellbeing platform" β€” a differentiator in a market full of surveillance tools.

04

The Obstacles

What went wrong?

Obstacle 1

βœ— Challenge

Core product turned out to be prohibited under AI Act Article 5(1)(f)

↓

βœ“ Solution

Complete product transformation to anonymous, voluntary surveys

Obstacle 2

βœ— Challenge

Scientific basis of emotion recognition was "dubious and risky"

↓

βœ“ Solution

Accept that the technology is fundamentally unreliable and look for alternatives

Obstacle 3

βœ— Challenge

Losing existing customers who wanted surveillance features

↓

βœ“ Solution

Repositioning as privacy-first platform, attracting new customers who value ethics

"We thought we were helping employees by measuring them. We learned that listening β€” with respect for privacy β€” is more effective than surveillance."
β€” Lisa de Jong, CEO, EmploTrack
05

The Lessons

What can we learn from this?

Lesson 1 / 4
β›”

Some applications are simply prohibited

Not everything is a compliance matter. Some AI applications are illegal, period. Check Article 5 first.

Lesson 2 / 4
βš–οΈ

Consent doesn't work everywhere

In power relationships (employer-employee, school-student) free consent is illusory.

Lesson 3 / 4
πŸ”¬

Ask the scientific question

Can the technology even do what we claim? For emotion recognition: probably not.

Lesson 4 / 4
πŸ†

Compliance can be a differentiator

Privacy-first positioning can be a competitive advantage in a market full of surveillance.

Does your organization use biometric analysis?

Check if your application falls under an AI Act prohibition before investing further.