Since February 2, 2025: The use of AI for emotion recognition during job interviews has been prohibited under the AI Act. In addition, many other AI applications in HR are now classified as high-risk, meaning they must meet strict requirements by August 2026.
The core message: AI in HR often falls under strict rules
During the first AI Supervision Congress in December 2025, organized by the Netherlands Authority for Digital Infrastructure (RDI) and the Dutch Data Protection Authority (AP), session 8 was entirely dedicated to AI in recruitment and selection. The message was clear: this is one of the most regulated application areas under the AI Act.
The reason? AI decisions in HR directly affect people's fundamental rights:
- Access to work and income
- Protection against discrimination
- Privacy and human dignity
Why is HR-AI high-risk?
Decisions about who gets hired, promoted, or fired have a significant impact on individuals' life paths. AI systems can amplify existing biases and automate discrimination at scale. That is why the EU decided to subject these applications to the strictest requirements.
Three categories: prohibited, high-risk, and low risk
🚫 Prohibited: Emotion recognition in recruitment processes
Since February 2, 2025, it has been prohibited to use AI that detects emotions in the workplace or during job application procedures, unless for medical or safety reasons.
This prohibition affects technologies that claim to:
- Detect nervousness or stress during interviews
- Infer "personality traits" from facial expressions
- Use voice analysis to measure motivation or reliability
- Deploy eye-tracking to assess focus or interest
Stop immediately: Does your organization or a vendor use AI tools that claim to "read" emotions during job interviews? Discontinue them at once: this is a prohibited practice under Article 5 of the AI Act.
Moreover, the scientific basis for such tools is weak. The Dutch DPA concluded in its report on emotion recognition that these technologies often do not do what they promise and can have discriminatory effects.
⚠️ High-risk: Most AI in recruitment and selection
Annex III, point 4 of the AI Act explicitly classifies the following AI applications as high-risk:
| AI Application | Why high-risk? | Deadline |
|---|---|---|
| CV screening & matching | Determines who advances to the next round | August 2026 |
| Candidate ranking | Influences who gets invited | August 2026 |
| Automated application rejections | Direct impact on access to work | August 2026 |
| Performance monitoring | Can lead to dismissal or demotion | August 2026 |
| Employee turnover prediction | Can reinforce biases about who is "at risk" | August 2026 |
✅ Low risk: Supporting tools
Not all AI in HR is high-risk. Tools that have no direct impact on decisions about individuals often fall outside the strict requirements:
Shift scheduling
AI that helps plan shifts without evaluating individuals.
Job posting optimization
Tools that help write inclusive job descriptions.
General HR chatbots
Assistants for FAQs about employment conditions (no decisions).
Anonymous feedback analysis
Sentiment analysis at aggregate level without individual identification.
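To illustrate what "aggregate level without individual identification" can mean in practice, the sketch below averages sentiment per team and suppresses any team below a minimum group size. The threshold of five and the data shape are assumptions for illustration, not a legal standard.

```python
# Minimal sketch: average sentiment per team, suppressing any team below
# a minimum group size so results cannot be traced back to individuals.
# The threshold of 5 and the data shape are assumptions for illustration.
from collections import defaultdict

MIN_GROUP_SIZE = 5  # below this, an "aggregate" could identify someone

def aggregate_sentiment(scores: list[tuple[str, float]]) -> dict[str, float]:
    """scores: (team, sentiment in [-1, 1]) per anonymous response."""
    by_team: dict[str, list[float]] = defaultdict(list)
    for team, score in scores:
        by_team[team].append(score)
    return {
        team: sum(vals) / len(vals)
        for team, vals in by_team.items()
        if len(vals) >= MIN_GROUP_SIZE  # drop small groups entirely
    }

print(aggregate_sentiment([("sales", 0.4)] * 6 + [("legal", -0.2)] * 2))
# -> only 'sales' appears; 'legal' is suppressed (just 2 responses)
```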
Note: Even "low risk" tools must comply with general AI Act obligations, such as transparency and AI literacy.
The AI value chain: Shared responsibility
A crucial message from the AI Supervision Congress was that responsibility doesn't rest solely with the software vendor. The AI Act introduces a value chain approach:
Who is responsible?
Providers (the vendors of AI tools) AND deployers (the organizations that use them) each have their own obligations. Purchasing a CE-marked product does not exempt you from your own responsibilities as a user.
Obligations for providers of high-risk recruitment and selection (R&S) AI
Essential requirements
Comply with requirements for risk management, data governance, and documentation.
Conformity assessment
Standard internal procedure (no notified body needed for HR-AI).
Registration
Register the system in the EU database of AI systems.
CE marking
Apply the CE marking after successful assessment.
Standard in development: The harmonized standard prEN 18286 for AI quality management systems is currently open for comment. It is expected to become the standard against which providers of high-risk AI demonstrate compliance.
Obligations for deployers of high-risk R&S AI
As an employer using AI tools for recruitment and selection:
- Don't use AI without CE marking → Verify whether the vendor is certified
- Follow the instructions for use → Deploy the system as the provider intended
- Organize human oversight → Ensure qualified people review the output
- Use representative input data → Prevent the data itself from introducing bias
- Monitor the system → Track performance and potential discrimination
- Retain logs → As required by Article 26 of the AI Act (a minimal logging sketch follows below)
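To make the log-retention duty concrete, here is a minimal sketch of a deployer-side decision log: one structured record per AI-assisted screening outcome, appended to a JSON-lines file. The field names, the pseudonymous candidate reference, and the file format are all assumptions for illustration; the AI Act prescribes retention, not a format.

```python
# Minimal sketch of a deployer-side decision log (illustrative field names;
# Article 26 requires retaining logs, but does not prescribe this format).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningLogEntry:
    candidate_ref: str      # pseudonymous reference, not a name
    vacancy_id: str
    ai_score: float         # score returned by the screening tool
    ai_recommendation: str  # e.g. "advance" or "reject"
    human_decision: str     # final decision after human review
    reviewer_id: str        # who exercised human oversight
    timestamp: str          # UTC, ISO 8601

def log_decision(entry: ScreeningLogEntry,
                 path: str = "screening_log.jsonl") -> None:
    """Append one record per decision; retain per your retention policy."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_decision(ScreeningLogEntry(
    candidate_ref="cand-8372",
    vacancy_id="vac-2025-014",
    ai_score=0.71,
    ai_recommendation="advance",
    human_decision="advance",
    reviewer_id="hr-anna",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```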
FRIA required for governments: If you're a government organization or provide public services, you must also conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27 of the AI Act. More on this in our DPIA vs FRIA comparison.
Practical examples: What should you do concretely?
Scenario 1: You use a CV screening tool
The tool: A SaaS solution that automatically scores and ranks CVs based on job requirements.
Classification: High-risk (Annex III, point 4a)
Compliance checklist:
- [ ] Ask the vendor for proof of CE marking (before August 2026)
- [ ] Request technical documentation and instructions for use
- [ ] Assign a person responsible for reviewing AI rankings
- [ ] Verify whether the training data was representative
- [ ] Monitor for potential discrimination patterns (gender, age, ethnicity); see the monitoring sketch after this checklist
- [ ] Document how you use the system and what decisions it supports
- [ ] Inform candidates that AI is used in the process
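For the monitoring item above, one common starting point is to compare selection rates across groups and compute an adverse-impact ratio. The four-fifths (0.8) threshold below is borrowed from US adverse-impact practice and is an assumption here, not an AI Act requirement; treat it as a tripwire for investigation, not a compliance guarantee.

```python
# Illustrative adverse-impact check: compare selection rates across groups.
# The 0.8 ("four-fifths") threshold is borrowed from US practice and is an
# assumption here; the AI Act does not prescribe a specific fairness metric.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_advanced) for each candidate."""
    totals: Counter = Counter()
    advanced: Counter = Counter()
    for group, ok in outcomes:
        totals[group] += 1
        advanced[group] += ok
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate divided by highest; < 0.8 warrants investigation."""
    return min(rates.values()) / max(rates.values())

outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates, adverse_impact_ratio(rates))  # 0.5 here -> investigate
```

Small samples make these ratios noisy, so track them over time rather than reacting to a single batch of applications.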
Scenario 2: You're considering a video interview tool with "personality analysis"
The tool: A platform that analyzes video interviews for facial expressions, voice patterns, and word choice to score "soft skills."
Classification: Prohibited if it detects emotions; high-risk if it analyzes other characteristics
Action:
- Immediately stop using tools that claim to recognize emotions
- Ask the vendor to clarify exactly what the tool measures
- When in doubt: don't use it
Scenario 3: You use AI to optimize shift planning
The tool: An algorithm that combines availability, preferences, and workload to create schedules.
Classification: Low risk (no individual assessment impacting careers)
Obligations:
- Transparency to employees about how the tool works
- AI literacy for system users
Timeline: When must everything be in place?
| Date | What applies? | For whom? |
|---|---|---|
| Feb 2, 2025 | Prohibition on emotion recognition in HR | All organizations |
| Feb 2, 2025 | AI literacy mandatory | All organizations using AI |
| Aug 2, 2026 | High-risk requirements in force | Providers and deployers of HR-AI |
| Aug 2, 2026 | Supervisory authorities begin enforcement | Dutch DPA and RDI as coordinating supervisors |
What does this mean for HR software vendors?
The Dutch DPA is watching
During the AI Supervision Congress, the Dutch DPA made clear: "Keep an eye on us!" Further guidelines and possibly investigations into HR-AI tools on the Dutch market are coming.
For providers of AI tools in recruitment and selection:
- Start conformity preparation now → August 2026 is approaching fast
- Follow standards development → prEN 18286 for quality management is in progress
- Document proactively → Technical documentation, risk management, data governance
- Prepare for CE marking → For HR-AI, the procedure is an internal assessment
- Register on time → Registration in the EU database of AI systems is mandatory for high-risk systems
Digital Omnibus proposal: Relaxation coming?
The AI Supervision Congress also mentioned the Digital Omnibus Regulation, in which the European Commission proposes to simplify certain AI Act rules. This could impact obligations for high-risk AI.
However, the core obligations for HR-AI are likely to remain intact, given the fundamental rights at stake. Follow our updates on the Digital Omnibus for the latest developments.
Concrete steps for HR departments
This week
- Inventory which AI tools you use in recruitment, selection, and HR
- Check for emotion recognition → Immediately stop using tools that do this
- Ask vendors about their AI Act roadmap
This month
- Classify your tools → Prohibited, high-risk, or low risk?
- Start AI literacy for HR staff
- Document current processes and how AI plays a role
This quarter
- Set contract requirements for vendors (CE marking, documentation)
- Design human oversight → Decide who reviews AI output, and how (see the sketch after this list)
- Begin monitoring → Measure discrimination indicators
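What "design human oversight" can look like in software, as referenced above: the sketch below treats the AI tool's output as a recommendation and blocks any rejection until a named reviewer signs off. All names and fields are illustrative assumptions, not part of any vendor API or of the AI Act text.

```python
# Hedged sketch of a human-oversight gate: the AI output is only a
# recommendation, and no rejection is finalized without a named reviewer.
# All names and fields are illustrative, not from any vendor API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiRecommendation:
    candidate_ref: str
    recommendation: str  # "advance" or "reject"
    score: float

def finalize(rec: AiRecommendation, reviewer_id: Optional[str]) -> str:
    """Block automated rejections: a human must sign off first."""
    if rec.recommendation == "reject" and reviewer_id is None:
        return "pending_human_review"  # queued for a qualified reviewer
    return rec.recommendation          # human-confirmed outcome

print(finalize(AiRecommendation("cand-1", "reject", 0.22), reviewer_id=None))
# -> pending_human_review: nothing is sent to the candidate yet
```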
Need help? Check our HR & Employment sector page for comprehensive compliance checklists, or join one of our trainings on the AI Act for HR professionals.
Conclusion: Start now, not in August 2026
The AI Act sets strict requirements for AI in recruitment and selection, and for good reason. The impact of automated HR decisions on people's lives is too significant to leave to black-box algorithms.
Three key messages:
- Emotion recognition is already prohibited → Check today whether your vendors do this
- Human oversight is mandatory → AI may advise, but not decide autonomously
- The value chain shares responsibility → Even as a buyer, you have obligations
Organizations that take action now are not only building compliance capacity, but also trust with applicants and employees. In a tight labor market, that can be a significant competitive advantage.
🎯 Need training for your HR team? Schedule a call to discuss how we can help your organization with AI Act compliance in recruitment and selection.