Responsible AI Platform

AI in Recruitment & Selection: What's prohibited, what's high-risk, and what's allowed?


Since February 2, 2025: The use of AI for emotion recognition during job interviews is prohibited under the AI Act. Additionally, many other AI applications in HR are now classified as high-risk, meaning they must meet strict requirements by August 2026.

The core message: AI in HR often falls under strict rules

During the first AI Supervision Congress in December 2025, organized by the Netherlands Authority for Digital Infrastructure (RDI) and the Dutch Data Protection Authority (AP), session 8 was entirely dedicated to AI in recruitment and selection. The message was clear: this is one of the most regulated application areas under the AI Act.

The reason? AI decisions in HR directly affect people's fundamental rights:

  • Access to work and income
  • Protection against discrimination
  • Privacy and human dignity

Why is HR-AI high-risk?

Decisions about who gets hired, promoted, or fired have significant impact on individuals' life paths. AI systems can amplify existing biases and automate discrimination at scale. That's why the EU decided to subject these applications to the strictest requirements.


Three categories: Prohibited, High-Risk, Low Risk

🚫 Prohibited: Emotion recognition in recruitment processes

Since February 2, 2025, it has been prohibited to use AI that detects emotions in the workplace or during job application procedures, except for medical or safety reasons.

This prohibition affects technologies that claim to:

  • Detect nervousness or stress during interviews
  • Infer "personality traits" from facial expressions
  • Use voice analysis to measure motivation or reliability
  • Deploy eye-tracking to assess focus or interest

Stop immediately: Does your organization or a vendor use AI tools that claim to "read" emotions during job interviews? Discontinue them now. This is a prohibited practice under Article 5 of the AI Act.

Moreover, the scientific basis for such tools is weak. The Dutch DPA concluded in its report on emotion recognition that these technologies often don't do what they promise and can have discriminatory effects.

โš ๏ธ High-Risk: Most AI in recruitment and selection

Annex III, point 4 of the AI Act explicitly classifies the following AI applications as high-risk:

| AI application | Why high-risk? | Deadline |
| --- | --- | --- |
| CV screening & matching | Determines who advances to the next round | August 2026 |
| Candidate ranking | Influences who gets invited | August 2026 |
| Automated application rejections | Direct impact on access to work | August 2026 |
| Performance monitoring | Can lead to dismissal or demotion | August 2026 |
| Employee turnover prediction | Can reinforce biases about who is "at risk" | August 2026 |

✅ Low risk: Supporting tools

Not all AI in HR is high-risk. Tools that have no direct impact on decisions about individuals often fall outside the strict requirements:

  1. Shift scheduling – AI that helps plan shifts without evaluating individuals.
  2. Job posting optimization – Tools that help write inclusive job descriptions.
  3. General HR chatbots – Assistants for FAQs about employment conditions (no decisions).
  4. Anonymous feedback analysis – Sentiment analysis at aggregate level without individual identification.

Note: Even "low risk" tools must comply with general AI Act obligations, such as transparency and AI literacy.


The AI value chain: Shared responsibility

A crucial message from the AI Supervision Congress was that responsibility doesn't rest solely with the software vendor. The AI Act introduces a value chain approach:

Who is responsible?

Providers (the vendors of AI tools) AND deployers (the organizations that use them) each have their own obligations. Purchasing a CE-marked product does not exempt you from your own responsibilities as a user.

Obligations for providers of high-risk R&S AI

  1. Essential requirements – Comply with requirements for risk management, data governance, and documentation.
  2. Conformity assessment – Standard internal procedure (no notified body needed for HR-AI).
  3. Registration – Register the system in the EU database of AI systems.
  4. CE marking – Apply the CE marking after successful assessment.

Standard in development: The harmonized standard prEN 18286 for quality management systems for AI is currently open for comment. This will become the standard that high-risk AI providers must comply with.

Obligations for deployers of high-risk R&S AI

As an employer using AI tools for recruitment and selection:

  1. Don't use AI without CE marking – Verify that the vendor is certified
  2. Follow the instructions for use – Deploy the system as intended by the provider
  3. Organize human oversight – Ensure qualified people review the output
  4. Use representative input data – Prevent the data itself from introducing bias
  5. Monitor the system – Track performance and potential discrimination (a monitoring sketch follows below)
  6. Retain logs – As required by Article 26 of the AI Act

FRIA required for governments: If you're a government organization or provide public services, you must also conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27 of the AI Act. More on this in our DPIA vs FRIA comparison.
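
What "monitor for potential discrimination" can look like in practice: below is a minimal Python sketch, assuming you can periodically export per-group selection outcomes from your screening tool. The group labels, the data shape, and the 0.8 threshold are illustrative assumptions; the four-fifths (80%) rule is a well-known heuristic from employment-discrimination practice, not an AI Act requirement, so calibrate thresholds with your legal team.

```python
# Minimal monitoring sketch: compare per-group selection rates from a
# periodic export of screening outcomes. All names and thresholds here
# are illustrative assumptions, not prescribed by the AI Act.

from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: one (group_label, advanced_to_next_round) pair per candidate."""
    totals: Counter[str] = Counter()
    advanced: Counter[str] = Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {group: advanced[group] / totals[group] for group in totals}

def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `threshold` times the
    best-performing group's rate (the four-fifths heuristic)."""
    best = max(rates.values())
    if best == 0:
        return []
    return [group for group, rate in rates.items() if rate / best < threshold]

# Synthetic monthly export: group label + whether the candidate advanced.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_flags(rates))  # ['B'] -> investigate and document
```

Run something like this on every export cycle and investigate (rather than automatically act on) any flagged group; the goal is a documented monitoring routine, not an automated verdict.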


Practical examples: What should you do concretely?

Scenario 1: You use a CV screening tool

The tool: A SaaS solution that automatically scores and ranks CVs based on job requirements.

Classification: High-risk (Annex III, point 4a)

Compliance checklist:

  • [ ] Ask the vendor for proof of CE marking (before August 2026)
  • [ ] Request technical documentation and instructions for use
  • [ ] Assign a person responsible for reviewing AI rankings
  • [ ] Verify if training data was representative
  • [ ] Monitor for potential discrimination patterns (gender, age, ethnicity)
  • [ ] Document how you use the system and what decisions it supports (see the log-entry sketch after this checklist)
  • [ ] Inform candidates that AI is used in the process
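
To make the "document" and "retain logs" items concrete, here is a hedged sketch of what a deployer-side log entry could capture per AI-assisted screening decision. The `ScreeningLogEntry` structure and its field names are illustrative assumptions, not a schema prescribed by the AI Act; the aim is simply traceability of which system produced which recommendation and who reviewed it.

```python
# Illustrative log entry per AI-assisted screening decision. Field names
# are assumptions, not an AI Act schema; the goal is traceability: which
# system ran, what it recommended, and which human reviewed the outcome.

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningLogEntry:
    system_name: str          # hypothetical tool identifier
    system_version: str       # vendor version, for traceability
    candidate_ref: str        # pseudonymous ID, not raw personal data
    ai_recommendation: str    # e.g. "advance" or "reject"
    human_reviewer: str       # who exercised human oversight
    final_decision: str       # may differ from the AI recommendation
    candidate_informed: bool  # transparency toward the candidate
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = ScreeningLogEntry(
    system_name="cv-screener",            # hypothetical
    system_version="2.3.1",
    candidate_ref="cand-00042",
    ai_recommendation="reject",
    human_reviewer="hr.lead@example.org",
    final_decision="advance",             # the human overrode the AI ranking
    candidate_informed=True,
)
print(json.dumps(asdict(entry), indent=2))  # append to your retained logs
```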

Scenario 2: You're considering a video interview tool with "personality analysis"

The tool: A platform that analyzes video interviews for facial expressions, voice patterns, and word choice to score "soft skills."

Classification: Prohibited if it detects emotions; high-risk if it analyzes other characteristics

Action:

  • Immediately stop using tools that claim to recognize emotions
  • Ask the vendor to clarify exactly what the tool measures
  • When in doubt: don't use it

Scenario 3: You use AI to optimize shift planning

The tool: An algorithm that combines availability, preferences, and workload to create schedules.

Classification: Low risk (no individual assessment impacting careers)

Obligations:

  • Transparency to employees about how the tool works
  • AI literacy for system users

Timeline: When must everything be in place?

| Date | What applies? | For whom? |
| --- | --- | --- |
| Feb 2, 2025 | Prohibition on emotion recognition in HR | All organizations |
| Feb 2, 2025 | AI literacy mandatory | All organizations using AI |
| Aug 2, 2026 | High-risk requirements in force | Providers and deployers of HR-AI |
| Aug 2, 2026 | Supervisory authorities begin enforcement | Dutch DPA and RDI as coordinating supervisors |

What does this mean for HR software vendors?

The Dutch DPA is watching

During the AI Supervision Congress, the Dutch DPA made clear: "Keep an eye on us!" Further guidelines and possibly investigations into HR-AI tools on the Dutch market are coming.

For providers of AI tools in recruitment and selection:

  1. Start conformity preparation now – August 2026 is approaching fast
  2. Follow standards development – prEN 18286 for quality management is in progress
  3. Document proactively – Technical documentation, risk management, data governance
  4. Prepare CE marking – The procedure is an internal assessment for HR-AI
  5. Register on time – High-risk systems must be entered in the EU database of AI systems

Digital Omnibus proposal: Relaxation coming?

The AI Supervision Congress also mentioned the Digital Omnibus Regulation, in which the European Commission proposes to simplify certain AI Act rules. This could impact obligations for high-risk AI.

However: the core obligations for HR-AI will likely remain intact, given the fundamental rights at stake. Follow our updates on the Digital Omnibus for the latest developments.


Concrete steps for HR departments

This week

  1. Inventory which AI tools you use in recruitment, selection, and HR
  2. Check for emotion recognition – Immediately stop using tools that do this
  3. Ask vendors about their AI Act roadmap

This month

  1. Classify your tools – Prohibited, high-risk, or low risk? (see the inventory sketch after this list)
  2. Start AI literacy for HR staff
  3. Document current processes and how AI plays a role
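
As a starting point for the inventory and classification steps, here is a minimal sketch of a tool register in Python. The `RiskCategory` values mirror the three categories in this article; the tool names, vendors, and fields are hypothetical, and the classification itself should come from human legal/HR review, not from code.

```python
# A starter inventory structure. Risk categories mirror this article's
# three buckets; tool names and vendors are hypothetical, and the
# classification itself should come from human legal/HR review.

from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"  # e.g. emotion recognition in interviews
    HIGH_RISK = "high-risk"    # Annex III, point 4 applications
    LOW_RISK = "low-risk"      # supporting tools without individual impact

@dataclass
class HRTool:
    name: str
    vendor: str
    purpose: str
    category: RiskCategory
    ce_marking_confirmed: bool  # for high-risk: ask the vendor (by Aug 2026)

inventory = [
    HRTool("cv-screener", "ExampleVendor", "CV ranking", RiskCategory.HIGH_RISK, False),
    HRTool("shift-planner", "ExampleVendor", "Shift scheduling", RiskCategory.LOW_RISK, False),
]

# Surface the items that need action first.
for tool in inventory:
    if tool.category is RiskCategory.PROHIBITED:
        print(f"STOP NOW: {tool.name}")
    elif tool.category is RiskCategory.HIGH_RISK and not tool.ce_marking_confirmed:
        print(f"Ask vendor for CE marking status: {tool.name}")
```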

This quarter

  1. Set contract requirements for vendors (CE marking, documentation)
  2. Design human oversight – Who reviews AI output and how?
  3. Begin monitoring – Measure discrimination indicators

Need help? Check our HR & Employment sector page for comprehensive compliance checklists, or join one of our trainings on the AI Act for HR professionals.


Conclusion: Start now, not in August 2026

The AI Act sets strict requirements for AI in recruitment and selection – and for good reason. The impact of automated HR decisions on people's lives is too significant to leave to black-box algorithms.

Three key messages:

  1. Emotion recognition is already prohibited – Check today if your vendors do this
  2. Human oversight is mandatory – AI may advise, not autonomously decide
  3. The value chain shares responsibility – Even as a buyer, you have obligations

Organizations that take action now are not only building compliance capacity, but also trust with applicants and employees. In a tight labor market, that can be a significant competitive advantage.


Sources

RDI / Dutch Data Protection Authority: Session 8: Supervision of Annex III in practice - Recruitment and Selection (December 2025)
Dutch Data Protection Authority: Getting started with AI literacy (2025)

🎯 Need training for your HR team? Schedule a call to discuss how we can help your organization with AI Act compliance in recruitment and selection.