Imagine...
You're applying for your dream job. An AI system evaluates your CV before any human ever sees it. Who protects you from a biased algorithm?
The Case
TalentFlow BV
A familiar scenario
Company: TalentFlow BV
Size: 250 employees
Industry: Technology & consultancy
Location: Amsterdam
Sarah van der Berg
HR Director
New recruitment system - approval needed
Hi team, I had a demo of SmartRecruit Pro and I'm excited! They claim 70% time savings...
Chapter 1
What is AI?
The definition that determines everything
Before you know which rules apply, you need to know if you're dealing with an AI system at all. The EU AI Act defines this in 7 elements.
"AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The 7 Elements
The seven elements, pulled straight from the definition:
1. A machine-based system
2. With varying levels of autonomy
3. Possible adaptiveness after deployment
4. Explicit or implicit objectives
5. Inference from the input it receives
6. Outputs such as predictions, content, recommendations, or decisions
7. Influence on physical or virtual environments
The key point
Inference is the essential condition. Without the ability to infer how to generate outputs, it's not an AI system.
What is NOT AI?
Want to understand why some software falls under the AI Act and other software doesn't?
Episode 1 · SmartRecruit Pro
Is SmartRecruit Pro an AI system?
Let's apply the 7 elements:
✓ Yes, SmartRecruit Pro is an AI system
Now that we know it's AI, we need to determine: what risk level? →
An AI system must be able to infer from input how to generate outputs. Does it only follow fixed rules? Then it's regular software.
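Curious what that distinction looks like in practice? Here is a minimal sketch in Python, assuming scikit-learn is installed; the features, threshold, and data are invented for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Regular software: a fixed, hand-written rule. The input-to-output logic
# never changes unless a developer edits it -- nothing is inferred.
def rule_based_screen(cv: dict) -> bool:
    return cv["years_experience"] >= 3 and "python" in cv["skills"]

# AI system: the input-to-output mapping is *inferred* from data.
# This logistic regression learns its own decision boundary.
X = [[1, 0], [4, 1], [2, 1], [7, 1]]  # toy features: [years, knows_python]
y = [0, 1, 0, 1]                      # past hiring outcomes
model = LogisticRegression().fit(X, y)

def ai_screen(features: list) -> bool:
    # The decision rule was learned, not written by hand.
    return bool(model.predict([features])[0])

print(rule_based_screen({"years_experience": 4, "skills": {"python"}}))  # True
print(ai_screen([4, 1]))                                                 # learned
```

The first function will never surprise you; the second one might. That capacity to surprise is exactly what the AI Act regulates.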
How strict are the rules?
The answer lies in a pyramid.
Chapter 2
The Pyramid
Not all AI is equal
The four levels, from bottom to top:
- Minimal risk: most AI, like spam filters and games; no extra obligations
- Limited risk: transparency obligations, like chatbots
- High risk: strict requirements, like recruitment and credit scoring
- Unacceptable risk: prohibited outright
"The more impact on human lives, the more responsibility."
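To make the pyramid concrete, here is a toy lookup in Python. The use cases and tier labels are illustrative; real classification follows Article 5, Annex III, and the transparency provisions, not a dictionary:

```python
# Hypothetical mapping of example use cases to the four risk tiers.
RISK_TIERS = {
    "social_scoring":   "unacceptable: prohibited outright",
    "cv_screening":     "high: Annex III (employment)",
    "customer_chatbot": "limited: transparency duties",
    "spam_filter":      "minimal: no extra obligations",
}

def risk_tier(use_case: str) -> str:
    # Unknown use cases need a real legal assessment, not a default.
    return RISK_TIERS.get(use_case, "unknown: assess against the Act")

print(risk_tier("cv_screening"))  # high: Annex III (employment)
```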
Episode 2 · SmartRecruit Pro
Where does SmartRecruit Pro fall?
SmartRecruit Pro falls under High Risk because it screens CVs, evaluates video interviews, and ranks candidates: employment uses explicitly listed in Annex III.
⚠️ High Risk AI
This means TalentFlow must comply with strict requirements: CE marking, conformity assessment, human oversight, and more.
But what is TalentFlow's exact role? Provider or deployer? →
The higher the risk, the stricter the rules.
But who bears responsibility?
That depends on your role.
Chapter 3
The Players
Which role do you play?
The AI Act assigns different roles. Each role carries its own responsibilities. The question is: where do you stand?
4
Different roles in the AI value chain: provider, deployer, importer, and distributor
Episode 3 · SmartRecruit Pro
What is TalentFlow's role?
TalentFlow buys SmartRecruit Pro from RecruitTech Solutions (a US company). TalentFlow doesn't develop the software themselves, but will use it for recruitment decisions.
TalentFlow is a Deployer
TalentFlow is the party that "deploys" the AI system within their organization. RecruitTech Solutions is the Provider (developer), and since they're from outside the EU, an Importer is also needed.
What must TalentFlow do?
Now that we know TalentFlow is a deployer, what concrete obligations apply? →
Where is the AI Act strictest?
8 critical sectors.
Chapter 4
The Sectors
The AI Act identifies 8 areas where AI carries the most risks.
Biometrics
Your face, your identity
Facial recognition, fingerprint analysis, emotion detection. AI that recognizes you.
Critical Infrastructure
What we all depend on
Energy, water, transport. An AI error affects everyone.
Education
The future of our children
Who gets to study? How are students evaluated?
Employment
Who gets the job?
CV screening, video interviews, performance monitoring.
Essential Services
Access to the social safety net
Benefits, subsidies, social housing. Who gets help?
Law Enforcement
Freedom versus security
Predictive policing, risk profiling, evidence analysis.
Migration
Who gets in?
Visa applications, asylum assessments, border controls.
Democracy
The foundation of our society
AI that influences elections or court cases.
Episode 4 · SmartRecruit Pro
SmartRecruit Pro's sector
Employment
SmartRecruit Pro falls in sector 4: Employment. CV screening, video interviews, and candidate ranking are explicitly high-risk applications.
Chatbots: 55% of voting advice to just 2 parties
"The vacuum cleaner effect: profiles are sucked toward the extremes, regardless of actual preferences."
In one chatbot, 80% of all advice went to GroenLinks-PvdA or PVV. Transparency and verifiability completely absent.
How does the human stay in control?
Human oversight is essential.
Chapter 5
Human Oversight
Humans remain in control
High-risk AI cannot autonomously decide about human lives. A human must always be watching and able to intervene.
Levels of oversight
From human-in-the-loop (a person approves each decision), via human-on-the-loop (a person monitors and can intervene), to human-in-command (a person retains overall control).
Episode 5 · SmartRecruit Pro
Oversight at TalentFlow
Recruiters must understand
Why does SmartRecruit Pro rank a candidate higher? Recruiters must be able to interpret the AI output.
Override capability
TalentFlow must be able to deviate from AI rankings. A recruiter can decide to invite a "lower ranked" candidate anyway.
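What could that override look like inside a system? A hypothetical sketch; the field and function names are invented, not taken from SmartRecruit Pro or the AI Act text:

```python
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_rank: int          # ranking suggested by the AI system
    final_decision: str   # "invite" or "reject", always set by a human
    overridden: bool      # did the recruiter deviate from the AI suggestion?
    override_reason: str  # logged so overrides can be audited later
    decided_by: str       # the accountable recruiter

def human_review(candidate_id: str, ai_rank: int, recruiter: str,
                 invite: bool, reason: str = "") -> ScreeningDecision:
    """The AI proposes, the human decides; deviations are recorded."""
    suggested_invite = ai_rank <= 10  # assumption: the AI suggests its top 10
    return ScreeningDecision(
        candidate_id=candidate_id,
        ai_rank=ai_rank,
        final_decision="invite" if invite else "reject",
        overridden=(invite != suggested_invite),
        override_reason=reason,
        decided_by=recruiter,
    )

# A recruiter invites a "lower ranked" candidate anyway:
print(human_review("c-042", ai_rank=57, recruiter="s.vandenberg",
                   invite=True, reason="strong portfolio the model missed"))
```

The point isn't the code; it's that the human decision, and the reason for it, leaves a trace.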
Want to discover why "keeping humans in control" is easier said than done?
Scenario
A bank uses AI to assess mortgage applications. The AI rejects an application due to "insufficient creditworthiness".
What is the minimum required oversight?
Which assessment do you need?
DPIA or FRIA: it matters.
Chapter 6
DPIA vs FRIA
Which impact assessment do you need?
GDPR requires a DPIA. The AI Act requires a FRIA. But when do you need which, and can you combine them?
DPIA
Data Protection Impact Assessment
FRIA
Fundamental Rights Impact Assessment
Overlap
Both assessments look at risks for individuals. You can often combine them, but the AI Act sets specific requirements for how you assess human rights.
When which?
Episode 6 · SmartRecruit Pro
Which assessment for TalentFlow?
Double obligation
SmartRecruit Pro processes personal data (CVs, videos) AND is high-risk AI. TalentFlow therefore needs both assessments.
✓ DPIA + FRIA required
A DPIA for privacy risks (GDPR) and a FRIA for broader fundamental rights like non-discrimination (AI Act).
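As a decision aid, here is a deliberately simplified helper that follows this chapter's rule of thumb (personal data means DPIA, high-risk AI means FRIA). The real tests in GDPR Article 35 and AI Act Article 27 have more conditions than two booleans:

```python
def required_assessments(processes_personal_data: bool,
                         high_risk_ai: bool) -> list:
    """Simplified: which impact assessments does a deployment need?"""
    needed = []
    if processes_personal_data:
        needed.append("DPIA")  # privacy risks under the GDPR
    if high_risk_ai:
        needed.append("FRIA")  # broader fundamental-rights risks
    return needed

# SmartRecruit Pro: CVs and videos (personal data) plus high-risk AI
print(required_assessments(True, True))  # ['DPIA', 'FRIA']
```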
What must users know?
Transparency is not optional.
Chapter 7
Transparency
Being honest about AI
Users have the right to know when they're dealing with AI. Transparency isn't optional; it's an obligation.
Episode 7 · SmartRecruit Pro
Being honest about AI
Inform candidates
TalentFlow must inform candidates that their CV is analyzed by AI and that video interviews are evaluated for facial expressions.
Disclose emotion recognition
SmartRecruit Pro analyzes facial expressions; this is emotion recognition and must be explicitly disclosed to candidates.
LinkedIn wants 22 years of data for AI training
"People shared information back then without foreseeing it would be used for AI training." (Monique Verdier, AP)
Exemptions
- Law enforcement (in specific cases)
- Spam/fraud detection (no direct user interaction)
- AI that's obvious from context (e.g., AI opponents in a video game)
What if you get it wrong?
The EU means business.
Chapter 8
The Consequences
What does it cost if you get it wrong?
The EU is serious. The fines under the AI Act are among the highest in European regulation. This is no paper tiger.
For prohibited AI practices
Up to €35 million or 7% of worldwide annual turnover
For other infringements
Up to €15 million or 3% of worldwide annual turnover
For misleading information
Up to €7.5 million or 1% of worldwide annual turnover
For comparison
GDPR fine: up to €20 million or 4% of turnover
AI Act fine: up to €35 million or 7% of turnover
Scenario
A tech company with €500 million turnover implements a social scoring system for customer assessment, without knowing this is prohibited.
What is the maximum fine?
Episode 8 · SmartRecruit Pro
What would TalentFlow risk?
TalentFlow has €50M turnover and uses SmartRecruit Pro for recruitment decisions. If they fail to comply with AI Act requirements for high-risk AI...
€1.5M
3% of €50M turnover
⚠️ Non-compliance can cost up to €1.5M
Even a medium-sized company faces significant fines. Compliance is an investment, not a cost.
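The arithmetic behind these numbers, as a worked sketch. Article 99 caps fines at the higher of a fixed amount and a share of worldwide annual turnover; for SMEs, the lower of the two applies. The sme flag below reproduces this chapter's €1.5M figure; whether a given company qualifies as an SME is its own legal question:

```python
# (fixed cap in EUR, share of worldwide annual turnover)
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # Art. 5 violations
    "other_infringement":  (15_000_000, 0.03),  # e.g. high-risk obligations
    "misleading_info":     (7_500_000,  0.01),
}

def max_fine(tier: str, turnover_eur: float, sme: bool = False) -> float:
    fixed, share = TIERS[tier]
    pick = min if sme else max  # SMEs: whichever is lower; others: higher
    return pick(fixed, share * turnover_eur)

# The social scoring scenario above: EUR 500M turnover, prohibited practice
print(max_fine("prohibited_practice", 500_000_000))          # 35000000
# TalentFlow treated as an SME: 3% of EUR 50M
print(max_fine("other_infringement", 50_000_000, sme=True))  # 1500000.0
```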
"The message is clear: compliance is cheaper than the consequences."
Which AI is simply banned?
The red line.
Chapter 9
The Forbidden Zone
AI that simply cannot exist
Some AI applications are so dangerous to fundamental rights that the EU prohibits them entirely. No exceptions, no compromises.
February 2025
When prohibited practices will be enforced
Social Scoring
Governments may not use AI to evaluate citizens based on behavior or personal characteristics.
Think of the Chinese system that gives citizens "points" for "good" behavior.
Manipulation of vulnerable people
AI that exploits children, elderly, or people with disabilities to influence behavior.
Toys that manipulate children into dangerous behavior.
Real-time biometric surveillance
Facial recognition in public spaces for law enforcement is prohibited, with very limited exceptions.
Mass surveillance with cameras in shopping streets.
Emotion recognition at work/school
AI that recognizes emotions in employees or students is not allowed.
Software that checks via webcam if you're "happily" working.
Biometric categorization
AI that categorizes people based on sensitive characteristics like race, religion, or sexual orientation.
Systems that try to determine religion based on photos.
Predictive policing on individuals
AI that predicts whether a specific individual will commit a crime.
Minority Report-like systems.
AP: Emotion recognition "dubious and risky"
"A high heart rate is not always a sign of fear, and a loud voice not always an expression of anger."
Systems assign more negative emotions to people with darker skin. Now completely banned in workplace and education.
What about ChatGPT?
Foundation models have special rules.
Chapter 10
The Big Models
ChatGPT, Claude, Gemini, and the special rules
General-Purpose AI is a separate category. These models are so powerful and versatile that they deserve their own regulation.
"With great power comes great responsibility."
August 2025
When GPAI rules take effect
GPAI with systemic risk
If a GPAI model used more than 10^25 FLOPs of computing power during training, it is automatically classified as posing "systemic risk" (a back-of-the-envelope check follows after this list). This means:
- Adversarial testing required
- Incident reporting
- Cybersecurity measures
- Energy efficiency documentation
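How big is 10^25 FLOPs? A back-of-the-envelope check, using the common rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs. The model sizes here are hypothetical:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act's presumption

def training_flops(params: float, tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformer training compute.
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# A 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs, below the line
print(presumed_systemic_risk(70e9, 15e12))   # False
# A 500B-parameter model trained on 20T tokens: 6e25 FLOPs, above the line
print(presumed_systemic_risk(500e9, 20e12))  # True
```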
Transparency for all GPAI
- ✓ Technical documentation
- ✓ Public summary of training data
- ✓ Copyright compliance
- ✓ Information for downstream providers
How do you prove compliance?
CE marking is proof.
Chapter 11
Conformity Assessment
The CE mark for AI
Just like products need a CE marking, high-risk AI systems must undergo conformity assessment before they can enter the market.
2026
Deadline for high-risk AI conformity assessment
Internal assessment
You assess yourself whether you meet the requirements
e.g., recruitment AI, credit scoring
External assessment
A notified body assesses your system
e.g., biometrics, critical infrastructure
Episode 9 · SmartRecruit Pro
Does SmartRecruit Pro need a CE marking?
SmartRecruit Pro is a high-risk AI system (recruitment). The provider RecruitTech Solutions must undergo conformity assessment.
🏷️ Yes, internal assessment suffices
Recruitment AI falls under Annex III and can be assessed internally; an external notified body is not required.
What must the provider do?
Room to experiment?
The AI Sandbox offers protection.
Chapter 12
The Sandbox
Innovation within boundaries
The AI Act isn't just restriction; it also creates space to experiment. The AI Regulatory Sandbox lets companies test innovative AI under the supervision of authorities.
1
Dutch AI Sandbox in preparation
What is an AI Sandbox?
A controlled environment where you can test innovative AI without immediately meeting all compliance requirements. The regulator guides you and provides feedback before you go to market.
Benefits
- ✓ Guidance from the regulator
- ✓ Faster time to market
- ✓ Lower compliance costs
- ✓ Direct feedback on your system
- ✓ Identify risks early
For whom?
Startups, scale-ups, but also large companies that want to test innovative AI. Especially interesting if you're working on high-risk AI systems.
🇳🇱 Netherlands
The Dutch Data Protection Authority is working on the Dutch AI Sandbox. Expected to be operational in 2025.
Do your people have the knowledge?
AI literacy becomes mandatory.
Chapter 13
AI Literacy
Knowledge is power, and now it's mandatory
The AI Act requires everyone working with AI to have sufficient knowledge. This is not a suggestion; it's a legal obligation.
February 2, 2025
The deadline for AI literacy
100%
Of employees working with AI must be AI literate
What is AI literacy?
Basic knowledge
Understanding AI concepts, capabilities and limitations
Practical skills
Ability to correctly operate and monitor AI systems
Compliance awareness
Knowledge of relevant laws and regulations
Ethical judgment
Recognizing bias, privacy risks and ethical dilemmas
By role
Scenario
An employee uses ChatGPT to analyze customer data without reporting it to the IT department.
What's the problem here?
AI without permission?
Shadow AI is a growing risk.
Chapter 14
Shadow AI
The invisible threat
Employees use ChatGPT, Claude and other AI tools, often without IT or management knowing. This is one of the biggest governance challenges.
75%
Of knowledge workers use AI tools at work
What can you do?
Scenario
An HR manager pastes 50 CVs into ChatGPT asking "select the best candidates". The CVs contain names, addresses and social security numbers.
What's the biggest risk here?
"Shadow AI isn't something you solve with a ban. It requires a culture change and smart governance."
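One practical building block of that governance: strip obvious personal data before any text leaves the organization for an external AI tool. A minimal sketch; the two patterns are illustrative, and real PII detection takes far more than a pair of regexes:

```python
import re

BSN_PATTERN = re.compile(r"\b\d{9}\b")           # Dutch citizen service number
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before external use."""
    text = BSN_PATTERN.sub("[BSN]", text)
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    return text

cv = "Jan Jansen, jan@example.com, BSN 123456782, 5 yrs Python"
print(redact(cv))  # Jan Jansen, [EMAIL], BSN [BSN], 5 yrs Python
```

Note what it doesn't catch: the name. Redaction helps, but policy and training do the heavy lifting.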
Who oversees all of this?
The supervisors are ready.
Chapter 15
Who Supervises?
The agencies and authorities behind the AI Act
The AI Act is no paper tiger. A completely new European enforcement system is becoming operational. These are the players who will soon knock on your door.
Epilogue
The Turning Point
August 2024. After years of negotiation, the EU AI Act becomes reality. The world's first comprehensive AI legislation is a fact.
2024
The year everything changed for AI in Europe
Epilogue
Your Story Begins Now
You now know the story of the EU AI Act: from birth to consequences, from risk levels to your own role.
What you've learned:
Next steps
The AI Act is not the end of the story; it's the beginning. A new chapter where responsible AI becomes the standard. Your chapter begins now.