Responsible AI Platform

Imagine...

You're applying for your dream job. An AI system evaluates your CV before any human ever sees it. Who protects you from a biased algorithm?

The Case

TalentFlow BV

A familiar scenario

Company: TalentFlow BV
Size: 250 employees
Industry: Technology & consultancy
Location: Amsterdam

๐Ÿ‘ฉโ€๐Ÿ’ผ

Sarah van der Berg

HR Director

Today 09:14

New recruitment system - approval needed

Hi team, I had a demo of SmartRecruit Pro and I'm excited! They claim 70% time savings...

Chapter 1

What is AI?

The definition that determines everything

Before you know which rules apply, you need to know if you're dealing with an AI system at all. The EU AI Act defines this in 7 elements.

Article 3(1) AI Act:

"AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The 7 Elements


The key point

Inference is the essential condition. Without the ability to infer how to generate outputs, it's not an AI system.

What is NOT AI?

✗ Mathematical optimization tools
✗ Basic data processing (spreadsheets, dashboards)
✗ Classical heuristics (fixed rules)
✗ Simple statistics (calculating averages)
🔀

Want to understand why some software falls under the AI Act and other software doesn't?

Episode 1 · SmartRecruit Pro

Is SmartRecruit Pro an AI system?

Let's apply the 7 elements:

✓ Machine-based: software running on servers
✓ Autonomy: ranks candidates independently
✓ Adaptiveness: learns from historical data
✓ Objectives: "find best candidates"
✓ Inference: infers from a CV who is suitable
✓ Outputs: rankings and decisions
✓ Environmental impact: determines who gets invited

✅ Yes, SmartRecruit Pro is an AI system

Now that we know it's AI, we need to determine: what risk level? →

An AI system must be able to infer from input how to generate outputs. Does it only follow fixed rules? Then it's regular software.
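To make the test concrete, here is a minimal sketch in Python of walking a system through the seven elements, with inference as the gate. The field names are my own shorthand for illustration, not official terminology:

```python
from dataclasses import dataclass

# Illustrative checklist mirroring the 7 elements of Article 3(1).
@dataclass
class SystemProfile:
    machine_based: bool           # runs as software on hardware
    autonomy: bool                # operates with some level of independence
    adaptiveness: bool            # "may exhibit" -> noted, but not required
    has_objectives: bool          # explicit or implicit objectives
    infers_outputs: bool          # infers from input how to generate outputs
    produces_outputs: bool        # predictions, content, recommendations, decisions
    influences_environment: bool  # affects physical or virtual environments

def is_ai_system(p: SystemProfile) -> bool:
    """Inference is the essential condition; fixed rules only = regular software."""
    if not p.infers_outputs:
        return False
    return all([p.machine_based, p.autonomy, p.has_objectives,
                p.produces_outputs, p.influences_environment])

# SmartRecruit Pro, as assessed in Episode 1: all seven boxes ticked.
print(is_ai_system(SystemProfile(*[True] * 7)))  # True
```

Note that adaptiveness is deliberately not required in the check: Article 3(1) says a system "may exhibit" it after deployment.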

📊

How strict are the rules?

The answer lies in a pyramid.


Chapter 2

The Pyramid

Not all AI is equal


"The more impact on human lives, the more responsibility."

Episode 2 · SmartRecruit Pro

Where does SmartRecruit Pro fall?

SmartRecruit Pro falls under High Risk because:

• It makes decisions about employment
• It ranks candidates and influences who gets hired
• It analyzes facial expressions (biometric data)
• Recruitment AI is explicitly listed in Annex III of the AI Act

โš ๏ธ High Risk AI

This means TalentFlow must comply with strict requirements: CE marking, conformity assessment, human oversight, and more.

But what is TalentFlow's exact role? Provider or deployer? →

The higher the risk, the stricter the rules.
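As a rough sketch, the pyramid can be expressed as four tiers with illustrative examples. The tier names follow the AI Act; the example mapping is my own simplification, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # banned outright
    HIGH = "high risk"                    # strict requirements (Annex III)
    LIMITED = "transparency obligations"  # e.g. disclose that it's AI
    MINIMAL = "minimal risk"              # no extra obligations

# Illustrative mapping only; real classification requires legal analysis.
EXAMPLES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "CV screening and candidate ranking": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLES["CV screening and candidate ranking"].value)  # high risk
```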

👥

But who bears responsibility?

That depends on your role.

Chapter 3

The Players

Which role do you play?

The AI Act assigns different roles. Each role carries its own responsibilities. The question is: where do you stand?

4 different roles in the AI value chain

Episode 3 · SmartRecruit Pro

What is TalentFlow's role?

TalentFlow buys SmartRecruit Pro from RecruitTech Solutions (a US company). TalentFlow doesn't develop the software themselves, but will use it for recruitment decisions.

👤 TalentFlow is a Deployer

TalentFlow is the party that "deploys" the AI system within their organization. RecruitTech Solutions is the Provider (developer), and since they're from outside the EU, an Importer is also needed.

What must TalentFlow do?

→ Ensure human oversight in recruitment decisions
→ Inform candidates that AI is being used
→ Keep logs of AI decisions
→ Monitor for discrimination and bias

Now that we know TalentFlow is a deployer, what concrete obligations apply? →

๐Ÿญ

Where is the AI Act strictest?

8 critical sectors.

Chapter 4

The Sectors

The AI Act identifies 8 areas where AI carries the most risks.

๐Ÿ‘๏ธ

Biometrics

Your face, your identity

Facial recognition, fingerprint analysis, emotion detection. AI that recognizes you.

Facial recognition
Emotion detection
Behavior analysis
⚡

Critical Infrastructure

What we all depend on

Energy, water, transport. An AI error affects everyone.

Energy grid
Traffic systems
Water supply
🎓

Education

The future of our children

Who gets to study? How are students evaluated?

Admission algorithms
Exam grading
Plagiarism detection
💼

Employment

Who gets the job?

CV screening, video interviews, performance monitoring.

CV screening
Video interviews
Performance monitoring
๐Ÿ›๏ธ

Essential Services

Access to the social safety net

Benefits, subsidies, social housing. Who gets help?

Benefits
Subsidies
Social housing
⚖️

Law Enforcement

Freedom versus security

Predictive policing, risk profiling, evidence analysis.

Risk profiling
Evidence analysis
Lie detection
🛂

Migration

Who gets in?

Visa applications, asylum assessments, border controls.

Visa applications
Asylum assessments
Border control
🗳️

Democracy

The foundation of our society

AI that influences elections or court cases.

Election influence
Legal AI
Polling analysis


Episode 3 · SmartRecruit Pro

SmartRecruit Pro's sector

💼

Employment

SmartRecruit Pro falls in sector 4: Employment. CV screening, video interviews, and candidate ranking are explicitly high-risk applications.

โ—BREAKING โ€” October 2025

Chatbots: 55% of voting advice to just 2 parties

"The vacuum cleaner effect: profiles are sucked toward the extremes, regardless of actual preferences."

– AP Report 2025
The impact

In one chatbot, 80% of all advice went to GroenLinks-PvdA or PVV. Transparency and verifiability were completely absent.

๐Ÿ‘๏ธ

How does the human stay in control?

Human oversight is essential.

Chapter 5

Human Oversight

Humans remain in control

High-risk AI cannot autonomously decide about human lives. A human must always be watching โ€” and able to intervene.


Levels of oversight

1. Human-in-the-loop: human approves every AI decision
2. Human-on-the-loop: human monitors and can intervene
3. Human-in-command: human decides when AI is deployed
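A minimal sketch of how the three levels differ in practice, built around a hypothetical `decide` gate: only human-in-the-loop blocks each individual decision on a human sign-off.

```python
from enum import Enum
from typing import Callable

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "approves every decision"
    HUMAN_ON_THE_LOOP = "monitors and can intervene"
    HUMAN_IN_COMMAND = "decides when AI is deployed at all"

def decide(ai_recommendation: str, level: Oversight,
           human_review: Callable[[str], str]) -> str:
    """Route an AI recommendation through the chosen oversight level."""
    if level is Oversight.HUMAN_IN_THE_LOOP:
        return human_review(ai_recommendation)  # human signs off on each decision
    # On-the-loop / in-command: the decision passes automatically, but must
    # stay logged, monitored, and overridable after the fact.
    return ai_recommendation

# A recruiter overrides the AI and invites a "lower ranked" candidate anyway.
print(decide("reject", Oversight.HUMAN_IN_THE_LOOP, lambda _: "invite"))  # invite
```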
Episode 4 · SmartRecruit Pro

Oversight at TalentFlow

🧠

Recruiters must understand

Why does SmartRecruit Pro rank a candidate higher? Recruiters must be able to interpret the AI output.

✋

Override capability

TalentFlow must be able to deviate from AI rankings. A recruiter can decide to invite a "lower ranked" candidate anyway.

🔀

Want to discover why "keeping humans in control" is easier said than done?

Scenario

A bank uses AI to assess mortgage applications. The AI rejects an application due to "insufficient creditworthiness".

What is the minimum required oversight?

📋

Which assessment do you need?

DPIA or FRIA: it matters.

Chapter 6

DPIA vs FRIA

Which impact assessment do you need?

GDPR requires a DPIA. The AI Act requires a FRIA. But when do you need which, and can you combine them?

DPIA

Data Protection Impact Assessment

Law: GDPR
Focus: Privacy and personal data
When: High risk for individual privacy
Who: Data controller
Deadline: Already mandatory since 2018

Overlap

Both assessments look at risks for individuals. You can often combine them, but the AI Act sets specific requirements for how you assess human rights.

Discrimination and bias · Transparency for data subjects · Human control · Safety and reliability

When which?

Only personal data → DPIA
High-risk AI without personal data → FRIA
High-risk AI with personal data → Both!
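The same decision table as a tiny sketch (a simplification, not legal advice):

```python
def required_assessments(high_risk_ai: bool, personal_data: bool) -> list[str]:
    """Mirrors the 'When which?' table above."""
    needed = []
    if personal_data:
        needed.append("DPIA")  # GDPR
    if high_risk_ai:
        needed.append("FRIA")  # AI Act
    return needed

# SmartRecruit Pro: high-risk AI AND personal data (CVs, videos).
print(required_assessments(high_risk_ai=True, personal_data=True))  # ['DPIA', 'FRIA']
```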
Episode 6 · SmartRecruit Pro

Which assessment for TalentFlow?

📋

Double obligation

SmartRecruit Pro processes personal data (CVs, videos) AND is high-risk AI. TalentFlow therefore needs both assessments.

✅ DPIA + FRIA required

A DPIA for privacy risks (GDPR) and a FRIA for broader fundamental rights like non-discrimination (AI Act).

🔍

What must users know?

Transparency is not optional.


Chapter 7

Transparency

Being honest about AI

Users have the right to know when they're dealing with AI. Transparency isn't optional โ€” it's an obligation.

Episode 5 · SmartRecruit Pro

Being honest about AI

📝

Inform candidates

TalentFlow must inform candidates that their CV is analyzed by AI and that video interviews are evaluated for facial expressions.

🎯

Disclose emotion recognition

SmartRecruit Pro analyzes facial expressions โ€” this is emotion recognition and must be explicitly disclosed to candidates.

📰 Real Case
AP Warning • September 2025

LinkedIn wants 22 years of data for AI training

"People shared information back then without foreseeing it would be used for AI training." – Monique Verdier, AP

Exemptions

  • Law enforcement (in specific cases)
  • Spam/fraud detection (no direct user interaction)
  • AI that's obvious (e.g., a gaming console)
⚠️

What if you get it wrong?

The EU means business.

Chapter 8

The Consequences

What does it cost if you get it wrong?

The EU is serious. The fines under the AI Act are among the highest in European regulation. This is no paper tiger.

€35M or 7% of global turnover

For prohibited AI practices

€15M or 3% of global turnover

For other infringements

€7.5M or 1% of global turnover

For misleading information

For comparison

4%

Maximum GDPR fine

vs

7%

Maximum AI Act fine

Scenario

A tech company with โ‚ฌ500 million turnover implements a social scoring system for customer assessment โ€” without knowing this is prohibited.

What is the maximum fine?
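Working the scenario through: for most companies each tier applies as the fixed amount or the percentage of global turnover, whichever is higher (for SMEs, the lower of the two applies). A quick sketch of the arithmetic:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # Non-SME rule: the higher of the fixed cap or the turnover share.
    return max(fixed_cap_eur, pct * turnover_eur)

# Scenario: social scoring is a prohibited practice -> top tier.
# EUR 35M or 7% of global turnover, whichever is higher.
fine = max_fine(turnover_eur=500_000_000, fixed_cap_eur=35_000_000, pct=0.07)
print(f"EUR {fine:,.0f}")  # EUR 35,000,000 (7% of 500M happens to equal 35M exactly)
```

Not knowing the practice was prohibited does not change the answer: €35 million.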

Episode 8 · SmartRecruit Pro

What would TalentFlow risk?

TalentFlow has €50M turnover and uses SmartRecruit Pro for recruitment decisions. If they fail to comply with AI Act requirements for high-risk AI...

€1.5M

3% of €50M turnover

✗ No human oversight
✗ Candidates not informed
✗ No FRIA conducted

โš ๏ธ Non-compliance can cost up to โ‚ฌ1.5M

Even a medium-sized company faces significant fines. Compliance is an investment, not a cost.

"The message is clear: compliance is cheaper than the consequences."

🚫

Which AI is simply banned?

The red line.

Chapter 9

The Forbidden Zone

AI that simply cannot exist

Some AI applications are so dangerous to fundamental rights that the EU prohibits them entirely. No exceptions, no compromises.

February 2025

When prohibited practices will be enforced

⚖️

Social Scoring

Governments may not use AI to evaluate citizens based on behavior or personal characteristics.

Think of the Chinese system that gives citizens "points" for "good" behavior.

🛡️

Manipulation of vulnerable people

AI that exploits children, elderly, or people with disabilities to influence behavior.

Toys that manipulate children into dangerous behavior.

📷

Real-time biometric surveillance

Facial recognition in public spaces for law enforcement is prohibited, with very limited exceptions.

Mass surveillance with cameras in shopping streets.

😶

Emotion recognition at work/school

AI that recognizes emotions in employees or students is not allowed.

Software that checks via webcam if you're "happily" working.

🚫

Biometric categorization

AI that categorizes people based on sensitive characteristics like race, religion, or sexual orientation.

Systems that try to determine religion based on photos.

🔮

Predictive policing on individuals

AI that predicts whether a specific individual will commit a crime.

Minority Report-like systems.

โ—BREAKING โ€” July 2025

AP: Emotion recognition "dubious and risky"

"A high heart rate is not always a sign of fear, and a loud voice not always an expression of anger."

– AP Chairman
The impact

Systems assign more negative emotions to people with darker skin. Now completely banned in workplace and education.

🤖

What about ChatGPT?

Foundation models have special rules.

Chapter 10

The Big Models

ChatGPT, Claude, Gemini โ€” and the special rules

General-Purpose AI is a separate category. These models are so powerful and versatile that they deserve their own regulation.

"With great power comes great responsibility."

August 2025

When GPAI rules take effect

GPAI with systemic risk

If a GPAI model used more than 10^25 floating-point operations (FLOPs) of compute during training, it is automatically classified as having "systemic risk"; a minimal version of this check is sketched after the list below. This means:

• Adversarial testing required
• Incident reporting
• Cybersecurity measures
• Energy efficiency documentation
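The threshold check itself is simple arithmetic (the example figures are illustrative, not actual model statistics):

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the AI Act

def is_systemic_risk_gpai(training_flops: float) -> bool:
    """Above the threshold, a GPAI model is presumed to carry systemic risk."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

print(is_systemic_risk_gpai(5e25))  # True: an illustrative frontier-scale run
print(is_systemic_risk_gpai(1e23))  # False: a smaller model
```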

Transparency for all GPAI

✓ Technical documentation public
✓ Training data summary
✓ Copyright compliance
✓ Downstream provider information
✓

How do you prove compliance?

CE marking is proof.

Chapter 11

Conformity Assessment

The CE mark for AI

Just like products need a CE marking, high-risk AI systems must undergo conformity assessment before they can enter the market.

2026

Deadline for high-risk AI conformity assessment

Internal assessment

You assess yourself whether you meet the requirements

e.g., recruitment AI, credit scoring

External assessment

A notified body assesses your system

e.g., biometrics, critical infrastructure

Episode 11 · SmartRecruit Pro

Does SmartRecruit Pro need a CE marking?

SmartRecruit Pro is a high-risk AI system (recruitment). The provider RecruitTech Solutions must undergo conformity assessment.

๐Ÿท๏ธ Yes, internal assessment suffices

Recruitment AI falls under Annex III and can be assessed internally. An external notified body is not required.

What must the provider do?

01 Conduct risk analysis
02 Create technical documentation
03 Implement quality management system
04 Issue EU Declaration of Conformity
For TalentFlow: verify that SmartRecruit Pro carries the CE marking before deploying the system.
🧪

Room to experiment?

The AI Sandbox offers protection.


Chapter 12

The Sandbox

Innovation within boundaries

The AI Act isn't just restriction โ€” it also creates space to experiment. The AI Regulatory Sandbox lets companies test innovative AI under supervision of authorities.

Dutch AI Sandbox in preparation

What is an AI Sandbox?

A controlled environment where you can test innovative AI without immediately meeting all compliance requirements. The regulator guides you and provides feedback before you go to market.

Benefits

✓ Guidance from the regulator
✓ Faster time to market
✓ Lower compliance costs
✓ Direct feedback on your system
✓ Identify risks early

For whom?

Startups, scale-ups, but also large companies that want to test innovative AI. Especially interesting if you're working on high-risk AI systems.

🇳🇱 Netherlands

The Dutch Data Protection Authority is working on the Dutch AI Sandbox. Expected to be operational in 2025.

🎓

Do your people have the knowledge?

AI literacy becomes mandatory.

Chapter 13

AI Literacy

Knowledge is power โ€” and now mandatory

The AI Act requires everyone working with AI to have sufficient knowledge. This is not a suggestion โ€” it's a legal obligation.

February 2, 2025

The deadline for AI literacy

100%

Of employees working with AI must be AI literate

What is AI literacy?

Basic knowledge

Understanding AI concepts, capabilities and limitations

Practical skills

Ability to correctly operate and monitor AI systems

Compliance awareness

Knowledge of relevant laws and regulations

Ethical judgment

Recognizing bias, privacy risks and ethical dilemmas

By role

01 Management: strategic understanding of AI governance and risks
02 Developers: technical knowledge of AI systems and compliance requirements
03 End users: basic knowledge and responsible use
04 HR & Procurement: AI clauses in contracts and recruitment

Scenario

An employee uses ChatGPT to analyze customer data without reporting it to the IT department.

What's the problem here?

👤

AI without permission?

Shadow AI is a growing risk.

Chapter 14

Shadow AI

The invisible threat

Employees use ChatGPT, Claude and other AI tools โ€” often without IT or management knowing. This is one of the biggest governance challenges.

75%

Of knowledge workers use AI tools at work

What can you do?

01 Inventory: map out which AI tools employees use
02 Policy: create clear guidelines for AI use
03 Approved tools: offer safe alternatives to popular AI tools
04 Training: ensure employees understand the risks

Scenario

An HR manager pastes 50 CVs into ChatGPT asking "select the best candidates". The CVs contain names, addresses and social security numbers.

What's the biggest risk here?
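One practical mitigation for exactly this scenario is a pre-flight redaction step before any text leaves the organization. A deliberately naive sketch; the regex patterns are illustrative assumptions, and a real data-loss-prevention filter needs far more than this:

```python
import re

# Naive, illustrative patterns only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "nine_digit_id": re.compile(r"\b\d{9}\b"),  # e.g. a national ID number
}

def redact(text: str) -> str:
    """Replace likely personal data before text is sent to an external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

cv = "Jan Jansen, jan@example.com, ID 123456782"
print(redact(cv))
# Jan Jansen, [email redacted], ID [nine_digit_id redacted]
```

Note what the sketch deliberately does not catch (names, addresses, free-text context): redaction reduces but never eliminates the risk, which is why policy and training sit above tooling in the list.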

"Shadow AI isn't something you solve with a ban. It requires a culture change and smart governance."
๐Ÿ›๏ธ

Who oversees all of this?

The supervisors are ready.

Chapter 15

Who Supervises?

The agencies and authorities behind the AI Act

The AI Act is no paper tiger. A completely new European enforcement system is becoming operational. These are the players who will soon knock on your door.

Epilogue

The Turning Point

August 2024. After years of negotiation, the EU AI Act becomes reality. The world's first comprehensive AI legislation is a fact.

2024

The year everything changed for AI in Europe

Epilogue

Your Story Begins Now

You now know the story of the EU AI Act โ€” from birth to consequences, from risk levels to your own role.

What you've learned:

✓ The timeline of entry into force
✓ The four risk categories
✓ The roles in the AI chain
✓ The high-risk sectors
✓ The consequences of non-compliance
✓ The special rules for GPAI

The AI Act is not the end of a story โ€” it's the beginning. A new chapter where responsible AI becomes the standard. Your chapter begins now.