
The Story of TalentFlow

How a recruitment startup discovered their "smart tool" fell under strict AI legislation β€” and what they did about it

Fictional scenario β€” based on realistic situations

01

The Trigger

How it started

πŸ“§

TalentFlow had been using SmartRecruit Pro for CV screening for two years. The tool had processed thousands of applicants. Nobody had thought about the AI Act β€” until now.

With the AI Act deadline approaching, important questions remained open. What if they weren't compliant? What did that mean for their clients? And more importantly: what did it mean for the applicants whose careers were being influenced by their algorithm?

The email from legal was short but alarming: "We need to talk about your AI tool."
02

The Questions

What did they need to find out?

Question 1

Is SmartRecruit Pro actually an AI system under the law?

This was the first question the team asked themselves. After all, SmartRecruit Pro was "just" software β€” at least, that's how they had always seen it. But the AI Act uses a broad definition. The team dove into the legal text and discovered that any machine-based system that generates output influencing decisions β€” in a way that goes beyond simple if-then rules β€” can fall under the definition.

πŸ’‘ The insight

SmartRecruit Pro analyzed CVs with natural language processing, ranked candidates based on pattern recognition in historical data, and made recommendations that HR staff often adopted without much adjustment. It wasn't a simple search function. It was AI.

🌍 Why this matters

Many organizations underestimate what falls under the AI Act definition. A system doesn't need "artificial intelligence" in its name to be covered. The question is: does the system make decisions or influence decisions in a way that isn't fully predictable or explainable by simple rules?
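
To make that distinction concrete, here is a minimal, purely hypothetical sketch in Python (not based on SmartRecruit Pro's actual code): a hand-written if-then rule that anyone can predict from the source alone, next to a ranking function whose weights were learned from historical data, so its behaviour depends on patterns in that data rather than on inspectable rules.

```python
# Hypothetical illustration only; function names and features are invented.

# A simple if-then rule: fully predictable and explainable from the code itself.
def rule_based_filter(candidate: dict) -> bool:
    """Keep candidates with at least three years of experience."""
    return candidate["years_experience"] >= 3

# A learned ranking: the weights were fitted on historical hiring data,
# so the output reflects patterns in that data, not hand-written rules.
def learned_ranker(features: list[float], learned_weights: list[float]) -> float:
    """Score a candidate as a weighted sum of numeric features."""
    return sum(w * x for w, x in zip(learned_weights, features))

print(rule_based_filter({"years_experience": 5}))        # True
print(learned_ranker([5.0, 1.0, 0.7], [0.4, 0.2, 1.1]))  # 2.97
```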

Question 2

What risk level do we have?

The AI Act works with a risk pyramid: some AI applications are prohibited, others are high-risk and must meet strict requirements, and still others carry only limited obligations. TalentFlow had to figure out where they fell. They started by studying Annex III of the AI Act, the list of high-risk applications.

πŸ’‘ The insight

There it was, in black and white: AI systems used for "recruitment or selection of natural persons, in particular for placing targeted job advertisements, analyzing and filtering job applications, and evaluating candidates" are high-risk. No room for interpretation.

🌍 Why this matters

The legislator deliberately chose to classify recruitment AI as high-risk. The reasoning? Employment decisions have far-reaching consequences for people's lives. An algorithm that determines who gets invited for an interview influences careers, income, and future prospects. That impact justifies strict requirements.

Question 3

Who is responsible β€” us or the vendor?

TalentFlow hadn't developed SmartRecruit Pro themselves. They had purchased it from RecruitTech Solutions, a US company. The team's first reaction was: "Then this is their problem, right?" But the AI Act works differently. The law distinguishes roles in the AI value chain, and each role has its own obligations.

πŸ’‘ The insight

TalentFlow was not a "provider" (developer), but a "deployer" (user). As a deployer of a high-risk AI system, they had their own set of obligations. They had to ensure correct use, set up human oversight, and conduct a fundamental rights impact assessment. The provider had other obligations β€” like CE marking and technical documentation β€” but that didn't exempt TalentFlow from their own responsibilities.

🌍 Why this matters

This is a common misconception. Organizations think they're "safe" because they buy AI rather than develop it themselves. But the AI Act is deliberately designed so that both sides of the chain are responsible. As a deployer, you can't hide behind the provider β€” and vice versa. Collaboration is essential.

Question 4

What do we need to do NOW?

Knowing they were using a high-risk system for which they had deployer responsibility, the next question was: what now? The team compiled an overview of deployer obligations in the AI Act. It was a substantial list, but not insurmountable.

πŸ’‘ The insight

The core deployer obligations for high-risk AI turned out to be: ensuring the system is used correctly according to provider instructions, organizing human oversight so decisions are never fully automated, informing affected persons (applicants) about AI use, and conducting a fundamental rights impact assessment to map risks.

🌍 Why this matters

The good news for many organizations is that deployer obligations, while serious, are less extensive than provider obligations. You don't have to re-document the entire system β€” that's the provider's job. But you do have to demonstrate that you're handling the system consciously and responsibly.
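
For teams that want to track these obligations explicitly, a simple checklist structure can help. The sketch below is hypothetical: the field names, owners, and wording are invented for illustration and only paraphrase the obligations described above.

```python
# Hypothetical sketch of a deployer obligations checklist; names and owners are invented.
from dataclasses import dataclass

@dataclass
class Obligation:
    name: str
    owner: str
    evidence: str = ""   # reference to the document or process that proves it
    done: bool = False

deployer_checklist = [
    Obligation("Use the system according to the provider's instructions", owner="HR Operations"),
    Obligation("Organize human oversight of every screening decision", owner="HR Operations"),
    Obligation("Inform applicants that AI is used in screening", owner="Legal"),
    Obligation("Conduct a fundamental rights impact assessment", owner="Legal"),
]

def open_items(checklist: list[Obligation]) -> list[str]:
    """List the obligations that still need work."""
    return [o.name for o in checklist if not o.done]

print(open_items(deployer_checklist))
```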

03

The Journey

Step by step to compliance

Step 1 of 6
πŸ’‘

The wake-up call

It started with a question from a client. An HR director wanted to know if their recruitment AI complied with the new European legislation. Lisa, CEO of TalentFlow, realized she didn't know the answer. That night she lay awake. What if their entire business model suddenly wasn't allowed anymore?

Step 2 of 6
πŸ”

The inventory

The team gathered around the table. What exactly did SmartRecruit Pro do? They created a complete mapping: data in, processing, data out. They spoke with developers, HR consultants, and end users. The result was sobering: the system did much more than they thought. And it had much more impact than they realized.

Step 3 of 6
βš–οΈ

The classification

After studying the AI Act, it became clear: recruitment AI falls under Annex III. High risk. The law treats AI that influences employment decisions as potentially high-impact, and rightly so. The team realized this wasn't a formality. The law recognized what they themselves had perhaps underestimated: that their tool influenced lives.

Step 4 of 6
πŸ‘₯

The role determination

TalentFlow purchased SmartRecruit Pro from RecruitTech Solutions, a US company. That made them a "deployer" β€” the party that uses the system in practice. With their own responsibilities. The team had to work through the deployer obligations from the AI Act. Some were already covered. Others were completely new.

Step 5 of 6
πŸ“ž

The conversation with the vendor

A crucial call to RecruitTech. Did their system have the right CE marking? Was there technical documentation? The answer was: "We're working on it." Not ideal. TalentFlow decided to document in writing what information they needed and when. They didn't want to depend on vague promises.

Step 6 of 6
πŸ“‹

The fundamental rights impact assessment

As a deployer, TalentFlow had to conduct a FRIA. What rights of applicants could be at stake? They mapped the risks. The right to non-discrimination: could the algorithm systematically disadvantage certain groups? The right to privacy: was more data being processed than necessary? The right to a fair process: did rejected candidates have a way to object?
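
A FRIA can be recorded in many ways; the sketch below shows one simple, hypothetical format for such a risk register. The rights come from the story above, while the example risks and mitigations are invented for illustration.

```python
# Hypothetical FRIA risk register; the mitigations are invented examples.
fria_register = [
    {
        "right": "Non-discrimination",
        "risk": "The algorithm may systematically disadvantage certain groups",
        "mitigation": "Periodically compare screening outcomes across applicant groups",
    },
    {
        "right": "Privacy",
        "risk": "More applicant data may be processed than necessary",
        "mitigation": "Review which CV fields the tool actually needs and drop the rest",
    },
    {
        "right": "Fair process",
        "risk": "Rejected candidates may have no way to object",
        "mitigation": "Offer a contact point and a human re-review on request",
    },
]

for item in fria_register:
    print(f"{item['right']}: {item['mitigation']}")
```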

04

The Obstacles

What went wrong?

Obstacle 1

βœ— Challenge

The US vendor was slow to respond to conformity questions

↓

βœ“ Solution

Documenting expectations in writing and setting a clear deadline. Escalating to management level when needed.

Obstacle 2

βœ— Challenge

Some HR staff saw the extra checkpoints as bureaucracy

↓

βœ“ Solution

Explaining that it protects them: when a decision is challenged, they can demonstrate they acted carefully.

Obstacle 3

βœ— Challenge

There were no historical logs of why the AI made certain decisions

↓

βœ“ Solution

Working with the vendor to implement new logging going forward. For existing decisions, documenting what was known at the time.
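
What such logging might look like is sketched below. This is a hypothetical example: the field names, the file format, and the function are invented for illustration, and a real integration would depend on what the vendor's system actually exposes.

```python
# Hypothetical sketch of a screening decision log; field names are invented.
import json
from datetime import datetime, timezone

def log_screening_decision(path: str, applicant_id: str, ai_score: float,
                           ai_recommendation: str, reviewer: str,
                           final_decision: str, override_reason: str = "") -> None:
    """Append one screening decision, including the human reviewer's part, to a JSON-lines file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,          # pseudonymised ID, not a name
        "ai_score": ai_score,
        "ai_recommendation": ai_recommendation,
        "reviewer": reviewer,
        "final_decision": final_decision,
        "override_reason": override_reason,    # filled in when the human departs from the AI
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: the reviewer disagrees with the AI recommendation and records why.
log_screening_decision("screening_log.jsonl", "APP-0042", 0.31, "reject",
                       reviewer="hr.consultant@talentflow.example",
                       final_decision="invite",
                       override_reason="Relevant freelance experience not captured by the CV parser")
```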

We thought the AI Act would be a burden. It became an opportunity to ask ourselves: are we really treating candidates fairly? The answer wasn't always yes. Now it is.
β€” Lisa van der Berg, CEO, TalentFlow
05

The Lessons

What can we learn from this?

Lesson 1 / 4
🧠

Start with understanding, not panic

The AI Act sounds overwhelming, but compliance starts with simple questions: what AI do we use, and what impact does it have on people? From that understanding, the next steps follow logically.

Lesson 2 / 4
πŸ”—

Know your role in the chain

As a deployer you have different obligations than the provider. But ignoring them is not an option: you're responsible for how you use AI, regardless of who built it.

Lesson 3 / 4
πŸ‘οΈ

Human oversight is not a formality

The law doesn't ask for a rubber-stamp signature on AI decisions. It asks for genuine human consideration. That means people who understand what they're evaluating and have the authority to override the AI.

Lesson 4 / 4
πŸ’¬

Transparency builds trust

Candidates appreciate knowing how decisions are made. Openness about AI use can be a plus β€” it shows you're working carefully.

Does this story sound familiar?

Discover step by step which parts of the AI Act apply to your situation.