Responsible AI Platform
👁️ Biometrics

The Story of SecureAccess

When facial recognition doesn't work equally well for everyone

Fictional scenario, based on realistic situations

01

The Trigger

How it started

📧

SecureAccess supplied FaceGate Pro to dozens of organizations. The system was fast and reliable, or at least that's what they thought, until complaints came in from specific user groups who were rejected more often or made to wait.

Biometric identification is explicitly high-risk under the AI Act. And worse: the system turned out to work better for some skin colors than others. This wasn't just a compliance problem; it was a discrimination risk.

"
"Why am I always stopped at the gate while my colleagues just walk through?" The complaint from an employee revealed a bigger problem.
02

The Questions

What did they need to find out?

Question 1

Why is biometrics high-risk under the AI Act?

The team dove into Annex III. Biometric identification systems are explicitly listed as high-risk, and some forms are even prohibited outright under Article 5: real-time remote biometric identification by law enforcement in publicly accessible spaces.

💡 The insight

The AI Act recognizes that biometrics is uniquely sensitive. You can't change your face like a password. Errors in biometric systems can lead to unjust access denial, discrimination, or worse. That's why strict requirements apply for accuracy, bias testing, and transparency.

🌍 Why this matters

Research has repeatedly shown that facial recognition performs worse for people with darker skin and for women. The AI Act codifies the obligation to test for and mitigate this kind of bias.

Question 2

How do you discover bias in facial recognition?

SecureAccess had never done systematic bias testing. After the complaints, they analyzed their false rejection rates segmented by demographic characteristics. The results were sobering.

💡 The insight

The system had a false rejection rate of 1% for light skin tones, but 8% for dark skin tones. For women with head coverings, it was even higher. This wasn't a bug; it was a fundamental training data problem.

🌍 Why this matters

Bias in AI is often not intentional, but a reflection of skewed training data. If your model is mainly trained on photos of white men, it will work less well for others. The AI Act requires representative training data.
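
To make that concrete: below is a minimal sketch of such a segmented analysis, assuming an access log with one row per verification attempt by an authorised user. The column names, groups, and numbers are illustrative, not SecureAccess data.

```python
# Minimal sketch: false rejection rate (FRR) segmented by demographic group.
# Assumes every logged attempt comes from a genuine (authorised) user, so any
# rejection counts as a false rejection. Column names are hypothetical.
import pandas as pd

def false_rejection_rates(log: pd.DataFrame) -> pd.DataFrame:
    """Attempts, acceptances and FRR per group, worst-performing group first."""
    per_group = log.groupby("group")["accepted"].agg(
        attempts="count", accepted_count="sum"
    )
    per_group["frr"] = 1 - per_group["accepted_count"] / per_group["attempts"]
    return per_group.sort_values("frr", ascending=False)

# Illustrative log: group B is rejected far more often than group A.
log = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "accepted": [True] * 99 + [False] * 1 + [True] * 92 + [False] * 8,
})
print(false_rejection_rates(log))  # FRR 1% for A, 8% for B
```

Segmented this way, a 1% versus 8% gap that disappears in the aggregate accuracy number becomes impossible to miss.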

Question 3

Can we make the system fair for everyone?

The team got to work: collect more diverse training data, retrain the model. But how do you measure "fair"? And is perfect equality achievable?

💡 The insight

They adopted an "equalized odds" approach: equal false positive and false negative rates across demographic groups. It took time and money, but the result was a system that worked equally reliably for everyone, within acceptable margins.

🌍 Why this matters

Fairness in AI is an active research field. There are multiple definitions, and some are mathematically incompatible: you generally cannot satisfy them all at once. The AI Act doesn't prescribe a specific definition, but it does require that you monitor and mitigate bias. Document your choices.
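
As a rough illustration of what "equalized odds" means for a verification system, the sketch below compares false rejection and false acceptance rates across groups. The group names and numbers are illustrative, and the AI Act does not prescribe this specific metric; it is one possible way to operationalise the requirement.

```python
# Sketch of an equalized-odds comparison for face verification.
# Equalized odds asks that error rates be (roughly) equal across groups: here the
# false rejection rate (genuine users wrongly rejected) and the false acceptance
# rate (impostors wrongly accepted). All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class GroupErrorRates:
    false_rejection_rate: float   # errors against genuine attempts
    false_acceptance_rate: float  # errors against impostor attempts

def equalized_odds_gaps(per_group: dict[str, GroupErrorRates]) -> dict[str, float]:
    """Largest between-group difference per error rate (0.0 = perfectly equal)."""
    frrs = [g.false_rejection_rate for g in per_group.values()]
    fars = [g.false_acceptance_rate for g in per_group.values()]
    return {"frr_gap": max(frrs) - min(frrs), "far_gap": max(fars) - min(fars)}

before = {
    "light_skin": GroupErrorRates(0.010, 0.001),
    "dark_skin":  GroupErrorRates(0.080, 0.001),
}
after = {
    "light_skin": GroupErrorRates(0.012, 0.001),
    "dark_skin":  GroupErrorRates(0.015, 0.001),
}
print(equalized_odds_gaps(before))  # frr_gap ≈ 0.07: clearly unequal
print(equalized_odds_gaps(after))   # frr_gap ≈ 0.003: within the chosen margin
```

The "acceptable margins" from the story become an explicit number here: the team decides how large a remaining gap is tolerable and records that choice in the technical documentation.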

Question 4

What are our obligations as a provider?

SecureAccess was a provider of a high-risk AI system. What did that mean concretely? The team mapped the Article 16 obligations.

💡 The insight

As a provider, SecureAccess had to: implement a quality management system, maintain technical documentation, undergo conformity assessment, apply CE marking, and set up post-market monitoring. And their customers, the deployers, had to be instructed on correct use.

🌍 Why this matters

The AI Act distinguishes providers and deployers. Providers have the heavier obligations: they are responsible for the system itself. Deployers must use it correctly. But a deployer who substantially modifies the system can take on provider obligations themselves.

03

The Journey

Step by step to compliance

Step 1 of 6
📧

The complaint

An end user complained about repeated access denials. Investigation showed a pattern: the problem mainly affected people with darker skin.

Step 2 of 6
📊

Bias analysis

The team segmented performance data by demographic groups. The results were shocking: 8x higher false rejection rate for some groups.

Step 3 of 6
🌍

Data diversification

A project started to collect training data that better reflected the diversity of end users.

Step 4 of 6
🧠

Model redevelopment

The facial recognition model was retrained with the expanded dataset, focused on equalized odds across demographic groups.

Step 5 of 6
✅

Validation and testing

Extensive testing on bias, accuracy, and edge cases; results were documented for the technical file. A test-style sketch of such checks follows these steps.

Step 6 of 6
📢

Customer communication

All customers were informed about the model update and their obligations as deployers under the AI Act.
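
The validation in step 5 can be captured as automated checks that run against every new model version before release. Below is a test-style sketch; the thresholds and the load_eval_outcomes helper are hypothetical stand-ins for the team's own evaluation pipeline, not something the AI Act prescribes.

```python
# Sketch of pre-release bias validation as automated tests (pytest style).
# Thresholds and the evaluation loader are illustrative assumptions.

MAX_FRR = 0.02        # ceiling on the false rejection rate for any single group
MAX_FRR_GAP = 0.005   # allowed FRR spread between groups

def false_rejection_rate(genuine_outcomes: list[bool]) -> float:
    """Share of genuine attempts that were wrongly rejected."""
    return 1 - sum(genuine_outcomes) / len(genuine_outcomes)

def load_eval_outcomes() -> dict[str, list[bool]]:
    """Hypothetical stand-in: per-group outcomes of genuine attempts."""
    return {
        "group_a": [True] * 988 + [False] * 12,
        "group_b": [True] * 985 + [False] * 15,
    }

def test_every_group_stays_within_the_frr_ceiling():
    frrs = {g: false_rejection_rate(o) for g, o in load_eval_outcomes().items()}
    assert max(frrs.values()) <= MAX_FRR, frrs

def test_frr_gap_between_groups_stays_small():
    frrs = [false_rejection_rate(o) for o in load_eval_outcomes().values()]
    assert max(frrs) - min(frrs) <= MAX_FRR_GAP
```

Run with pytest, a failing check blocks the release; passing runs, with their numbers, go into the technical file as evidence.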

04

The Obstacles

What went wrong?

Obstacle 1

✗ Challenge

False rejection rates were 8x higher for some demographic groups

↓

✓ Solution

Retraining with a more diverse dataset and an equalized odds fairness metric

Obstacle 2

✗ Challenge

No systematic bias testing had ever been done

↓

✓ Solution

Implementation of continuous fairness monitoring as part of the QMS (a monitoring sketch follows this list)

Obstacle 3

✗ Challenge

Customers didn't understand their deployer obligations

↓

✓ Solution

Extensive documentation and training for all deployers
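
The continuous fairness monitoring from the second solution could, in its simplest form, look like the sketch below: a recurring job that compares recent per-group error rates in production against the validated baseline and raises an alert when the gap grows. The schedule, data source, and alert channel are left out, and all names and numbers are illustrative.

```python
# Sketch of continuous fairness monitoring for post-market surveillance.
# Compares recent per-group FRR against the validated baseline and flags drift.
from dataclasses import dataclass

@dataclass
class FairnessAlert:
    group: str
    recent_frr: float
    baseline_frr: float

def check_fairness_drift(
    recent_frr: dict[str, float],
    baseline_frr: dict[str, float],
    tolerance: float = 0.005,
) -> list[FairnessAlert]:
    """Flag every group whose recent FRR exceeds its baseline by more than tolerance."""
    return [
        FairnessAlert(group, rate, baseline_frr.get(group, 0.0))
        for group, rate in recent_frr.items()
        if rate - baseline_frr.get(group, 0.0) > tolerance
    ]

# Illustrative run: group_b has drifted well past its validated baseline.
for alert in check_fairness_drift(
    recent_frr={"group_a": 0.011, "group_b": 0.024},
    baseline_frr={"group_a": 0.010, "group_b": 0.012},
):
    print(f"ALERT {alert.group}: FRR {alert.recent_frr:.3f} vs baseline {alert.baseline_frr:.3f}")
```

Alerts like these feed back into the QMS: investigate, retrain if needed, and document what was found and done.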

"
We thought our system was objective. The opposite turned out to be true. The AI Act forced us to honestly look at who we were excluding.
– Emma van Leeuwen, VP Product, SecureAccess
05

The Lessons

What can we learn from this?

Lesson 1 / 4
🔐

Biometrics is inherently high-risk

You can't reset your body. Errors in biometric AI have lasting impact on people.

Lesson 2 / 4
📊

Bias isn't a bug, it's training data

If your model isn't trained on diverse data, it won't work for diverse people.

Lesson 3 / 4
🔥

Test before customers complain

Systematic bias testing could have prevented this problem. Now we had to put out fires.

Lesson 4 / 4
📋

Provider obligations are substantial

As a provider of high-risk AI, you carry the heaviest compliance burden. Plan capacity for this.

Does your company develop biometric AI?

Discover which AI Act requirements apply to facial recognition, fingerprints, and other biometrics.