💳 Finance

The Story of FinServe

How a financial services provider discovered their credit model was "high-risk AI"

Fictional scenario – based on realistic situations

01

The Trigger

How it started

📧

FinServe had been using an advanced AI model for credit assessment for years. It worked well, customers were satisfied, and default rates were low. But with the arrival of the AI Act, that comfortable picture changed.

Credit assessment with AI falls explicitly under Annex III of the AI Act: high-risk. That means strict requirements for transparency, explainability, and human oversight – for a system that was never built with those requirements in mind.

"Our credit model that we've been using for years? That now falls under the AI Act as high-risk?" The CFO stared in disbelief at the compliance report.
02

The Questions

What did they need to find out?

Question 1

Why is our credit AI "high-risk"?

The team went through the AI Act. Annex III explicitly mentions "AI systems intended to be used to evaluate the creditworthiness of natural persons" as high-risk. But did that also apply to business customers?

💡 The insight

The interpretation turned out to be nuanced. AI used to assess the creditworthiness of natural persons is explicitly listed as high-risk. For business customers it depends on the impact on individuals – for example, an entrepreneur who personally guarantees a loan. In practice, many organizations choose to be cautious and treat all credit AI as high-risk.

🌍 Why this matters

Financial supervisors such as the AFM and DNB are taking an increasingly critical look at AI in credit decisions, and the AI Act gives them additional tools. Organizations that address compliance proactively build trust.

Question 2

How do we explain what the model does?

CreditScore Pro was an ensemble of gradient boosting models: powerful, but not intuitively explainable. The AI Act requires the system to be transparent enough for its users to understand how it works, interpret its output, and know its limitations.

💡 The insight

The solution lay in a combination of global and local explainability: feature importance dashboards for analysts, SHAP values for individual decisions, and an "explain this score" function that showed the top three reasons behind each score in plain language (a sketch of such a function follows after this question).

🌍 Why this matters

Explainability is not just compliance; it is also business value. Account managers who can explain why a score is what it is have better conversations with customers. Many organizations discover that explainable AI improves their customer relationships.
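To make the "explain this score" idea concrete, here is a minimal sketch of how such a function could look, assuming a fitted scikit-learn gradient boosting model and the open-source shap library. The function name, feature names, and wording are illustrative assumptions, not FinServe's actual implementation.

```python
# Illustrative sketch only: assumes a fitted scikit-learn GradientBoostingClassifier
# and the `shap` package; names and wording are hypothetical.
import numpy as np
import shap

def top_three_reasons(model, applicant_features: np.ndarray, feature_names: list[str]) -> list[dict]:
    """Local explanation: the three features that moved this applicant's score the most."""
    explainer = shap.TreeExplainer(model)
    # SHAP values for a single applicant (one row of n_features contributions for a binary classifier).
    contributions = explainer.shap_values(applicant_features.reshape(1, -1))[0]
    top_idx = np.argsort(np.abs(contributions))[::-1][:3]
    return [
        {
            "feature": feature_names[i],
            "effect": "raised the score" if contributions[i] > 0 else "lowered the score",
            "contribution": round(float(contributions[i]), 3),
        }
        for i in top_idx
    ]
```

The same per-decision contributions can also feed a global feature importance dashboard, for example by averaging their absolute values over a whole portfolio.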

Question 3

Can we still make automated decisions?

One of the big concerns was efficiency. With hundreds of credit applications per month, full manual assessment was not feasible. But the AI Act asks for "meaningful human oversight".

💡 The insight

The answer was risk-based human oversight. Standard cases (clearly high or low scores) could go through the system automatically, with sample-based review. Borderline cases and high-value loans always got a human check, and every automated decision could be challenged. A sketch of such a routing rule follows after this question.

🌍 Why this matters

The AI Act does not prohibit AI from making decisions; it requires that humans can intervene and that affected persons can exercise their rights. A well-designed escalation process is often sufficient.
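As a rough illustration of risk-based escalation, the routing rule below sketches one way such a policy could look. The score thresholds, loan-amount cutoff, and sampling rate are invented for the example; they are policy choices, not values prescribed by the AI Act.

```python
# Illustrative routing rule: all thresholds below are hypothetical policy choices.
import random

def route_application(score: float, loan_amount: float,
                      decline_below: float = 0.30, approve_above: float = 0.80,
                      large_loan: float = 250_000.0, sample_rate: float = 0.05) -> str:
    """Decide whether a credit decision may be automated or needs a human check."""
    if loan_amount >= large_loan:
        return "human_review"                   # high-value loans always get a human check
    if decline_below <= score <= approve_above:
        return "human_review"                   # borderline scores are escalated
    decision = "auto_approve" if score > approve_above else "auto_decline"
    if random.random() < sample_rate:
        decision += "_flag_for_sample_review"   # sample-based quality review of automated decisions
    return decision
```

Every automated outcome would additionally reference the challenge procedure, so an applicant can always request a human reassessment.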

Question 4

What do we need to change in our process?

The team took stock. The model itself was only part of the story. Documentation, monitoring, incident response – everything needed examination.

💡 The insight

The biggest gaps were not in the model, but in governance. There was no formal change management for model updates, bias monitoring was ad hoc, and there was no clear escalation path for someone who wanted to challenge a decision. These process improvements turned out to take the most time. A sketch of a recurring bias check follows after this question.

🌍 Why this matters

Many organizations focus on their AI models, but the AI Act requires a whole compliance framework around them. From data governance to incident management, from training to audit trails. It's a system change, not a technical fix.
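The ad-hoc bias monitoring mentioned above can be turned into a scheduled check. The sketch below compares approval rates across groups and flags large gaps; the grouping column, metric, and 0.8 threshold are illustrative choices rather than AI Act requirements.

```python
# Illustrative recurring bias check; column names and the 0.8 threshold are assumptions.
import pandas as pd

def flag_approval_rate_gaps(decisions: pd.DataFrame,
                            group_col: str = "age_band",
                            outcome_col: str = "approved",
                            min_ratio: float = 0.8) -> pd.Series:
    """Flag groups whose approval rate falls well below the best-scoring group."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return ratios[ratios < min_ratio]   # non-empty result -> escalate to the model-risk owner
```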

03

The Journey

Step by step to compliance

Step 1 of 6
⚠️

The wake-up call

An external audit revealed that the credit model would qualify as high-risk under the AI Act. Management requested an impact analysis.

Step 2 of 6
🔍

Impact assessment

The team sat down with lawyers and data scientists. Which systems exactly fell under the AI Act? And which didn't?

Step 3 of 6
💡

Building explainability

The data science department was tasked with building in explainability. Feature importance, SHAP values, and a user-friendly interface.

Step 4 of 6
πŸ‘οΈ

Redesigning human oversight

The credit process was revised: which decisions could the AI make on its own, and which required human intervention?

Step 5 of 6
📋

Documentation in order

Technical documentation, risk assessment, data governance policies – everything had to be documented according to AI Act standards.

Step 6 of 6
🎓

Training for analysts

Credit analysts were trained in interpreting AI output and recognizing possible bias or errors.

04

The Obstacles

What went wrong?

Obstacle 1

✗ Challenge

The model was a black box: nobody could explain why it gave specific scores

↓

✓ Solution

Implementation of SHAP values and feature importance dashboards for explainability

Obstacle 2

✗ Challenge

Full manual review was not scalable with hundreds of applications per month

↓

✓ Solution

Risk-based escalation: only borderline cases and high-value cases get a human check

Obstacle 3

✗ Challenge

No formal change management for model updates

↓

✓ Solution

Implementation of model governance framework with versioning and audit trails
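As a small illustration of what versioning and audit trails can mean in practice, the record below sketches the kind of metadata a model governance framework might log for every model update. The class and field names are assumptions for the example, not a prescribed AI Act schema.

```python
# Illustrative audit-trail entry for a model update; field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelChangeRecord:
    model_name: str          # the credit scoring model being updated
    version: str             # version identifier of the retrained model
    change_summary: str      # what changed in data, features, or training
    validation_report: str   # link to the validation and bias-test evidence
    approved_by: str         # second-pair-of-eyes sign-off before deployment
    logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```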

"The AI Act forced us to think about explainability. Our account managers can now explain much better why a score is what it is. That improves the customer relationship."
– Jeroen van den Berg, Head of Credit Risk, FinServe
05

The Lessons

What can we learn from this?

Lesson 1 / 4
🏦

Credit AI is explicitly high-risk

The AI Act explicitly lists the creditworthiness assessment of natural persons as a high-risk use case, and many other financial AI applications also fall within the Act's scope.

Lesson 2 / 4
💡

Explainability is also business value

AI that can explain its own decisions improves not just compliance but also customer communication and internal decision-making.

Lesson 3 / 4
👁️

Human oversight doesn't mean full manual review

Meaningful human oversight doesn't mean every decision must be assessed by hand. Risk-based escalation is often sufficient.

Lesson 4 / 4
📋

Governance is more than the model

Most gaps are not in your model, but in governance: change management, bias monitoring, incident response.

Does your organization use AI for financial decisions?

Discover what the AI Act means for credit assessment, fraud detection, and other financial AI.