The Story of FinServe
How a financial services provider discovered their credit model was "high-risk AI"
Fictional scenario, based on realistic situations
The Trigger
How it started
FinServe had been using an advanced AI model for credit assessment for years. It worked well, customers were satisfied, and default rates were low. But when the AI Act was announced, the picture changed.
Credit assessment with AI falls explicitly under Annex III of the AI Act: high-risk. That means strict requirements for transparency, explainability, and human oversight, for a system that was never built with those requirements in mind.
"Our credit model that we've been using for years? That now falls under the AI Act as high-risk?" The CFO stared in disbelief at the compliance report.
The Questions
What did they need to find out?
Why is our credit AI "high-risk"?
The team went through the AI Act. Annex III explicitly mentions "AI systems intended to be used to evaluate the creditworthiness of natural persons" as high-risk. But did that also apply to business customers?
💡 The insight
The interpretation turned out to be nuanced. AI for credit assessment of natural persons is automatically high-risk. For business customers, it depends on the impact on individuals, such as an entrepreneur who personally guarantees the loan. In practice, many organizations chose to be cautious and treated all credit AI as high-risk.
📌 Why this matters
Financial regulators such as the AFM and DNB are scrutinizing AI in credit decisions ever more closely. The AI Act gives them additional tools. Organizations that address compliance proactively build trust.
How do we explain what the model does?
CreditScore Pro was an ensemble of gradient boosting models: powerful, but not intuitively explainable. The AI Act requires that users understand how the system works and what its limitations are.
💡 The insight
The solution lay in a combination of global and local explainability. Feature importance dashboards for analysts. SHAP values for individual decisions. And an "explain this score" function that showed the top-3 reasons behind each score in understandable language.
📌 Why this matters
Explainability is not just compliance; it's also business value. Account managers who can explain why a score is what it is have better conversations with customers. Many organizations discover that explainable AI improves their customer relationships.
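To make the "explain this score" idea concrete, here is a minimal sketch of how such a function could work with the open-source shap library. The stand-in model, feature names, and reason texts are invented for illustration; they are not FinServe's actual CreditScore Pro setup.

```python
# Minimal sketch of an "explain this score" function using SHAP values.
# The model, feature names, and reason texts below are illustrative only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["income", "debt_ratio", "payment_history", "loan_amount"]
REASONS = {
    "income": "the applicant's income level",
    "debt_ratio": "the ratio of existing debt to income",
    "payment_history": "past payment behaviour",
    "loan_amount": "the size of the requested loan",
}

# Stand-in for the production ensemble: one gradient boosting model on toy data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, len(FEATURES)))
y_train = (X_train[:, 1] - X_train[:, 0] > 0).astype(int)  # toy default label
model = GradientBoostingClassifier().fit(X_train, y_train)
explainer = shap.TreeExplainer(model)

def explain_score(applicant: np.ndarray, top_n: int = 3) -> list[str]:
    """Return the top-N drivers behind one applicant's score in plain language."""
    contributions = explainer.shap_values(applicant.reshape(1, -1))[0]
    ranked = np.argsort(-np.abs(contributions))[:top_n]
    return [
        f"{'Raised' if contributions[i] > 0 else 'Lowered'} the risk score: "
        f"{REASONS[FEATURES[i]]}"
        for i in ranked
    ]

print(explain_score(X_train[0]))
```

Aggregating the same SHAP values over many applications also yields the global feature importance view that the analysts' dashboards need, so one explainability pipeline can serve both audiences.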
Can we still make automated decisions?
One of the big concerns was efficiency. With hundreds of credit applications per month, full manual assessment was not feasible. But the AI Act asks for "meaningful human oversight".
💡 The insight
The answer was risk-based human oversight. Standard cases (clearly high or low scores) could go through the system, with sample-based review. Borderline cases and high-value loans always got a human check. And every automated decision could be challenged.
📌 Why this matters
The AI Act does not prohibit AI from making decisions; it requires that humans can intervene and that affected persons can exercise their rights. A well-designed escalation process is often sufficient.
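As a rough illustration of risk-based oversight, the sketch below routes each application based on its score and loan size. The thresholds, the loan-size cut-off, and the sampling rate are hypothetical placeholders, not values from the scenario.

```python
# Sketch of risk-based routing for credit decisions; all cut-offs are
# hypothetical placeholders that a real risk policy would define.
import random
from dataclasses import dataclass

AUTO_APPROVE, AUTO_REJECT = 0.85, 0.20  # hypothetical score thresholds
HIGH_VALUE_EUR = 250_000                # hypothetical loan-size cut-off
SAMPLE_RATE = 0.05                      # share of automated decisions re-checked

@dataclass
class Application:
    score: float       # model output: 0 = certain default, 1 = certain repayment
    amount_eur: int

def route(app: Application) -> str:
    """Decide whether a human must look at this application."""
    if app.amount_eur >= HIGH_VALUE_EUR:
        return "human_review"    # high-value loans always get a human check
    if AUTO_REJECT < app.score < AUTO_APPROVE:
        return "human_review"    # borderline scores escalate
    if random.random() < SAMPLE_RATE:
        return "sample_review"   # spot-check a sample of automated decisions
    return "automated"

print(route(Application(score=0.92, amount_eur=40_000)))  # usually "automated"
print(route(Application(score=0.55, amount_eur=40_000)))  # "human_review"
```

On top of this routing, every automated decision still needs a challenge path, so "automated" means decided without prior review, not beyond appeal.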
What do we need to change in our process?
The team took stock. The model itself was only part of the story. Documentation, monitoring, incident response: everything needed examination.
💡 The insight
The biggest gaps were not in the model, but in governance. There was no formal change management for model updates. Bias monitoring was ad hoc. And there was no clear escalation path for someone who wanted to challenge a decision. These process improvements turned out to take the most time.
📌 Why this matters
Many organizations focus on their AI models, but the AI Act requires a whole compliance framework around them. From data governance to incident management, from training to audit trails. It's a system change, not a technical fix.
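One of the gaps named above was ad hoc bias monitoring. Below is a minimal sketch of a scheduled check, assuming approval decisions are logged per group of a protected attribute: compute the approval-rate gap between groups and alert when it exceeds a tolerance. The group labels and the five-percentage-point tolerance are illustrative assumptions.

```python
# Sketch of a periodic bias check on logged credit decisions.
# The tolerance and group labels are illustrative assumptions.
from collections import defaultdict

TOLERANCE = 0.05  # hypothetical maximum allowed approval-rate gap

def approval_rate_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group_label, approved) pairs from one monitoring window."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += was_approved
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy window: group A approved 80% of the time, group B 70%.
window = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 70 + [("B", False)] * 30
gap = approval_rate_gap(window)
if gap > TOLERANCE:
    print(f"ALERT: approval-rate gap of {gap:.0%} exceeds tolerance")
```

A check like this only becomes governance once it runs on a schedule, its results are stored, and an alert has a defined owner and escalation path.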
The Journey
Step by step to compliance
The wake-up call
An external audit revealed that the credit model would qualify as high-risk under the AI Act. Management requested an impact analysis.
Impact assessment
The team sat down with lawyers and data scientists. Which systems exactly fell under the AI Act? And which didn't?
Building explainability
The data science department was tasked with building in explainability. Feature importance, SHAP values, and a user-friendly interface.
Redesigning human oversight
The credit process was revised: which decisions could the AI make, and which required human intervention?
Documentation in order
Technical file, risk assessment, data governance policies: everything had to be documented according to AI Act standards.
Training for analysts
Credit analysts were trained in interpreting AI output and recognizing possible bias or errors.
The Obstacles
What went wrong?
❌ Challenge
The model was a black box: nobody could explain why it gave specific scores
✅ Solution
Implementation of SHAP values and feature importance dashboards for explainability
❌ Challenge
Full manual review was not scalable with hundreds of applications per month
✅ Solution
Risk-based escalation: only borderline cases and high-value cases get a human check
❌ Challenge
No formal change management for model updates
✅ Solution
Implementation of model governance framework with versioning and audit trails
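As an illustration of that last solution, the sketch below records each model update as an append-only audit log entry with a version, an artifact hash, and an approver. The field names and JSON-lines storage are illustrative choices, not a format the AI Act prescribes.

```python
# Sketch of an append-only audit trail for model updates.
# Field names and storage format are illustrative choices.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("model_audit_log.jsonl")

def record_model_update(version: str, artifact: bytes,
                        approver: str, change_summary: str) -> dict:
    """Append one immutable entry per model release."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": version,
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "approved_by": approver,
        "change_summary": change_summary,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_model_update("2.4.1", b"<serialized model bytes>", "risk committee",
                    "Retrained on Q3 data; recalibrated score cut-offs")
```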
"The AI Act forced us to think about explainability. Our account managers can now explain much more clearly why a score is what it is. That improves the customer relationship."
The Lessons
What can we learn from this?
Credit AI is explicitly high-risk
The AI Act explicitly mentions creditworthiness assessment as a high-risk use case. This applies to many financial AI applications.
Explainability is also business value
AI that can explain its decisions improves not just compliance but also customer communication and internal decision-making.
Human oversight doesn't have to be manual
Meaningful human oversight doesn't mean every decision must be manual. Risk-based escalation is often sufficient.
Governance is more than the model
Most gaps are not in your model, but in governance: change management, bias monitoring, incident response.
Does your organization use AI for financial decisions?
Discover what the AI Act means for credit assessment, fraud detection, and other financial AI.