Responsible AI Platform
High-risk Sector

AI Act Compliance for Financial Services

Credit scoring, insurance and fraud detection — high-risk under the AI Act

Practical guidelines for banks, insurers and other financial institutions to comply with the EU AI Act.


Why Take Action Now?

The AI Act has a major impact on the financial sector

August 2025

First AI Act obligations take effect; the rules for high-risk Annex III systems follow in August 2026

Credit Scoring = High-risk

AI for creditworthiness assessment automatically falls under the strictest rules

Fines up to €35 million

Or 7% of global annual turnover, whichever is higher. Regulators will enforce.

Explainability Requirement

Customers have the right to an explanation of automated decisions

High-risk AI in Financial Services

These AI applications fall under strict AI Act requirements (Annex III)

Credit Scoring

AI systems that assess creditworthiness of natural persons — from mortgage acceptance to personal loans.

Credit scoring models, mortgage acceptance AI, credit limit determination, affordability checks

Insurance Premiums & Claims

Systems assessing risks for life and health insurance, or making claims decisions.

Life insurance pricing, health insurance risk selection, claim fraud detection, damage assessment AI

Anti-Money Laundering (AML)

Transaction monitoring and customer due diligence systems fall under both financial supervision and the AI Act.

Transaction monitoring, customer risk scoring, sanctions screening, PEP identification

Fraud Detection

Systems detecting fraudulent transactions or behavior — from payment fraud to identity fraud.

Real-time transaction monitoring, account takeover detection, application fraud screening, behavioral analytics

Specific Challenges for Financial Institutions

The AI Act brings unique compliance questions for the financial sector

Model Risk Management Integration

How does AI Act compliance fit into existing MRM frameworks, and where does it overlap with SR 11-7 and ECB guidance?

Explainability vs. Black Box

Many scoring models are complex. How do you meet explainability requirements without sacrificing performance?
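One common pattern for linear scorecards is per-feature "reason codes": each feature's contribution to the score relative to a baseline applicant, ranked so the strongest negative drivers can be reported to the customer. A minimal sketch, with purely illustrative feature names, weights and values (not from any real model):

```python
# Hypothetical sketch: deriving "reason codes" from a linear scorecard.
# Feature names, weights and values are illustrative, not a real model.

def reason_codes(weights, baseline, applicant, top_n=3):
    """Rank features by their contribution to the score vs. a baseline applicant."""
    contributions = {
        feature: weights[feature] * (applicant[feature] - baseline[feature])
        for feature in weights
    }
    # Most negative contributions = strongest reasons for a lower score
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]

weights = {"income": 0.4, "debt_ratio": -1.2, "missed_payments": -0.8}
baseline = {"income": 3.0, "debt_ratio": 0.3, "missed_payments": 0.0}
applicant = {"income": 2.5, "debt_ratio": 0.6, "missed_payments": 2.0}

for feature, contribution in reason_codes(weights, baseline, applicant):
    print(f"{feature}: {contribution:+.2f}")
```

For non-linear models the same idea is typically approximated with attribution methods (e.g. Shapley values), but the reporting pattern stays the same: a short, ranked list of reasons per decision.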

Bias in Credit Decisions

Non-discrimination is crucial. How do you test AI models for proxy discrimination and indirect bias?
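A simple starting point for such testing is comparing approval rates between groups. The sketch below uses the "four-fifths" disparate impact ratio, a rule of thumb from US employment practice rather than an AI Act threshold; the group labels and decisions are made-up illustrative data:

```python
# Hypothetical sketch: the "four-fifths" disparate impact check on approval rates.
# Group labels and decisions are made-up illustrative data.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_a, decisions_b):
    """Ratio of approval rates between group A and group B (1.0 = parity)."""
    return approval_rate(decisions_a) / approval_rate(decisions_b)

group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # 62.5% approved
group_b = [1, 1, 1, 0, 1, 1, 1, 1]  # 87.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the 0.8 rule of thumb: investigate for indirect bias")
```

A low ratio does not prove discrimination, and parity does not disprove proxy effects; such checks are a trigger for deeper analysis, not a verdict.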

Legacy Systems

Much production AI is years old. How do you bring existing models into line with the new requirements?

Third-party AI Vendors

What should you require from external vendors such as credit bureaus, fraud vendors and AML providers?

Regulator Expectations

How do regulators interpret the AI Act, and what do they expect from financial institutions?

AI Act Compliance Roadmap

Practical steps for financial institutions

1. AI Inventory (2-4 weeks)

Map all AI systems. Which models make decisions about customers?
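The inventory step boils down to one record per system with an owner and a flag for customer-impacting decisions. A minimal sketch of such a register; all field names and example entries are assumptions, not a prescribed schema:

```python
# Illustrative sketch of a minimal AI inventory record; fields are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    owner: str
    business_line: str
    decides_about_customers: bool
    use_case: str
    vendor: Optional[str] = None  # None for in-house models

inventory = [
    AISystemRecord("credit-score-v3", "Retail Risk", "Lending",
                   decides_about_customers=True, use_case="Creditworthiness"),
    AISystemRecord("doc-ocr", "Operations", "Back office",
                   decides_about_customers=False, use_case="Document processing"),
]

# Systems deciding about customers are the first candidates for high-risk review
candidates = [s.name for s in inventory if s.decides_about_customers]
print(candidates)
```

Even a spreadsheet with these columns is enough to start; the point is that every system has an owner and a recorded answer to "does it decide about customers?".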

2. Risk Classification (1-2 weeks)

Determine for each system whether it is high-risk, limited-risk or minimal-risk.

3. Gap Analysis (3-6 weeks)

Compare current documentation and processes with AI Act requirements.

4. Remediation (3-12 months)

Implement technical documentation, bias testing and human oversight.

5. Ongoing Monitoring (continuous)

Set up processes for continuous monitoring and periodic review.
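A typical ingredient of such monitoring is a drift check on the score distribution, for example the population stability index (PSI). A minimal sketch; the bin shares and the 0.25 threshold are illustrative conventions, and institutions set their own thresholds:

```python
# Illustrative sketch of a population stability index (PSI) drift check.
# Bin shares and the 0.25 threshold are illustrative, not prescribed values.
import math

def psi(expected_shares, actual_shares):
    """PSI between the validation-time and current score distribution (per bin)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_shares, actual_shares))

expected = [0.25, 0.25, 0.25, 0.25]   # score distribution at validation
actual   = [0.30, 0.30, 0.20, 0.20]   # score distribution in production

value = psi(expected, actual)
print(f"PSI: {value:.3f}")
if value > 0.25:  # common rule of thumb; thresholds vary per institution
    print("Significant drift: trigger model review")
```

Running this per model on a fixed schedule, with results logged, also feeds the Art. 12 logging and traceability requirement.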

15-month trajectory

Implementation Roadmap

Detailed 6-phase timeline with concrete deliverables

1. Inventory (months 1-2): complete AI system register, owners per system, use case documentation

2. Classification (months 2-3): high-risk vs. limited/minimal risk per system, justification per classification

3. Gap Analysis (months 3-4): per high-risk system, the gap between current state and AI Act requirements

4. Governance Framework (months 4-6): AI governance structure, roles & responsibilities, policies & procedures

5. Implementation (months 6-12): technical adjustments, documentation, conduct FRIAs, set up monitoring

6. Audit-ready (months 12-15): internal audit, dry run for the regulator, continuous monitoring

AI System Inventory Guide

Typical AI systems in financial services and their likely classification

Important: many systems do NOT become high-risk if they are purely supportive, with a human decision-maker. Classify carefully to avoid unnecessary compliance costs.

Credit & Lending

Usually high-risk

Credit scoring, mortgage acceptance, affordability checks, credit limit engines

Annex III, category 5b — automatically high-risk for creditworthiness assessment

Insurance

Often high-risk

Premium calculation, risk selection, claims processing, claims fraud detection

High-risk for life and health insurance (Annex III, point 5(c))

AML/KYC

Context-dependent

Transaction monitoring, customer due diligence, sanctions screening, PEP detection

Can be high-risk if it makes autonomous decisions about individuals

Trading & Markets

Usually limited/minimal

Algorithmic trading, market surveillance, risk analytics

No direct impact on natural persons — but note MiFID II overlap

Customer Interaction

Limited risk

Chatbots, next-best-action, churn prediction, personalization

Transparency obligations (Art. 50) — customer must know it is AI

Operations

Usually minimal risk

Document processing (OCR/NLP), process automation, workforce planning

Minimal risk unless it makes decisions affecting individuals

Classification Decision Tree

Quickly determine the risk classification of your AI system

Does the system fall under Annex III category 5b (creditworthiness)?

Yes

Automatically high-risk

No

Go to next question

Does the system make autonomous decisions about natural persons?

Yes

Likely high-risk

No

Go to next question

Is it supportive with human override?

Yes

Possibly limited risk

No

Go to next question

Is it purely internal analytics without impact on individuals?

Yes

Minimal risk

No

Consult an expert for classification

This is a simplified decision tree. Consult your legal team for the definitive classification.
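The decision tree above can be written down directly as a small function, useful for triaging an inventory in bulk. A sketch of the simplified tree only; it is no substitute for legal review:

```python
# Direct sketch of the simplified decision tree above; not legal advice.

def classify(annex_iii_5b: bool, autonomous_decisions: bool,
             supportive_with_override: bool, internal_only: bool) -> str:
    """Apply the simplified questions in order and return an indicative label."""
    if annex_iii_5b:
        return "high-risk (automatic)"
    if autonomous_decisions:
        return "likely high-risk"
    if supportive_with_override:
        return "possibly limited risk"
    if internal_only:
        return "minimal risk"
    return "consult an expert"

# A creditworthiness model is automatically high-risk:
print(classify(True, False, False, False))
# A supportive tool with human override may be limited risk:
print(classify(False, False, True, False))
```

Each answer should still be recorded with a written justification per system, as the classification phase of the roadmap requires.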

Governance Structure

Recommended organizational structure for AI governance in financial institutions

Board of Directors / Managing Board
AI Governance Committee (cross-functional)
AI Officers per business line
Responsible AI Team
Model Validation (2nd line)
Internal Audit (3rd line)

Build on existing Model Risk Management (MRM) — don't reinvent the wheel.

Key Roles

AI System Owner

Responsible for the compliance and performance of a specific AI system

AI Compliance Officer

Overall monitoring of AI Act compliance across the organization

Human Oversight Officer

Oversight for high-risk systems — required by Art. 14

Data Governance Lead

Ensures data quality and data governance — required by Art. 10

Compliance Checklist for High-risk Financial AI

Concrete checkpoints for each high-risk AI system

AI system registered in the EU database (Art. 49)
Risk management system established (Art. 9)
Data governance & data quality ensured (Art. 10)
Technical documentation complete (Art. 11)
Logging & traceability in place (Art. 12)
Transparency to users (Art. 13)
Human oversight established (Art. 14)
Accuracy & robustness tested (Art. 15)
FRIA conducted as deployer (Art. 27)
Conformity assessment completed (Art. 43)

This checklist applies per high-risk system. Consult your legal team for organization-specific requirements.
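Since the checklist applies per system, tracking it programmatically scales better than a document per model. A minimal sketch; the status dictionary is a made-up example for one hypothetical system:

```python
# Illustrative sketch: tracking the checklist status per high-risk system.
CHECKLIST = {
    "Art. 49": "AI system registered in EU database",
    "Art. 9":  "Risk management system established",
    "Art. 10": "Data governance & data quality ensured",
    "Art. 11": "Technical documentation complete",
    "Art. 12": "Logging & traceability in place",
    "Art. 13": "Transparency to users",
    "Art. 14": "Human oversight established",
    "Art. 15": "Accuracy & robustness tested",
    "Art. 27": "FRIA conducted as deployer",
    "Art. 43": "Conformity assessment completed",
}

def open_items(status: dict) -> list:
    """Return checklist articles not yet marked done for a system."""
    return [article for article in CHECKLIST if not status.get(article, False)]

# Made-up status for one hypothetical system
status = {"Art. 49": True, "Art. 9": True, "Art. 10": True}
print(open_items(status))
```

Aggregating `open_items` across the inventory gives a simple remediation dashboard per business line.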

Common Mistakes to Avoid

Avoid these pitfalls in AI Act implementation

Treating everything as high-risk

Costs millions unnecessarily. Many systems are limited risk — classify carefully.

AI Act separate from existing frameworks

Integrate with MRM, DORA and GDPR instead of building a separate compliance silo.

Only involving IT

AI Act compliance is cross-functional: legal, business, risk and IT must collaborate.

Waiting for definitive guidance

The law is here. Start inventorying — waiting increases risk.

Assuming vendor compliance

As a deployer you remain responsible yourself. Verify what vendors claim.

Skipping the FRIA

Mandatory for deployers of high-risk systems (Art. 27). No FRIA = non-compliant.

What Makes Financial AI Different?

Sector-specific considerations

Dual Regulation

Financial AI falls under both the AI Act and financial supervision

Higher Documentation Requirements

MRM frameworks already require extensive model documentation; the AI Act adds to this

Consumer Protection Focus

Explainability and complaint rights are extra important for financial decisions

Sanction Exposure

Beyond AI Act fines, there is also supervisory enforcement and reputational risk

Financial regulation

Regulatory Overlap

How the AI Act connects with existing financial regulation

DORA

Overlap: ICT risk, incident reporting

Practical tip: Combine AI Act monitoring with DORA ICT risk framework

MiFID II

Overlap: Algorithmic trading rules

Practical tip: AI Act adds transparency requirements on top of MiFID II

GDPR

Overlap: DPIA, automated decision-making (Art. 22)

Practical tip: FRIA can partially overlap with DPIA — combine where possible

Solvency II

Overlap: Model governance for insurers

Practical tip: AI Act model documentation aligns with Solvency II model validation

DNB Good Practice

Overlap: AI risk management

Practical tip: DNB expects proactive AI governance, not just reactive