Why Take Action Now?
The AI Act has a major impact on the financial sector
August 2025
AI Act obligations are phasing in; requirements for high-risk AI systems apply from August 2026
Credit Scoring = High-risk
AI for creditworthiness assessment automatically falls under the strictest rules
Fines up to €35 million
Or 7% of global annual turnover — regulators will enforce
Explainability Requirement
Customers have the right to an explanation of automated decisions
High-risk AI in Financial Services
These AI applications fall under strict AI Act requirements (Annex III)
Credit Scoring
AI systems that assess creditworthiness of natural persons — from mortgage acceptance to personal loans.
Insurance Premiums & Claims
Systems assessing risks for life and health insurance, or making claims decisions.
Anti-Money Laundering (AML)
Transaction monitoring and customer due diligence systems fall under financial supervision and AI Act.
Fraud Detection
Systems detecting fraudulent transactions or behavior — from payment fraud to identity fraud.
Specific Challenges for Financial Institutions
The AI Act brings unique compliance questions for the financial sector
Model Risk Management Integration
How does AI Act compliance fit into existing MRM frameworks? What are overlaps with SR 11-7 and ECB guidance?
Explainability vs. Black Box
Many scoring models are complex. How to meet explainability requirements without sacrificing performance?
Bias in Credit Decisions
Non-discrimination is crucial. How to test AI models for proxy discrimination and indirect bias?
Legacy Systems
Much production AI is years old. How to bring existing models in line with new requirements?
Third-party AI Vendors
What should you require from external vendors such as credit bureaus, fraud vendors and AML providers?
Regulator Expectations
How do regulators interpret the AI Act? What are expectations for financial institutions?
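The bias-testing challenge above can be made concrete with a screening metric. The sketch below computes a disparate impact ratio on approval rates per group; the 4/5 (80%) threshold is a common screening heuristic from US employment practice, not a legal test under the AI Act, and the data here is purely illustrative.

```python
# Sketch: disparate impact check on credit approval rates,
# assuming you have model decisions plus a (protected) group label per case.

def disparate_impact_ratio(decisions, groups, favorable=1):
    """Return (min/max ratio of favorable-outcome rates, per-group rates)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == favorable) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Illustrative example: approvals (1) vs rejections (0) for two groups
decisions = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5
ratio, rates = disparate_impact_ratio(decisions, groups)
# Group A approval rate 0.8, group B 0.4 -> ratio 0.5, well below 0.8
```

A ratio this low would be a signal to investigate proxy features (postcode, employer type) that may correlate with protected attributes, not proof of discrimination by itself.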
AI Act Compliance Roadmap
Practical steps for financial institutions
AI Inventory
2-4 weeks: Map all AI systems. Which models make decisions about customers?
Risk Classification
1-2 weeks: Determine per system whether it is high-risk, limited risk or minimal risk.
Gap Analysis
3-6 weeks: Compare current documentation and processes with AI Act requirements.
Remediation
3-12 months: Implement technical documentation, bias testing and human oversight.
Ongoing Monitoring
Ongoing: Set up processes for continuous monitoring and periodic review.
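The inventory step above comes down to recording a few key facts per system. A minimal sketch of such a record, assuming fields a typical organization would track (the field names are illustrative, not prescribed by the AI Act):

```python
# Sketch of a minimal AI inventory record; field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    decides_about_customers: bool     # the key inventory question
    vendor: Optional[str] = None      # None for in-house models
    risk_class: str = "unclassified"  # filled in during the classification step

inventory = [
    AISystemRecord("mortgage-scorecard", "creditworthiness assessment", True,
                   risk_class="high-risk"),
    AISystemRecord("churn-dashboard", "internal analytics", False,
                   risk_class="minimal"),
]

# Systems that move on to gap analysis and remediation
high_risk = [r.name for r in inventory if r.risk_class == "high-risk"]
```

Keeping the inventory in a structured form like this makes the later gap analysis and ongoing monitoring steps queryable rather than a one-off spreadsheet exercise.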
Implementation Roadmap
Detailed 6-phase timeline with concrete deliverables
Phase 1: Inventory (Month 1-2)
Phase 2: Classification (Month 2-3)
Phase 3: Gap Analysis (Month 3-4)
Phase 4: Governance Framework (Month 4-6)
Phase 5: Implementation (Month 6-12)
Phase 6: Audit-ready (Month 12-15)
AI System Inventory Guide
Typical AI systems in financial services and their likely classification
Important: Many systems are NOT high-risk if they merely support a human decision-maker. Classify carefully to avoid unnecessary compliance costs.
Credit & Lending
Usually high-risk: Annex III, category 5(b) — automatically high-risk for creditworthiness assessment
Insurance
Often high-risk: high-risk for risk assessment and pricing in life and health insurance (Annex III, cat. 5(c))
AML/KYC
Context-dependent: can be high-risk if it makes autonomous decisions about individuals
Trading & Markets
Usually limited/minimal: no direct impact on natural persons — but note the MiFID II overlap
Customer Interaction
Limited risk: transparency obligations (Art. 50) — the customer must know it is AI
Operations
Usually minimal risk: unless the system makes decisions affecting individuals
Classification Decision Tree
Quickly determine the risk classification of your AI system
Does the system fall under Annex III category 5b (creditworthiness)?
Automatically high-risk
Go to next question
Does the system make autonomous decisions about natural persons?
Likely high-risk
Go to next question
Is it supportive with human override?
Possibly limited risk
Go to next question
Is it purely internal analytics without impact on individuals?
Minimal risk
Consult an expert for classification
This is a simplified decision tree. Consult your legal team for the definitive classification.
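The simplified tree above can be sketched as a plain function. The inputs and labels below are illustrative shorthand for the four questions; the real classification requires legal analysis against Article 6 and Annex III.

```python
# Minimal sketch of the simplified classification decision tree.
# Boolean inputs mirror the four questions; labels are illustrative.

def classify(annex_iii_5b: bool,
             autonomous_decisions: bool,
             supportive_with_override: bool,
             internal_analytics_only: bool) -> str:
    if annex_iii_5b:
        return "high-risk"              # creditworthiness: automatically high-risk
    if autonomous_decisions:
        return "likely high-risk"       # autonomous decisions about natural persons
    if supportive_with_override:
        return "possibly limited risk"  # a human decision-maker stays in control
    if internal_analytics_only:
        return "minimal risk"           # no impact on individuals
    return "consult an expert"          # no branch matched: seek legal advice

# Example: a mortgage scoring model falls at the first question
classify(True, False, False, False)   # -> "high-risk"
```

Note the order matters: an Annex III 5(b) system is high-risk even if a human formally signs off, which is why that question comes first.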
Governance Structure
Recommended organizational structure for AI governance in financial institutions
Build on existing Model Risk Management (MRM) — don't reinvent the wheel.
Key Roles
AI System Owner
Responsible per AI system for compliance and performance
AI Compliance Officer
Overall monitoring of AI Act compliance across the organization
Human Oversight Officer
Oversight for high-risk systems — required by Art. 14
Data Governance Lead
Ensures data quality and data governance — required by Art. 10
Compliance Checklist for High-risk Financial AI
Concrete checkpoints for each high-risk AI system
This checklist applies per high-risk system. Consult your legal team for organization-specific requirements.
Common Mistakes to Avoid
Avoid these pitfalls in AI Act implementation
Treating everything as high-risk
Costs millions unnecessarily. Many systems are limited risk — classify carefully.
Treating the AI Act as separate from existing frameworks
Integrate with MRM, DORA and GDPR instead of building a separate compliance silo.
Only involving IT
AI Act compliance is cross-functional: legal, business, risk and IT must collaborate.
Waiting for definitive guidance
The law is here. Start inventorying — waiting increases risk.
Assuming vendor compliance
As a deployer, you remain responsible yourself. Verify what vendors claim.
Skipping the FRIA
Mandatory for, among others, deployers of high-risk credit scoring and insurance systems (Art. 27). No FRIA = non-compliant.
What Makes Financial AI Different?
Sector-specific considerations
Dual Regulated
Financial AI falls under both the AI Act and financial supervision
Higher Documentation Requirements
MRM frameworks already require extensive model documentation — the AI Act adds to this
Consumer Protection Focus
Explainability and complaint rights are extra important for financial decisions
Sanction Exposure
Beyond AI Act fines, there is also regulator enforcement and reputational risk
Regulatory Overlap
How the AI Act connects with existing financial regulation
DORA
Overlap: ICT risk, incident reporting
Practical tip: Combine AI Act monitoring with DORA ICT risk framework
MiFID II
Overlap: Algorithmic trading rules
Practical tip: AI Act adds transparency requirements on top of MiFID II
GDPR
Overlap: DPIA, automated decision-making (Art. 22)
Practical tip: FRIA can partially overlap with DPIA — combine where possible
Solvency II
Overlap: Model governance for insurers
Practical tip: AI Act model documentation aligns with Solvency II model validation
DNB Good Practice
Overlap: AI risk management
Practical tip: DNB expects proactive AI governance, not just reactive
Related Articles
Deepen your knowledge of AI Act compliance in the financial sector
FRIA: Complete Guide to Article 27 AI Act
Everything about the mandatory fundamental rights impact assessment for high-risk AI systems.
AI Governance Financial Sector 2026: What Banks Need to Know
Practical guide for banks and financial institutions on AI Act governance.
From Scoring Algorithm to Transparent Credit Decision
How AI credit scoring complies with EU AI Act transparency requirements.
Ready to Start AI Act Compliance?
Practical tools and guidance for financial institutions