
EBA on the AI Act in Financial Services: An Additional Layer, Not a New Universe


Important Development: On November 21, 2025, the European Banking Authority (EBA) sent a letter to the European Commission with the results of its AI Act mapping exercise. This analysis shows how AI Act obligations relate to existing regulation for banks and payment institutions, specifically for credit scoring and creditworthiness assessment.

In this letter, the EBA sets out how the obligations under the AI Act relate to existing banking and payments law, specifically for AI systems used for the creditworthiness assessment and credit scoring of natural persons.

For anyone working on AI governance in the financial sector, this document is important. It shows that the AI Act does not stand alongside existing rules, but rather sits on top of and between them. It is not an entirely new compliance universe, but rather an additional layer on frameworks that have been in place for years, such as CRR/CRD, DORA, PSD2, CCD2, MCD, and the EBA Guidelines.

In this blog, we walk through the core of the EBA letter and translate it into implications for banks, payment institutions, and their AI and compliance teams.

Why is the EBA specifically looking at credit scoring?

The EBA's starting point is the classification of AI systems for creditworthiness assessment and credit scoring as high-risk under the AI Act. Annex III, point 5(b), explicitly places these systems in the high-risk category. This makes sense: credit decisions directly affect financial inclusion, carry risks of discrimination and over-indebtedness, and touch on the right to a dignified standard of living.

In practice, we're talking about AI applications such as:

  • models that automatically calculate a credit score for consumers
  • AI-driven decision-making on the acceptance or rejection of credit applications
  • dynamic limit setting based on behavioral data and payment history

It is precisely these systems that fall under both the AI Act and existing sectoral rules, such as CRR/CRD for prudent lending, CCD2 and MCD for consumer and mortgage credit, and the EBA Guidelines on Loan Origination and Monitoring (LOM). The EBA's central question: where do these regimes overlap, where do they complement each other, and where is there a risk of duplication?

The mapping exercise: AI Act alongside CRR/CRD, DORA and co.

In January 2025, the EBA established a dedicated workstream to systematically map the AI Act against relevant EU sectoral frameworks in the banking and payments domain. The focus was on:

  • identifying areas where the AI Act explicitly provides for synergy or derogation
  • identifying areas where there is no derogation, but substantive overlap exists with existing rules

Core message from the EBA

There is no fundamental conflict between the AI Act and existing financial supervision law. The AI Act will mainly need to be "woven into" existing governance, risk, and IT frameworks, rather than institutions having to build an entirely parallel system. This is confirmed in various analyses by market parties, including Linklaters.

The EBA emphasizes that while the AI Act provides targeted derogations and synergies for some requirements (such as for quality management and technical documentation), there are no explicit derogations for other requirements (such as human oversight, data governance, and cybersecurity), even though EU financial law already contains extensive regulation in these areas.

Where the AI Act explicitly takes sectoral rules into account

The AI Act itself provides for synergy or partial derogation at various points, particularly for high-risk systems in regulated sectors. The EBA elaborates this in the Annex for a series of AI Act obligations, including:

Quality management and risk management

The obligations for a quality management system (Article 17 AI Act) and a risk management system (Article 9) closely align with existing prudential frameworks. Think of:

  • CRR/CRD provisions on internal models, risk management, and governance (Articles 174, 175, 176, 185 CRR and Article 74 CRD)
  • EBA Guidelines on Internal Governance on internal control framework, regulatory compliance assessment, and internal audit
  • EBA Guidelines on Loan Origination and Monitoring on credit-granting monitoring, credit risk policies, and automated CWA models
  • DORA obligations for ICT risk management and business continuity (Articles 5 and 6 DORA)

Practical consequence: Banks do not need to design a new quality management framework for their AI credit scoring from scratch. They must expand their existing model governance, credit risk processes, and DORA ICT frameworks and explicitly anchor AI requirements (such as data governance, model monitoring, and explainability) within them.

The EBA specifically points to the relevance of existing requirements such as:

  • CRR Article 174 on the use of models and validation
  • CRR Article 175 on documentation of rating systems
  • EBA IG Guidelines paragraph 141 on internal control function responsibilities
  • DORA Article 5 on governance and organization
  • CCD2 Article 18(3) on data relevance and accuracy
  • EBA LOM Guidelines paragraph 38 on credit-granting monitoring

Technical documentation and record-keeping

For technical documentation (Article 18) and logging/record-keeping (Article 19), the EBA shows how much ground the CRR, the IRB RTS, and the EBA guidelines already cover in terms of:

  • documentation of model design, assumptions, and validation
  • traceability of ratings, overrides, and changes
  • data quality and data provision for credit risk models

| AI Act Requirement | Existing Sectoral Regulation | Key Articles |
| --- | --- | --- |
| Technical documentation (Art. 18) | CRR documentation requirements for rating systems | CRR Art. 144, 175, 452(f) |
| Record-keeping (Art. 19) | CRR data collection & storage obligations | CRR Art. 174, 176 |
| Post-market monitoring (Art. 26, 72) | CRR/CRD model validation & monitoring | CRR Art. 174(d), 185, 190(2) |

Institutions that have been operating under the IRB regime for years therefore have a solid basis. The challenge becomes explicitly demonstrating that this existing documentation also covers AI Act requirements, including elements such as dataset bias, representativeness, and robustness of AI models.
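
One way to make that demonstration concrete is a simple coverage matrix that maps each documentation element to the existing IRB evidence and surfaces what is still missing. A minimal sketch, in which the element names and evidence references are illustrative assumptions rather than an official taxonomy:

```python
# Minimal sketch: map AI Act documentation elements to existing IRB evidence.
# Element names and references are illustrative, not an official taxonomy.
coverage_matrix = {
    "model design and assumptions":       ["CRR Art. 144", "CRR Art. 175"],
    "validation methodology and results": ["CRR Art. 174", "CRR Art. 185"],
    "training data description":          ["CRR Art. 174 data vetting"],
    "dataset bias analysis":              [],  # typically new under the AI Act
    "data representativeness evidence":   [],  # typically new under the AI Act
    "robustness testing":                 [],  # typically new under the AI Act
}

# Anything without existing evidence is a documentation gap to close.
gaps = [element for element, evidence in coverage_matrix.items() if not evidence]
print("Documentation gaps for AI Act purposes:", gaps)
```

In practice such a matrix would live in the model inventory or GRC tooling rather than in a script, but the structure is the same: per element, the existing evidence, and an explicit flag where AI-specific additions are needed.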

The EBA points to concrete CRR articles in its mapping:

  • Article 144: documentation of rating system and model design rationale
  • Article 169: documentation of rationale for assigning obligors
  • Article 174: documentation of model input data vetting process, model specification and testing
  • Article 175: documentation of design and operational details of rating systems, including all major changes

Incident reporting and post-market monitoring

The AI Act requires post-market monitoring and incident reporting for high-risk systems (Articles 26, 72, 73). The EBA links this to:

  • DORA: reporting obligation for major ICT incidents and requirements for incident response (Articles 17, 18, 19 and CDR incident classification)
  • CRR/CRD: continuous monitoring of model performance and credit risk (Articles 174, 185, 190 CRR and Article 74 CRD)
  • EBA LOM Guidelines: monitoring of credit quality and model performance (paragraphs 34, 35, 38, 42, 53, 55, 60)

Practical translation: The processes for ICT incidents, model validation, and credit monitoring already exist. They only need to be made explicitly AI-specific, for example by defining AI incidents (such as systematic bias or incorrect scoring) as a separate category in incident management.
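
What defining AI incidents as a separate category could look like in practice: the sketch below extends a DORA-style incident taxonomy with AI-specific categories. The category names and the routing rule are illustrative assumptions, not prescribed classifications.

```python
from dataclasses import dataclass
from enum import Enum

class IncidentCategory(Enum):
    # Existing DORA-style ICT categories (illustrative subset)
    ICT_AVAILABILITY = "ict_availability"
    ICT_SECURITY = "ict_security"
    # AI-specific additions (assumed naming)
    AI_SYSTEMATIC_BIAS = "ai_systematic_bias"
    AI_INCORRECT_SCORING = "ai_incorrect_scoring"
    AI_MODEL_DRIFT = "ai_model_drift"

@dataclass
class Incident:
    category: IncidentCategory
    description: str
    affected_consumers: int

def feeds_ai_act_process(incident: Incident) -> bool:
    """Illustrative routing rule: AI-specific incidents are assessed for
    AI Act serious-incident reporting alongside the DORA process."""
    return incident.category.name.startswith("AI_")

incident = Incident(
    IncidentCategory.AI_SYSTEMATIC_BIAS,
    "Approval rates diverge structurally across customer segments",
    affected_consumers=1200,
)
print(feeds_ai_act_process(incident))  # True
```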

Consumer right to explanation

Article 86 AI Act introduces a right to explanation for consumers regarding certain AI decisions. The EBA shows that CCD2 already contains obligations for credit providers:

  • CCD2 Article 18(8): consumers have the right to request and obtain an explanation of the creditworthiness assessment, including the logic and risks
  • CCD2 Article 18(9): obligation to inform consumers about rejection and automated data processing

Where AI Act requirements must be integrated with sectoral regulation

For a second category of AI Act requirements, the regulation explicitly provides for integration or combination with existing sectoral requirements:

Risk management system (Article 9)

The EBA identifies extensive overlap between the AI Act risk management system and existing risk management frameworks:

Existing risk management requirements relevant for AI Act

CRD Articles 74(1) and 76: Processes to identify, manage, monitor, and report all material risks. The risk management function must have adequate resources for managing all material risks.

CRR Articles 144, 169, 174, 179, 189-191: Extensive requirements for meaningful obligor assessment, validation of rating systems, periodic review of rating criteria, PD assessment techniques, and CRCU responsibilities.

DORA Articles 6 and 8: ICT risk management framework with strategies, policies, procedures, ICT protocols, and tools. Continuous identification of ICT risk sources and assessment of cyber threats.

Specifically for credit scoring, the EBA points to:

  • CDR assessment methodology Article 16(3): CRCU must have sufficient resources and experienced personnel
  • EBA IG Guidelines paragraphs 152-196: Risk management framework must cover all relevant risks, including analysis of trends in new or emerging risks
  • EBA LOM Guidelines paragraphs 34-60: Criteria for identification, assessment, approval, monitoring, reporting, and mitigation of credit risk

Fundamental Rights Impact Assessment (FRIA)

Article 27 AI Act requires certain deployers to conduct a Fundamental Rights Impact Assessment for high-risk systems, including AI for creditworthiness. The EBA points to the link with:

  • CCD2 Article 6: non-discrimination of consumers
  • GDPR: existing obligations for Data Protection Impact Assessments (DPIA)

Integrated assessments

For banks, the challenge becomes not treating DPIAs, FRIAs, and existing risk assessments as separate silos, but establishing an integrated assessment process in which both financial and fundamental rights risks are assessed. This is also emphasized in broader analyses of the impact of AI on the European financial sector.
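
A minimal sketch of such an integrated assessment: one record that feeds the DPIA (GDPR), the FRIA (Article 27 AI Act), and the prudential model risk review, so that overlapping questions are answered once. The field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class IntegratedAssessment:
    """One record feeding DPIA (GDPR), FRIA (AI Act Art. 27) and
    prudential model risk review. Field names are illustrative."""
    system_name: str
    intended_purpose: str
    personal_data_categories: list[str] = field(default_factory=list)  # DPIA
    affected_groups: list[str] = field(default_factory=list)           # FRIA
    discrimination_risks: list[str] = field(default_factory=list)      # FRIA / CCD2
    prudential_model_risks: list[str] = field(default_factory=list)    # CRR/CRD
    mitigations: list[str] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Sections that still need input before sign-off."""
        sections = {
            "DPIA: personal data categories": self.personal_data_categories,
            "FRIA: affected groups": self.affected_groups,
            "FRIA: discrimination risks": self.discrimination_risks,
            "Prudential: model risks": self.prudential_model_risks,
        }
        return [name for name, content in sections.items() if not content]
```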

Where there is no explicit synergy in the AI Act, but overlap exists

Particularly interesting are the areas where the AI Act itself does not mention specific derogation or synergy, but where the EBA still sees a clear link with existing regulation. The EBA explicitly emphasizes in its letter that the AI Act does not provide targeted derogations or regulatory synergies for these requirements, but that EU financial services law nevertheless already contains a wide range of relevant requirements.

Human oversight (Article 14)

Article 14 AI Act places strong emphasis on human control and the ability to overrule AI outcomes. The EBA links this to:

  • CRR Article 149(1): conditions for stopping the use of IRB models
  • CRR Article 172(3): model input/output override and personnel responsible for approving overrides
  • CRR Article 174: human judgment and human oversight to review model-based assignments
  • EBA Guidelines on Internal Governance paragraphs 26, 31, 32: oversight of risk management and internal controls, business line responsibilities
  • CDR assessment methodology Articles 24(2) and 39: situations where human judgment is used to override inputs/outputs of rating systems

Practical consequence for credit scoring: A bank cannot rely solely on a fully automated acceptance chain without clear procedures for human review, escalation, and documentation of overrides. These procedures should already be in the model governance framework but must be explicitly aligned with the AI Act.
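
As an illustration of what "clear procedures for human review" can mean operationally, here is a minimal sketch of a review gate that routes automated credit decisions to an analyst under defined conditions. The criteria and threshold are illustrative assumptions; an institution would derive them from its own credit policy:

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    application_id: str
    approved: bool
    score_confidence: float  # model's confidence in its own output

def needs_human_review(decision: CreditDecision,
                       review_all_rejections: bool = True,
                       confidence_floor: float = 0.8) -> bool:
    """Illustrative routing rule: rejections and low-confidence scores go
    to a human analyst before the outcome is communicated."""
    if review_all_rejections and not decision.approved:
        return True
    return decision.score_confidence < confidence_floor
```

The value of writing the rule down like this is that the routing logic becomes explicit, versioned, and testable, instead of living implicitly in workflow configuration.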

The EBA also points to consumer rights under CCD2:

  • CCD2 Article 18(8): consumers have the right to request human intervention
  • CCD2 Article 18(9): obligation to inform consumers about the right to human assessment and the procedure for contesting the decision

Data governance (Article 10)

Data governance is a second area where the AI Act does not provide explicit derogation, but the EBA shows that CRR, EBA PD/LGD guidelines, and the LOM Guidelines already contain extensive data quality and bias requirements.

Familiar themes from the credit risk world, such as:

  • representativeness of data
  • absence of material bias
  • documentation of data cleansing and feature engineering

now become explicitly relevant for AI Act compliance.

| Data Governance Aspect | Existing Sectoral Requirements |
| --- | --- |
| Data collection & storage | CRR Art. 144(1)(d), 174, 176(1) |
| Data representativeness | CDR assessment methodology Art. 37(2), 42(1)(c); EBA PD/LGD GLs para. 17 |
| Bias detection & prevention | CRR Art. 179(1)(f); EBA PD/LGD GLs para. 31; EBA LOM GLs para. 53(e), 55 |
| Data quality & accuracy | CCD2 Art. 18; EBA LOM GLs para. 60, 87-89 |
| Data security | CDR ICT risk management framework Art. 11; EBA LOM GLs para. 60 |

With the AI Act lens on, supervisors will look more critically at segmentation, proxies for protected characteristics, and how risk models ensure fairness.
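
To make "how risk models ensure fairness" tangible: a minimal sketch of a segment-level check on approval rates using the disparate impact ratio. The 0.8 threshold is the commonly cited four-fifths rule, used here purely as an illustration, not as a regulatory standard:

```python
def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (segment, approved) pairs; returns approval rate per segment."""
    totals: dict[str, list[int]] = {}
    for segment, approved in decisions:
        counts = totals.setdefault(segment, [0, 0])
        counts[0] += 1              # applications in segment
        counts[1] += int(approved)  # approvals in segment
    return {seg: c[1] / c[0] for seg, c in totals.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest approval rate divided by highest; below 0.8 is a common warning signal."""
    return min(rates.values()) / max(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
print(rates, disparate_impact_ratio(rates))  # ratio 0.5: worth investigating
```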

AI literacy (Article 4)

New in the AI Act is the explicit requirement around AI literacy. The EBA links this to existing provisions on knowledge and competence requirements:

  • CRD Article 76(2): management body must allocate adequate resources to manage all material risks
  • CRR Article 189: management body, senior management, and designated committees must have a general understanding of rating system design and operation
  • CDR assessment methodology Articles 16(3) and 17(2): CRCU and internal audit must have adequate resources and experienced and qualified personnel
  • DORA Article 13: learning and evolving, ICT security awareness programs, and digital operational resilience training
  • CCD2 Article 33 and MCD Article 9: knowledge and competence requirements for staff
  • EBA LOM Guidelines paragraphs 53, 66, 79-81: management body must have sufficient understanding of technology-enabled innovation, staff must be adequately trained

Practical consequence: AI training is not just an IT matter. Board members, risk managers, product owners, and customer advisors must be trained to a level appropriate to their role in the lifecycle of a high-risk AI system.

Accuracy, robustness, and cybersecurity (Article 15)

For accuracy, robustness, and cybersecurity, the EBA points to extensive existing requirements:

  • CRR Articles 144, 174, 179, 185: categorization of model changes, soundness and integrity of implementation processes, plausibility of estimates, validation
  • DORA Articles 6, 10, 11: comprehensive ICT risk management framework to address ICT risks quickly and efficiently, mechanisms for prompt detection of anomalous activities, ICT business continuity policy
  • CDR ICT risk management framework Articles 21, 23: prevention of unauthorized access, protection of recording of anomalous activities
  • EBA Guidelines on PD and LGD estimation paragraphs 16, 36-38: identification of deficiencies in risk parameter estimation, methodologies to correct deficiencies

Transparency to deployers (Article 13)

For transparency towards deployers, the EBA points to:

  • CRR Article 171(1)(b): documentation must enable third parties to understand, replicate, and evaluate assignments
  • DORA Article 17(3)(d): plans to provide information to financial entities acting as counterparts
  • EBA LOM Guidelines paragraphs 53(b) and 54(b): management body must have sufficient understanding of the use of technology-enabled innovation, traceability measures, and model override procedures

What does this all mean for banks and payment institutions?

The EBA letter is not a theoretical exercise. It provides a roadmap for how institutions can implement the AI Act without getting lost in duplicate structures.

A number of concrete implications:

1. Use existing frameworks as a foundation

Governance, risk management, model validation, and DORA-ICT processes form the backbone. The task is to weave AI Act requirements into these existing structures, not to set everything up in duplicate.

Practical approach

Start with a gap analysis between existing model governance documentation and AI Act requirements. Identify where existing CRR/CRD processes already comply (for example, for technical documentation) and where additions are needed (for example, for specific AI elements such as explainability or bias detection).
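
Such a gap analysis can start as a structured mapping from AI Act requirements to existing controls, with anything not fully covered surfacing as a to-do. A minimal sketch, in which the requirement labels, control references, and statuses are illustrative assumptions:

```python
# Illustrative gap analysis: AI Act requirement -> (existing controls, status)
gap_analysis = {
    "Art. 9 risk management":       (["CRD Art. 74 framework", "DORA Art. 6"], "covered"),
    "Art. 17 quality management":   (["CRR model governance", "EBA IG GLs"], "covered"),
    "Art. 14 human oversight":      (["CRR Art. 172(3) overrides"], "partial"),
    "Art. 86 right to explanation": ([], "gap"),
    "Dataset bias analysis":        ([], "gap"),
}

# Report everything that is not fully covered by an existing control.
for requirement, (controls, status) in gap_analysis.items():
    if status != "covered":
        basis = ", ".join(controls) or "none"
        print(f"{status.upper():8} {requirement} (existing basis: {basis})")
```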

2. Map AI systems within the existing model inventory

High-risk AI systems for credit scoring should be fully visible in the IRB/credit risk model inventory, with clear links to AI Act classification, documentation, and monitoring.

Concretely, this means (a sketch of such an inventory record follows the list):

  • Expansion of the existing model inventory with AI Act-specific metadata (high-risk classification, intended purpose, AI techniques used)
  • Mapping of existing IRB documentation to AI Act Article 18 requirements
  • Integration of AI Act monitoring requirements into existing model performance monitoring dashboards
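
A minimal sketch of such an extended inventory entry, assuming a simple dataclass representation; the field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class AIActRisk(Enum):
    HIGH = "high"        # e.g. Annex III, point 5(b): credit scoring
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class ModelInventoryEntry:
    """Existing IRB-style inventory entry extended with AI Act metadata."""
    model_id: str
    owner: str
    irb_approved: bool            # existing prudential metadata
    last_validation: str          # ISO date of last validation
    ai_act_risk: AIActRisk        # AI Act extension (assumed fields)
    intended_purpose: str
    ai_techniques: list[str] = field(default_factory=list)
    documentation_refs: list[str] = field(default_factory=list)

entry = ModelInventoryEntry(
    model_id="RETAIL-PD-007",
    owner="Credit Risk Modelling",
    irb_approved=True,
    last_validation="2025-06-30",
    ai_act_risk=AIActRisk.HIGH,
    intended_purpose="Creditworthiness assessment of natural persons",
    ai_techniques=["gradient boosting"],
    documentation_refs=["CRR Art. 175 model file", "2025 validation report"],
)
```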

3. Make human oversight tangible

Define clear roles for who may overrule AI decisions, how this is recorded, and what escalation paths exist in case of systematic problems in scores or outcomes.

The existing CRR Article 172(3) and 174 procedures for overrides must be expanded with (a logging sketch follows the list):

  • Explicit AI override protocols that comply with Article 14 AI Act
  • Documentation of who is authorized to overrule AI decisions at what level
  • Escalation procedures when systematic AI problems are detected (for example, structural bias in credit scoring outputs)
  • Training of personnel in recognizing situations where human intervention is necessary
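
A minimal sketch of what recording an override could look like, assuming an append-only audit record; the structure and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    """Audit record for a human override of an AI credit decision:
    who overruled what, why, and at which authority level."""
    application_id: str
    model_decision: str   # e.g. "reject"
    human_decision: str   # e.g. "approve"
    rationale: str
    analyst_id: str
    approval_level: str   # e.g. "senior_underwriter"
    timestamp: str

def log_override(application_id: str, model_decision: str, human_decision: str,
                 rationale: str, analyst_id: str, approval_level: str) -> OverrideRecord:
    record = OverrideRecord(
        application_id, model_decision, human_decision,
        rationale, analyst_id, approval_level,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice the record would be written to an append-only audit store,
    # keeping overrides traceable for both CRR and AI Act purposes.
    return record
```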

4. Professionalize data governance with an AI lens

Many data processes already exist, but AI makes questions about fairness, indirect discrimination, and proxies more urgent. Data quality becomes not only a prudential requirement, but also a fundamental rights issue.

Intensified supervision expected: Supervisors will look more critically at existing data practices with an AI Act lens. Segmentation models previously considered technical-prudential are now also assessed for fundamental rights impact. Proxies for protected characteristics (such as postal code as a proxy for ethnicity) that were previously accepted require reconsideration.
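
A proxy check can also be made quantitative. The sketch below computes Goodman-Kruskal's lambda: how much better a candidate feature (such as postal code) predicts a protected attribute than always guessing the majority class. Values near 1 signal a strong proxy. This is an illustrative screening heuristic, not a formal fairness test:

```python
from collections import Counter, defaultdict

def proxy_strength(rows: list[tuple[str, str]]) -> float:
    """Goodman-Kruskal lambda for (feature_value, protected_value) pairs:
    0 means the feature adds nothing over the majority guess, 1 means it
    predicts the protected attribute perfectly."""
    majority = Counter(p for _, p in rows).most_common(1)[0][1]
    by_feature: defaultdict[str, Counter] = defaultdict(Counter)
    for feature_value, protected_value in rows:
        by_feature[feature_value][protected_value] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_feature.values())
    n = len(rows)
    return (correct - majority) / (n - majority) if majority < n else 0.0

rows = [("1011", "A"), ("1011", "A"), ("9901", "B"), ("9901", "B"), ("1011", "B")]
print(proxy_strength(rows))  # 0.5: the feature recovers half the remaining uncertainty
```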

5. Invest in AI literacy across the board

Compliance, risk, IT, business, and the board all have different information needs, but no one can afford to continue viewing AI as a "black box". Training must align with the role in the value chain of the AI system.

Concrete training needs per role (a simple mapping sketch follows the list):

  • Management body: strategic understanding of AI risks, AI Act obligations, and governance structures (in line with CRR Article 189)
  • Risk management: in-depth technical understanding of AI models, validation techniques, and AI-specific risks
  • Compliance: legal interpretation of AI Act requirements and mapping to existing frameworks
  • IT/Data Science: practical implementation of AI Act requirements in development processes
  • Front-office staff: basic knowledge of how AI systems are used and when to escalate
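
One way to operationalize role-appropriate training is a simple requirements mapping that can be checked against completed modules. The role and module names below are illustrative assumptions:

```python
# Illustrative role-based AI literacy requirements (module names assumed)
TRAINING_MATRIX: dict[str, list[str]] = {
    "management_body": ["ai_risk_strategy", "ai_act_obligations"],
    "risk_management": ["model_internals", "validation_techniques", "ai_risks"],
    "compliance":      ["ai_act_legal_mapping", "sectoral_overlap"],
    "data_science":    ["ai_act_dev_requirements", "bias_testing"],
    "front_office":    ["ai_basics", "escalation_procedures"],
}

def missing_training(role: str, completed: set[str]) -> list[str]:
    """Modules still to complete for a role; unknown roles get none."""
    return [m for m in TRAINING_MATRIX.get(role, []) if m not in completed]

print(missing_training("front_office", {"ai_basics"}))  # ['escalation_procedures']
```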

Towards integrated AI governance for the financial sector

The most important message from the EBA letter is that the AI Act in the financial sector is not about building a separate AI compliance island. The AI Act strengthens and connects existing rules:

  • prudential requirements from CRR/CRD
  • operational resilience via DORA
  • consumer protection via CCD2 and MCD
  • payment security and incident reporting under PSD2
  • and the detailed EBA guidelines for governance, model use, and lending

The EBA's strategic message

The EBA is of the opinion that the list of provisions included in the Annex provides a comprehensive overview of how EU financial services law already addresses relevant AI Act requirements. This will be a useful instrument to inform the Guidelines on the interplay between AI Act and EU sectoral legislation, facilitate management of potential overlaps and complementarities, and ultimately ensure a smooth implementation of the AI Act in the EU banking and payment sector.

For institutions that have seriously invested in model governance, data quality, and ICT risk management in recent years, this is good news. The foundation is there. The challenge now is to explicitly build AI into those existing foundations, with more attention to explainability, fundamental rights, and AI literacy.

For lawyers, compliance officers, and AI governance teams, this means that the work in the coming years will mainly be about:

  • translating AI Act provisions into existing internal policies and control frameworks
  • establishing coherence between prudential, data protection, and AI Act requirements
  • and guiding the organization in the responsible deployment of AI in credit processes

What's next?

The EBA letter is addressed to the European Commission under Article 96(1)(e) AI Act, which requires the Commission to issue Guidelines on the interplay between the AI Act and EU sectoral legislation, including EU banking and payments sector legislation.

The EBA remains committed to supporting the Commission in the implementation of the AI Act, including via the AI Board subgroup on financial services or other relevant sub-structures.

For financial institutions, this means:

1. Short term (Q1-Q2 2026): Use the EBA mapping as a guide for your own gap analysis and documentation of how existing frameworks cover AI Act requirements

2. Medium term (2026): Anticipate the Commission's Guidelines by already harmonizing existing sectoral compliance with AI Act requirements

3. Long term (2027+): Expect further specification and possible adjustments in EBA Guidelines to explicitly integrate AI Act requirements

Strategic advantage: Institutions that now explicitly map their existing model governance, DORA frameworks, and EBA Guideline compliance to AI Act requirements are ahead of the competition. They can demonstrate to supervisors that they are not building a new compliance universe, but extending proven existing practices with AI-specific elements.

Conclusion: the EBA has drawn the map

The EBA has drawn the first layer of the map. It is now up to banks and payment institutions to use that map as a compass for their own AI governance structure.

The core message: no panic, no parallel structures, but targeted integration. The AI Act is an additional layer on a solid foundation of prudential regulation, operational resilience, and consumer protection. For institutions that have done their homework on CRR/CRD, DORA, and EBA Guidelines, the step to AI Act compliance is not a leap in the dark, but a logical next step.

It does require explicit attention to AI-specific elements: human control must not only be technically possible but also procedurally secured, data governance must be sound not only prudentially but also from a fundamental rights perspective, and AI literacy must run through the entire organization, from board to front office.

For lawyers, compliance officers, and AI governance teams, the EBA letter is a practical roadmap. It shows where existing regulation already provides for AI Act compliance, where targeted additions are needed, and where the focus should be in the coming years.

The EBA's message is clear: the AI Act is not a new universe, but an additional layer that fits seamlessly on what already exists. For institutions that understand this and act accordingly, AI Act compliance becomes not a compliance burden but a strengthening of existing governance excellence.



Need help with AI Act implementation in the financial sector?

Want to know how your existing CRR/CRD, DORA, and EBA Guideline compliance relates to AI Act requirements? Or do you have questions about setting up an integrated AI governance framework? Contact us for a no-obligation conversation about how you can practically apply the EBA mapping within your organization.