A compliance officer at a major European bank recently told me: "We have AI models in production, and nobody fully understands how they reach their decisions. And by August, we need to prove we have them under control."
He's not alone. Recent research by EY and MIT shows that over 70% of banks are now using agentic AI, but governance frameworks are structurally lagging behind adoption.
The clock is ticking: By August 2, 2026, high-risk AI systems in the financial sector must fully comply with the EU AI Act. That's six months away. The time for pilot projects and "wait and see" is over.
The State of AI in Banking: Adoption vs. Governance
The numbers are impressive and concerning at the same time:
| Metric | Percentage | Implication |
|---|---|---|
| Banks using agentic AI | 70%+ | AI is mainstream, no longer experimental |
| Fully deployed | 16% | Production systems with real impact |
| In pilot | 52% | Scale-up imminent |
| With robust governance framework | ??? | Not measured, and that says enough |
The problem lies in that last row. We measure adoption precisely, but governance remains vague, even as supervisors become increasingly explicit about their expectations.
What Supervisors Expect in 2026
The European Banking Authority (EBA), ECB, and national supervisors have sharpened their priorities for 2026. Three themes stand out:
1. Human-in-the-Loop Is No Longer Optional
In 2025, "human oversight" shifted from nice-to-have to regulatory expectation. Organizations must demonstrate how AI-generated outputs are validated and how human experts are involved in decisions.
This especially applies to:
- Credit decisions
- Fraud detection
- Customer segmentation
- Risk assessments
2. Explainability and Auditability
Supervisors expect banks to explain:
- How an AI model reached a decision
- What data was used
- What biases may play a role
- How the model was tested and validated
The EBA emphasizes that existing CRR/CRD requirements already provide a "comprehensive and technology-neutral governance and risk management framework", but that framework must now be explicitly applied to AI.
3. Third-Party AI Risk Management
Perhaps the biggest blind spot: AI that enters through vendors, cloud services, and software integrations. EY explicitly warns: "Update existing AI policies to cover integration across software and service supply chains."
Shadow AI: A growing problem is unofficial AI use by employees: ChatGPT for customer communications, Copilot for code, AI tools for analysis. These uses fall outside governance and create invisible risks.
The Overlap Between AI Act and Financial Legislation
One of the biggest headaches for compliance teams: how do EU AI Act requirements relate to existing financial regulation?
The European Parliament explicitly raised concerns about this overlap in November 2025. Taylor Wessing summarizes: "The lack of sufficient guidance on interpreting these overlaps and interactions introduces undue complexity, compliance burdens and legal uncertainty."
| Subject | AI Act | Existing Regulation | Status |
|---|---|---|---|
| Governance & Risk Management | Article 9 | CRR/CRD framework | Synergy possible |
| Cybersecurity | Article 15 | DORA | Derogation in AI Act |
| Documentation | Article 11 | MiFID II, IDD | Overlap unclear |
| Bias & Fairness | Article 10 | Consumer Duty (UK), fair lending | Guidance needed |
The Commission must publish guidelines by February 2, 2026 on the practical implementation of Article 6, including how it relates to sector-specific regulation.
Five Concrete Actions for Q1 2026
Based on the latest insights from EY, EBA, and compliance experts, these are the priorities for the coming months:
1. Inventory All AI Applications
Not just official projects, but also:
- Embedded AI in software (Microsoft 365 Copilot, Salesforce Einstein)
- AI at vendors and outsourcing partners
- "Shadow AI" by employees
2. Classify by Risk
Map each application against AI Act risk categories:
- High risk: Credit scoring, fraud detection, HR decisions
- Limited risk: Chatbots, content generation
- Minimal risk: Internal efficiency tools
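The mapping above can be sketched as a simple lookup. The tier names follow the AI Act's categories, but the example applications and the helper function are illustrative assumptions, not an official taxonomy:

```python
# Illustrative sketch: map AI applications to EU AI Act risk tiers.
# The tier names follow the Act; the application lists are assumptions.
RISK_TIERS = {
    "high": {"credit scoring", "fraud detection", "hr decisions"},
    "limited": {"chatbot", "content generation"},
    "minimal": {"meeting summarizer", "internal search"},
}

def classify(application: str) -> str:
    """Return the risk tier for a known application, else flag for review."""
    app = application.lower()
    for tier, apps in RISK_TIERS.items():
        if app in apps:
            return tier
    return "unclassified: needs manual review"

print(classify("Credit scoring"))    # high
print(classify("Chatbot"))           # limited
print(classify("Vendor ML plugin"))  # unclassified: needs manual review
```

The useful property of even this trivial version is the fall-through: anything not explicitly classified is surfaced for manual review rather than silently treated as low risk.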
3. Implement Human-in-the-Loop Controls
For each high-risk application:
- Who validates outputs?
- How are deviations escalated?
- Which decisions may be fully automated?
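The three questions above can be encoded as a routing rule: high-risk outputs, or outputs the model itself is unsure about, go to a named human reviewer instead of being auto-applied. This is a minimal sketch; the confidence threshold, the tier names, and the routing labels are assumptions for illustration, not a prescribed control design:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    risk_tier: str     # "high", "limited", or "minimal"
    confidence: float  # model's own confidence score, 0..1

# Assumed policy parameter: the floor below which a human must look.
CONFIDENCE_FLOOR = 0.85

def route(decision: Decision) -> str:
    """Decide whether a model output may be auto-applied or must be escalated."""
    if decision.risk_tier == "high":
        return "human review required"           # never fully automated
    if decision.confidence < CONFIDENCE_FLOOR:
        return "escalate to second-line review"  # model is unsure
    return "auto-approve with audit log entry"

print(route(Decision("high", 0.99)))     # human review required
print(route(Decision("limited", 0.60)))  # escalate to second-line review
print(route(Decision("minimal", 0.95)))  # auto-approve with audit log entry
```

Note that the high-risk branch fires regardless of confidence: for high-risk applications, human oversight is a structural requirement, not a fallback for uncertain predictions.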
4. Document Model Governance
Create or update:
- Model inventory with ownership
- Validation and test protocols
- Bias monitoring procedures
- Incident response plans
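A model inventory entry can live in a simple structured record that covers ownership, validation status, and bias checks in one place. The field names below are one plausible schema, not a regulatory template, and the example model and contact address are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a model inventory. Field names are an illustrative schema."""
    name: str
    owner: str                  # accountable business owner
    risk_tier: str              # AI Act classification
    last_validated: date
    bias_checks: list[str] = field(default_factory=list)
    incident_contact: str = "model-risk@example.com"  # hypothetical address

inventory = [
    ModelRecord(
        name="retail-credit-scoring-v3",
        owner="Head of Retail Credit Risk",
        risk_tier="high",
        last_validated=date(2025, 11, 30),
        bias_checks=["demographic parity", "disparate impact ratio"],
    ),
]

# Flag records whose validation is older than the review cycle (assumed: 1 year).
overdue = [m.name for m in inventory
           if (date(2026, 2, 1) - m.last_validated).days > 365]
print(overdue)
```

Keeping the inventory machine-readable means the "overdue validation" report above is a one-liner, instead of a quarterly spreadsheet exercise.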
5. Train Your Organization
AI literacy is no longer a luxury; it's an obligation under Article 4 of the AI Act. Ensure that:
- Board members understand AI risks
- Compliance teams can assess AI applications against the regulatory requirements
- End users know what's allowed and what isn't
The Business Case for Proactive Governance
It's tempting to see AI governance as a cost center and a brake on delivery. But practice shows otherwise.
Organizations that invest early in governance report:
- Faster time-to-market for new AI applications (no last-minute compliance scramble)
- Lower risk costs through early detection of bias and errors
- Higher adoption because employees trust the tools
- Better supervisory relationships through proactive communication
Conclusion: August 2026 Is Coming Faster Than You Think
The financial sector is at a tipping point. AI is no longer experimental; it's operational, scalable, and increasingly autonomous. At the same time, supervisory expectations are becoming more concrete and deadlines harder.
The question is not whether you should tackle AI governance, but whether you do it now or later, under time pressure.
Action: Start this week with an inventory of all AI applications in your organization. Not next month. This week. Everything else follows from there.
Want to prepare your organization for the EU AI Act deadline? Embed AI offers training and consulting for financial institutions implementing AI governance.