Introduction
AI is the shiny new intern of the financial world. Works fast, never complains, but sometimes makes decisions no human can explain.
In the UK, around 75% of financial firms already use AI, and this number is growing fast [Bank of England, 2024]. That’s either impressive or terrifying, because alongside AI’s promise come risks that could undermine the safety of firms, the fair treatment of consumers, and even financial stability.
That’s where AI governance comes in: the systems, frameworks and controls that ensure AI is used responsibly and transparently.
This blog delves into AI governance, focusing on UK Building Societies and Specialist Lenders.
Let’s break this down into four manageable disasters. I mean… pillars.
Regulatory Risks: The “Don’t Get Fined” Part
The UK takes a principles-based, collaborative approach to AI, with no single AI law (yet).
However, existing regulations already apply. For instance, if a bank uses an AI model for credit scoring, it must ensure the model is valid and monitored (Model Risk Management), that it doesn’t produce biased outcomes (Equality Act or Consumer Duty), and that outcomes can be explained or subjected to human review (GDPR).
This contrasts with the EU AI Act, which introduces strict rules for “high-risk” AI, including mandatory risk assessments, bias testing, audit trails, and human oversight.
Survival tips:
- Map AI to regulation. Don’t wait for compliance to panic – involve them early.
- Nominate someone. Appoint an “AI Responsible Person” and consider an ethics or risk committee that includes multiple business functions for oversight.
- Be honest. Tell customers when a robot makes a decision about them.
- Update everything. Ensure contracts, policies and contingency plans cover AI-specific issues.
"AI often sits outside existing regulatory frameworks, raising complex questions around data privacy, accountability, and acceptable use. Traditional controls may fall short, especially in customer-facing applications where the risk of unintended outputs is highest. A fast-follower strategy, supported by cross-functional governance, such as an executive AI oversight committee, can help organisations navigate evolving standards while ensuring decisions aren't left to individuals operating in isolation."
Shawbrook
Model Risks: The “It Did What?!” Problem
Model risk arises when AI systems do the wrong thing, or the right thing for the wrong reasons.
Unlike traditional software, many AI models are complex and hard to interpret, making errors harder to detect and explain.
For example, in 2019, Apple’s AI-driven credit card allegedly gave men higher limits than women. No one could explain why. Not even Apple. It became a BBC headline. (Which is never where you want to be if you’re in finance.) [BBC, 2019].
Survival tips:
- Test it like you mean it. Out-of-sample data. Bias checks. Edge cases. Test it, then get others to test it again (a quick sketch of what this can look like follows this list).
- Keep humans in the loop. If AI says no to a loan, let a human overrule it (especially when it’s wrong). It not only reduces errors but trains the model.
- Monitor drift. Models age like milk, not wine. Retrain models with new data over time (the sketch below includes a simple drift check).
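For the curious, here is a minimal Python sketch of two of the checks above: an approval-rate comparison across groups on out-of-sample decisions, and a Population Stability Index (PSI) drift check. The thresholds, group labels and made-up data are illustrative assumptions, not a ready-made validation framework.

```python
import numpy as np

def approval_rate_gap(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Compare approval rates across groups on held-out (out-of-sample) decisions."""
    rates = {g: float(decisions[group == g].mean()) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "review": gap > 0.05}  # 5% tolerance is illustrative

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a feature's training distribution and its current live distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid dividing by or logging zero
    return float(np.sum((a - e) * np.log(a / e)))

# Example run on made-up data: review any material approval-rate gap with a human,
# and treat PSI above roughly 0.2 (a common rule of thumb) as a retraining trigger.
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, 1000)   # 1 = approved, on held-out applications
group = rng.choice(["A", "B"], 1000)   # stand-in for a protected characteristic
print(approval_rate_gap(decisions, group))
print(population_stability_index(rng.normal(0, 1, 5000), rng.normal(0.3, 1, 5000)))
```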
"AI introduces new dimensions of model risk, particularly when leveraging third-party models that organisations don’t fully control or understand. Mitigation requires a 'human-in-the-loop' approach, alongside trust layers such as retrieval-augmented generation (RAG) to constrain model behaviour and reduce hallucinations. Building explainability into AI reasoning is essential, not only to foster trust but also to meet growing regulatory and operational expectations."
Shawbrook
Operational Risks: The “Oops, the Bank’s Down” Scenario
AI systems, like any technology, can fail.
The risk is that when they do, they disrupt core banking services. Where uptime and trust are critical, such failures can quickly escalate into customer harm or regulatory breaches.
These failures are often harder to diagnose than those in traditional systems, due to AI’s complexity.
The Bank of England also warns that concentrated reliance on a few platforms could amplify shocks across the sector.
Survival tips:
- Resilience planning. Create “AI fails” playbooks. Think manual overrides, backup spreadsheets, therapy animals.
- Vendor audits. Don’t just trust their PowerPoint. Check SLAs, security, backups. Then check again.
- Join the herd. Participate in industry-wide scenario planning. Share your horror stories.
"The use of emerging AI providers introduces elevated operational risk, given the relative immaturity of enterprise-grade capabilities such as service reliability, scalability, and incident response. To manage this, organisations should establish robust vendor oversight, implement clear operational SLAs, and design flexible architectures that minimise a deep dependency on any single provider. Where possible, integrate AI assessments into your existing third party onboarding and review assessments and ensure you have a mechanism to capture AI “Add-ons” that your vendors may offer post onboarding."
Shawbrook
Security Risks: The “Hack Me, I Dare You” Part
AI introduces new attack surfaces — from data leaks to adversarial manipulation.
As AI systems increasingly handle sensitive financial data and power critical decisions, they become attractive targets for cybercriminals.
Common nightmares include:
- Data poisoning. Where bad actors mess with training data.
- Model inversion. Where someone reverse-engineers personal data from your model.
- Data exposure. Where employees post sensitive data into ChatGPT.
Survival tips:
- Limit access. Encrypt data. Anonymise training sets. Assume someone’s always watching.
- Train staff. Seriously. Half of them are already using AI. The other half are lying.
- Test for adversarial attacks. Probe AI models for vulnerabilities to adversarial inputs and build robustness testing into model development, similar to penetration testing for applications (a quick sketch follows this list).
- Demand more from vendors. SOC2, ISO 27001, and other protections for your data.
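To make the robustness tip concrete, here is a minimal Python sketch that nudges inputs by small amounts and measures how often the decision flips. It is a crude robustness probe rather than a proper adversarial testing suite; the toy model, perturbation size and data are illustrative assumptions.

```python
import numpy as np

def toy_credit_model(X: np.ndarray) -> np.ndarray:
    """Stand-in for the model under test: returns 1 (approve) or 0 (decline)."""
    score = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
    return (score > 0).astype(int)

def decision_flip_rate(predict, X: np.ndarray, epsilon: float = 0.05, trials: int = 20) -> float:
    """Share of cases whose decision changes under small random input perturbations."""
    baseline = predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    rng = np.random.default_rng(42)
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        flipped |= predict(X + noise) != baseline
    return float(flipped.mean())

# Example: applicants whose outcome changes under a tiny nudge are the ones to
# review by hand, and a high flip rate is a signal the model is fragile.
X = np.random.default_rng(1).normal(0, 1, size=(500, 3))
print(f"decision flip rate: {decision_flip_rate(toy_credit_model, X):.1%}")
```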
"Without clear AI usage policies and visibility, organisations risk exposure to shadow AI, unsanctioned or unmanaged use of AI tools that can lead to data leakage and compliance breaches. Mitigating this requires proactive communication, agreed guardrails, and the deployment of approved tooling to enable safe, governed adoption across the enterprise. AI brings new risks but it can also help us manage existing ones more effectively, as with many things in risk management, culture plays a massive role in your organisations’ risk profile, so ensure colleagues are well informed and supported on their AI journey."
Shawbrook
Conclusion: The Future Belongs to the Governed
AI in banking holds immense promise. However, with great power comes great responsibility.
If you want to ride the AI wave without wiping out:
- Start with compliance, or end with complications.
- Bake in human oversight, not human excuses.
- Make governance a feature, not a legal requirement.
The good news is that AI governance is maturing. Banks can draw on emerging best practices (many referenced above) and even industry frameworks. International standards and frameworks, such as the EU AI Act, focus on similar core concepts.
The bottom line?
AI won’t destroy you. But ignoring its risks might.
Co-Authored by Kyle Masterson & Sam Bridges-Sparkes, Head of BI Analytics & Strategy at Shawbrook Bank Limited