As we steam ahead into 2024, banks experimenting with generative AI also need to be aware of how regulators will scrutinise their use of it. Of specific concern is the EU's AI Act, whose impact on banks' AI ambitions will become much clearer over the coming months.
The EU AI Act, which will become enforceable during 2025, is the world's first comprehensive legal framework on AI. It aims to foster trustworthy AI in Europe and beyond by ensuring AI systems respect fundamental rights, safety, and ethical principles, and by specifically targeting the potential risks of powerful AI models.
The new law has clear relevance to the financial services sector. At the heart of the new regulations is an assessment of how risky AI use cases could be. Notably, the European Commission cites AI technology used in credit checks that could deny a customer a loan as an example of a high-risk AI system. Similarly, AI used to direct pricing and risk assessment in life and health insurance would be subject to the new law.
There remains work to be done by standardisation organisations, and then by national authorities, to apply any new AI governance and risk management requirements and standards to how banks and other institutions use AI.
On generative AI and large language models like OpenAI's GPT-4, the new act attempts to address anxieties about how these powerful but very new forms of AI could affect people's lives. A new AI Office is being established, which will be responsible for enforcing and overseeing the new rules for general-purpose AI systems used directly or indirectly by financial services companies. It is important to remember that financial institutions remain responsible for the tools and services they outsource, including AI-powered decision-making.
Of course, in some financial services markets, such as the UK, the new EU Act will not apply at all. How these other jurisdictions develop regulatory regimes to police AI adoption in the financial services sector will vary. In the UK, it is very likely that the regulator will lean on the new Consumer Duty rules and apply their focus on fairness and transparency to how AI is put into practice.
In fact, how existing regulations will accommodate the rise in AI adoption by banks and others is an important point. AI has been widely used by financial services organisations for many years now, including in credit processes, claims management, anti-money laundering and fraud detection. This use of AI has not gone unnoticed by regulators, so as AI evolves there will be a case for asking whether existing rules are sufficient, or whether they need enhancing rather than replacing.
Some might argue that existing and new rules like the EU AI Act stifle innovation and the sector should be bolder in its adoption of AI. Countries like the UK may also try to diverge from the EU AI Act for national competitive advantage.
There will be some political gamesmanship in how AI regulations are framed publicly, but it is in everyone's interests that regulations are internationally consistent and aligned, to avoid confusion and onerous duplicate checks. And whether it is liked or not in certain places, it is likely that markets outside the EU will follow, if not explicitly copy, the EU AI Act, in another instance of the Brussels Effect shaping regulatory practice, as happened with GDPR.
Banks, which have been using AI for many years to automate everyday banking workflows, are not taking risks with the adoption of more powerful AI technologies. They are exhibiting caution, especially in how generative AI uses sensitive data or might become involved in direct interactions with customers. The focus will be, and should always be, on the outcome for both the customer and the bank. Getting the right outcome, and optimising rather than undermining a business process, is the real aim. New rules like the EU AI Act will be welcomed by the sector because they set clearer guidelines and guardrails on what can and cannot be done with this transformative technology.