The Financial Conduct Authority (FCA) has confirmed that it intends to regulate the use of artificial intelligence (AI) in the context of financial services where appropriate. The speech itself did not offer specifics, and FCA Chief Executive Nikhil Rathi emphasised that the regulator would only seek to intervene with new guidance or rules should they become necessary in the future.
The announcement, delivered in a speech to The Economist on 12 July, follows the publication of the UK Government's White Paper "A pro-innovation approach to AI Regulation" issued on 29 March 2023. As we highlighted in our commentary, the Government's proposed approach intends to achieve a "clear and unified approach to regulation… making it clear that AI technologies are subject to cross-cutting principles-based regulation."
Unlike the European Union, which is legislating via the AI Act, the UK Government stated it does not intend to place principles on a statutory footing, instead opting for a “context-specific” approach based on assessing the outcomes that AI will generate for specific applications. The White Paper makes clear that principles-based regulation would be implemented with reference to the existing regulatory framework, including that of the FCA.
In line with that, the speech placed significance on the existing regulatory framework being appropriate to address many of the current issues arising from AI, including the Senior Managers & Certification Regime, which makes clear that the senior managers are responsible for their firm's activities. Similarly, the Consumer Duty was underlined as making clear that securing good customer outcomes must be a design feature of products and services.
Where does the FCA see benefits from AI?
The FCA itself has been involved in various worldwide discussions on the regulation and use of AI in the financial services sector. Mr Rathi highlighted the influential role of the FCA both in the UK and globally in co-ordinating an approach to the use of AI.
The FCA is currently using AI methods to monitor portfolios and identify risky behaviours, and the speech identified several benefits of the use of AI within financial services, including:
- Improving financial models and reducing crime, meaning that fraud and money laundering can be confronted quickly and at a greater scale;
- Delivery of better, more accurate information to 'everyday' investors, not just the wealthiest investors. One particular example highlighted was the use of AI tools to help with debt management;
- Hyper-personalisation of products and services in the insurance market.
Notes of concern?
Despite the progress that has been made to date, Mr Rathi emphasised that significant questions remain about where accountability for the use of AI would ultimately lie. The FCA does not regulate the technology itself, but the effect of the use of AI technology on financial services does fall within its regulatory remit.
The risk of cyber fraud, cyber attacks and identity fraud caused by AI also means that investment in fraud prevention and operational resilience will also have to increase.
The speech stressed that further regulation is likely to be necessary in the future, yet there were no details on what that regulation might entail, which is perhaps unsurprising given the nascent state of the AI industry. As we emphasised in our previous assessment of the Government's White Paper, the regulatory approach to be introduced in the EU by the AI Act may prompt a reconsideration of whether a more stringent approach is required in the UK. One example highlighted in Mr Rathi's speech was recent suggestions in Parliament that a bespoke SMCR-type regime should be put in place for senior individuals managing AI systems, rather than relying on the current system. This would reflect the likelihood that those individuals are not necessarily experienced senior managers, but have been elevated to that level by the widening use of AI.
In the meantime, we will keep you informed of any regulatory developments in this evolving space.