Regulators playing catch up with AI governance


Artificial intelligence (AI) is the simulation of human intelligence processes by machines. It is a burgeoning technology, seen as either the future or the end of humanity as we know it, depending on who you speak to. AI is driving innovation in a great number of fields, from speech recognition and machine vision to bespoke ‘expert systems’ that are revolutionising problem-solving across almost every sector of industry. Its application is so widespread that governments and regulators globally are now scrambling to establish a suitable way to govern its use and, critically, to prevent or mitigate its capacity to cause harm.

How does AI impact the financial services sector?

AI is already in extensive use in financial services, with many banks and firms applying it for purposes including:

  • Fraud prevention
  • Risk management
  • Process automation
  • Customer experience
  • Regulatory compliance
  • Credit decisions
  • Anti-money laundering
  • Predictive analytics

The list of its potential uses is almost endless and so it requires focus and attention that regulators have previously not been willing to commit to. Up until now, UK financial regulators have adopted a ‘technology-neutral’ approach, meaning that they neither prohibit nor authorise specific technologies, and have thus far only published guidance on how to mitigate risks that apply to certain uses of technology. With so many applications of AI already in use across the industry, many (including the regulators themselves) are now questioning whether a ‘technology-neutral’ approach is still appropriate.


April 2024 saw the Financial Conduct Authority (FCA), Bank of England (BoE) and the Prudential Regulation Authority (PRA) publish their ‘strategic approaches’ to regulating AI in the financial services sector. Although this is not a formal set of rules for using AI within financial services firms, it is likely the precursor to a more detailed set of guidelines that the regulators have admitted will need to be implemented in the not-too-distant future. They have acknowledged that AI is developing fast, and that they will need to keep pace with its growing complexity as they try to regulate it with “pro-innovation” and “pro-safety” approaches.

The financial regulators’ proposals follow a consultation White Paper published by HM Treasury in spring 2023, which set out a framework for AI regulation across governmental departments. It highlighted five key principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The latest updates from the FCA, BoE and PRA indicate that they agree with this principles-based governance and believe that the stated principles are already broadly aligned with the existing regulatory framework. The FCA Principles for Businesses, Consumer Duty obligations, Sourcebooks and Rulebooks are all geared towards creating an environment where these principles are either being met or strived for. But the FCA has noted that, due to the speed, scale and complexity of AI, it may need to adapt its regulatory approach going forward and put more resources into the testing, validation and understanding of AI models.

The PRA has also highlighted the potential impact on the UK’s financial stability should AI become more entrenched in the financial services industry, with far greater risk coming from potential cyber attacks and loss of accountability. It has stated its intention to investigate this in far more depth over the course of 2024, and will submit its findings to the Financial Policy Committee (FPC) at the Bank of England.

The FCA and PRA have previously released their own discussion paper, asking participants in financial services whether existing legal frameworks and guidelines are sufficiently robust to govern the risk and harm associated with AI in their industry. They also asked for suggested changes should AI be further implemented into UK financial markets.

One of the key messages from respondents to the discussion paper was the need to prioritise consumer protection. They outlined the risk of bias, lack of explainability and accountability, as well as the potential exploitation of vulnerable consumers as being some of the biggest risks associated with the wider adoption of AI, and called on the regulators to address these concerns in future regulatory updates.

What should financial firms do next?

When firms are deciding whether to implement more AI into their operations, they need to thoroughly assess the impact of the choices they make. Any AI models need to be tested, verified and understood before they are used to influence the financial decisions of consumers. Existing regulations mandate that firms consider the outcome for all potential clients in their decision-making, and introducing a new AI model into a firm’s processes is no different.

The everyday functionality of systems such as identity verification is relatively low impact, so these do not need to be as thoroughly evaluated. But when looking at more complex analytical modelling, firms need to be sure that these models can be utilised fairly and without bias, especially when it comes to automated decision-making. It may be that, in future, UK financial regulation will require firms to regularly report on consumer outcomes that have arisen from AI modelling and to internally assess the systems’ efficacy.

The ability to evidence any AI tool’s compliance with the governing principles will be key, and if firms are not able to clearly demonstrate their model’s adherence, it could be argued that the risk of its continued use outweighs any potential benefits.

Staff training and education will also be paramount to the successful deployment of any AI tool within a financial firm, with emphasis placed on the need for employees to understand how AI impacts their role as well as its limitations. If employees are led to believe that any AI model is beyond reproach, it may be that they are not as willing to investigate or address concerns from consumers who may have been negatively impacted by the system’s application. Empowering staff to take an outcome-based approach to their assessment of the AI tool’s impact will enable firms to adhere to the final principle of ‘Contestability and redress’.

In summary, the world is changing at a rapid pace, and whether you believe AI is the most significant force for improvement or a powerful weapon for potential harm, its wider adoption is almost inevitable. Understanding it, deploying it and being mindful of its capacity to negatively impact consumers will be the challenge for all financial services firms going forward. The regulators are aware that its development has far outpaced their guidance, and they are now playing catch-up to ensure their frameworks remain relevant.


About Author


Philip Masey from Wizard Learning. Having worked in financial services for the past 12 years, Philip is an experienced Mortgage, Protection and Equity Release Advisor and one of the lead authors at Wizard Learning. Alongside their online courses, Wizard Learning offer a monthly CPD package and, now with an expanded team, aim to provide a well-rounded, informative monthly course to help you develop your knowledge in an easy-to-understand format.
