Don’t jump in with FOMO – think about what you really need from AI to find the right solution for Quality Assurance


There is a growing perception that, because Generative AI has gained so much public profile, every single function in every industry needs to move immediately to some new, entirely unrecognisable way of working. But the reality is that, as much as we’re evaluating the technology itself, we also need to evaluate how people, processes and data can or can’t work with it.

The financial advice sector is no stranger to long-standing, inherited challenges born of a sluggish reaction to rapid technological advances, ever-evolving population demographics and volatile economic conditions. As a result, the industry continues to suffer from systemic productivity and efficiency issues.

This is reinforced by the FCA’s dogged determination to make Consumer Duty its flagship regulation, forcing companies to up their risk and compliance game. This has been highlighted by high-profile news of provisions being set aside for potential client refunds, fines being issued and a number of Dear CEO letters from the FCA. Many small advisory firms that sell up or leave the market cite the cost of compliance and the ever-increasing regulatory burden as a key factor. This is forcing companies to measure, record, analyse and report on much more data than before. The only economically feasible way this can happen is if companies use technology.


Quality Assurance (QA) processes are typically chronically inefficient. Although call recording is mandatory in many other areas of the financial services industry, it isn’t actually a legal requirement for financial advice. Where calls are recorded and quality checked, only around 1–2% are ever reviewed, and those are randomly sampled, so they may or may not contain an issue. With AI’s ability to listen to and monitor calls, however, companies can not only put an end to random sampling and manual review but also scale their compliance function economically.

Every customer call can be run through an AI platform, which will automatically assess and flag any risks such as customer vulnerability, complaints, expressions of dissatisfaction or even agent conduct issues. The platform can complete a ‘pre-execution’ check for advisers for 100% of their clients. Any high-risk cases can be triaged to a QA assessor for human review, allowing companies to address issues before they escalate. Importantly, it also gives companies a proper record of customer communications and issues, and a way to pull reports for the regulator to prove the efficacy of their frameworks and Consumer Duty initiatives.
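To make the triage step concrete, here is a minimal sketch of the kind of rule such a platform might apply. The risk categories, the confidence threshold and the `CallReview` structure are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative sketch only: route every scored call, escalating high-risk
# ones to a human QA assessor. Categories and threshold are assumptions.
from dataclasses import dataclass, field

RISK_FLAGS = {"vulnerability", "complaint", "dissatisfaction", "conduct"}
HIGH_RISK_THRESHOLD = 0.7  # assumed cut-off for triage to human review


@dataclass
class CallReview:
    call_id: str
    scores: dict                      # flag -> model confidence, e.g. {"complaint": 0.9}
    flags: list = field(default_factory=list)
    needs_human_review: bool = False


def triage(call_id: str, scores: dict) -> CallReview:
    """Assess 100% of calls; only high-confidence risks go to a QA assessor."""
    review = CallReview(call_id, scores)
    review.flags = [
        f for f, confidence in scores.items()
        if f in RISK_FLAGS and confidence >= HIGH_RISK_THRESHOLD
    ]
    review.needs_human_review = bool(review.flags)
    return review
```

The key point the sketch illustrates is that the machine reviews every call, while human effort is spent only where the model’s confidence in a risk flag crosses the threshold.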

In the next few years, it is predicted that Generative AI will enable compliance to be hundreds of times faster and cheaper, and beyond that enhance the entire workflow of the organisation. But, and this is a big one, careful consideration is needed before any money is spent.

In a 2023 survey, Risk Officers prioritised the following in terms of spend: staff training (89%); technology to monitor customer data (77%); and the ability to automate the quality assurance process (70%). Investment in technology was considered far more important than hiring more staff across quality assurance and customer service (41%).

A clear understanding of the needs of the business, and particularly of the QA function, must be agreed, and then the right questions must be asked before the investment is undertaken. That is often where the challenge lies: businesses are jumping in without considering some fundamentals.

There are many different applications out there, but a vertical solution catered to financial services – one built on a specialised Large Language Model that incorporates the right tone and topics to provide the required level of detail in a regulated setting – can make the difference between finding the ‘right’ tool and an ‘average’ one.

The questions to be considered are also important, and while this list is not exhaustive, it contains some fundamental points to address before the leap into an AI technology investment is made:

  • What are we trying to achieve – compliance effectiveness, productivity, accuracy, removal of administrative burden, insight, greater inclusion of customer voice?
  • Do we build our own product offering on top of one of these large generic foundational models?
  • Do we partner with an expert to build completely bespoke functionality, or do we go down the route of off-the-shelf products and services?
  • Is this solution geared towards our sector?
  • Do we go with a service from a larger tech-stack provider?
  • What does good look like for using AI functionality?
  • What investment do we have to make and where will the cost savings be?
  • How much training will our staff require to use it?

Positively, there is very real traction across the market, with many more specialised solutions being developed – models trained for specific purposes on very specific data sets, with high accuracy. This is the much-needed growth of vertical AI, and it is going to be super important in making compliance easier and more transparent, and in giving the QA function, advisers, paraplanners and other functions the tools and support they require.


About the Author


Joseph Twigg is CEO of Aveni. They describe themselves as a passionate team of computer scientists, engineers, designers and creatives who want to transform the Financial Services industry by fusing AI with human interaction. With this approach, they can enhance risk monitoring, transform understanding of customers and agents, better manage staff performance, automate processes, drive down the cost to serve, and more.
