AI can bring us closer together and help, not inhibit, humans


Opportunities and Challenges with AI and NLP

AI is already changing our lives. From expert systems which predict the weather and the stock market, to facial recognition and internet search results, its applications are growing more extensive all the time. Some uses of AI are relatively low risk, such as suggesting the next song to play on our Spotify playlist. Others are potentially life-changing, like predicting cancer from a scan or flagging someone as a possible terrorist. Some uses of AI seem low risk but have huge societal consequences, such as curating posts in a Facebook feed. Optimising these models for maximum engagement has unintentionally led to incendiary posts being prioritised and to the massive proliferation of conspiracy theories.

There has been a lot of publicity about the problems associated with trusting AI, and there is an active community of researchers and engineers working towards making AI more beneficial to humans. Briefly, the problems with AI arise when the creators of datasets and systems do not consider the ethical implications of their work, and/or do not mitigate unintended biases in data and models. Arguably we should not be using AI at all for some purposes, e.g. to predict attractiveness from a portrait photo. A more pervasive problem, however, is that models are trained on data and absorb the biases in that data. This can lead to unfair outcomes: for example, studies show that speech recognition systems work far worse on women’s voices.

There is no competition between human and AI intelligence; both are needed

Human+

There is not one single solution to fixing AI, but one of the most important aspects of making AI safe and beneficial to humans is not to treat it as an isolated ‘black box’ expert. Instead, if we put humans in the centre of a system which leverages AI when appropriate and under human supervision, we could harness the best aspects of both human and artificial intelligence.

At Aveni we call this human-centred AI: Human+. We design and investigate new forms of human-AI experiences and interactions that enhance and expand human capabilities for the good of our products, clients, and society at large. Ultimately AI’s long-term success depends upon our acknowledgement that people are critical in its design, operation, and use. We take an interdisciplinary approach that involves specialists in Natural Language Processing, human-computer interaction, computer-supported cooperative work, data visualisation, and design in the context of AI.

Adhering to the core value that Human+ is better than either human or AI in isolation, we develop novel user experiences and visualisations that foster human-AI collaboration. This helps fulfil artificial intelligence’s destiny: to be a natural extension of human intelligence, helping humans and organisations make wiser decisions. Human+ is a partnership in which people take on specification, goal setting, high-level creativity, curation, and oversight. In this partnership, the AI augments human abilities by absorbing large amounts of low-level detail and rapidly synthesising across many features and data points.

Our models are explainable to human operators, and we incorporate human feedback in the continual development of our models.

NLP can really put the customer first

Currently, many financial services firms use large human teams listening to calls and writing a combination of objective and subjective assessments to track quality. This approach requires serious consideration in relation to Consumer Duty and will increase budgets significantly, but it also creates a very interesting application for NLP-based technologies. Assessing vulnerability and risk, and demonstrating how they are being mitigated, is no longer simply good practice; the technology can genuinely provide the key to complying with regulatory requirements.

Responsible, human-compatible AI needs to explain how its models come to their decisions and inform users of their strengths and weaknesses. These models are combined with human-computer interfaces capable of translating model outputs into understandable and useful explanations for the end user.

NLP and machine learning capabilities allow for automatic monitoring and analysis of customer interactions, such as speech from phone calls, video conferencing and in-person meetings, as well as other digital interactions. The system converts speech to text and derives context and understanding from the conversation, enabling organisations to automate specific processes.
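As a rough illustration of what such a pipeline involves (a minimal sketch, not Aveni's implementation), the Python below transcribes a call recording with the open-source Whisper speech recognition model and scans the transcript for cue phrases that might warrant human review; the model choice and the keyword list are assumptions made purely for illustration.

```python
# Minimal sketch of a speech-to-text + NLP monitoring pipeline.
# Illustrative only: the Whisper model and the naive keyword-based
# "vulnerability" check are assumptions, not a description of Aveni's system.
import whisper  # open-source speech recognition (pip install openai-whisper)

# Phrases that might prompt a human review; a real system would use a
# trained classifier rather than a hand-written list.
VULNERABILITY_CUES = ["recently bereaved", "lost my job", "struggling to pay"]

def analyse_call(audio_path: str) -> list[dict]:
    """Transcribe a call and flag segments containing cue phrases."""
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)

    flags = []
    for segment in result["segments"]:
        text = segment["text"].lower()
        if any(cue in text for cue in VULNERABILITY_CUES):
            flags.append({
                "start": segment["start"],  # seconds into the call
                "end": segment["end"],
                "text": segment["text"],
            })
    return flags

if __name__ == "__main__":
    for flag in analyse_call("customer_call.wav"):
        print(f"{flag['start']:.0f}s-{flag['end']:.0f}s: {flag['text']}")
```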

Incorporating explainable AI into the design and implementation of all models links each output directly to the evidence used for decision-making. This allows humans to navigate quickly to the place in the call where the model triggered.
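A minimal sketch of what evidence-linked output could look like follows; the field names and the review link format are illustrative assumptions, but the idea is that every automated finding points back to the exact transcript span and timestamp that triggered it.

```python
from dataclasses import dataclass

@dataclass
class EvidenceLinkedFinding:
    """A model decision paired with the evidence that triggered it.

    Field names are hypothetical; the point is that every automated
    finding points back to a specific place in the source call.
    """
    label: str            # e.g. "possible vulnerability"
    confidence: float     # model score in [0, 1]
    quote: str            # the transcript span the model reacted to
    start_seconds: float  # where in the recording the span begins
    end_seconds: float

    def review_link(self, call_id: str) -> str:
        # Hypothetical deep link into a call-review interface.
        return f"/calls/{call_id}/review?t={self.start_seconds:.0f}"

finding = EvidenceLinkedFinding(
    label="possible vulnerability",
    confidence=0.87,
    quote="I've been struggling to pay since I lost my job",
    start_seconds=312.4,
    end_seconds=318.9,
)
print(finding.review_link("call-0042"))
```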

Aside from clear efficiency and productivity benefits, this lets organisations put data-driven technologies, and the voice of the customer, at the heart of their operations. Being able to extract the right information from every interaction, through human analysis and NLP-based results, can bring the improvements needed to meet Consumer Duty requirements. Companies can use that insight to drive improvements in multiple areas, such as customer experience, products and services, training and coaching, sales, and quality assurance, as well as to transform their risk assurance at a time when the FCA is really tightening its regulatory supervision of the industry.

Human-in-the-Loop

Human-in-the-loop is an approach to AI that brings together machine and human intelligence to create machine learning (ML) models. Humans are involved in setting up the systems, tuning and testing the model so that its decision-making improves, and then actioning the decisions it suggests. The tuning and testing stage is what makes AI systems smarter, more robust and more accurate through use.

With human-in-the-loop machine learning, businesses can enhance and expand their capabilities with trustworthy AI systems whilst humans set and control the level of automation. Simpler, less critical tasks can be fully automated, and more complex decisions can operate under close human supervision.

One of the key problems is that machine learning can take time to achieve a certain level of accuracy. It needs to process lots of training data to learn over time how to make decisions, potentially delaying businesses that are adopting it for the first time.

Human-in-the-loop machine learning gives AI software a way to shortcut the machine learning process. With human supervision, the ML can learn from human intelligence and deliver more accurate results despite a lack of data. Human-in-the-loop ML therefore helps an AI system learn and improve its results faster, and allows any biases or blind spots to be detected and remedied quickly.
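One common way to realise this in practice is active learning, in which the model's least confident predictions are escalated to a human reviewer and the corrected labels are folded back into training. The sketch below is a generic illustration of that loop, assuming a simple scikit-learn classifier and an arbitrary confidence threshold; it is not a description of any specific product's pipeline.

```python
# Generic human-in-the-loop (active learning) sketch; the classifier,
# threshold and review step are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.8  # below this, the decision is escalated to a human

def human_review(example) -> int:
    """Stand-in for a real review UI where a person supplies the label."""
    return int(input(f"Please label this example {example}: "))

def hitl_step(model, X_labelled, y_labelled, X_incoming):
    """One round: auto-handle confident cases, escalate the rest, retrain."""
    model.fit(X_labelled, y_labelled)
    confidences = model.predict_proba(X_incoming).max(axis=1)

    reviewed_X, reviewed_y = [], []
    for example, confidence in zip(X_incoming, confidences):
        if confidence < CONFIDENCE_THRESHOLD:
            reviewed_X.append(example)
            reviewed_y.append(human_review(example))  # human decides
        # confident predictions are actioned automatically

    if reviewed_X:  # fold human corrections back into the training set
        X_labelled = np.vstack([X_labelled, reviewed_X])
        y_labelled = np.concatenate([y_labelled, reviewed_y])
        model.fit(X_labelled, y_labelled)
    return model, X_labelled, y_labelled

# Example usage with toy data:
# model = LogisticRegression()
# X0, y0 = np.random.rand(20, 4), np.random.randint(0, 2, 20)
# model, X0, y0 = hitl_step(model, X0, y0, np.random.rand(5, 4))
```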

What the future holds

The potential for AI and NLP to genuinely benefit people is significant, giving customers access to more affordable and more reliable support and advice. These sophisticated tools come with some drawbacks, but these can be mitigated by taking a Human+ approach to system design, which includes making automation explainable and incorporating user feedback.

As these transformative technologies become increasingly adopted across the financial services industry, affecting a myriad of critical functions, we need a clearer understanding of the challenges and benefits that AI brings. A human-centric adoption of AI mitigates its worst drawbacks and makes it more likely to have a beneficial impact. There is no competition between human and AI intelligence; both are needed. In fact, using AI to support humans to achieve higher levels of creativity, intuition, and insight is very exciting.


About Author


Dr Lexi Birch is co-founder of Scottish Regtech business Aveni – the AI-powered Natural Language Processing platform making big waves in the finance and regulation markets. Lexi is an internationally recognised expert in Natural Language Processing, and a Senior Research Fellow at the University of Edinburgh. She is working with the team at Aveni to drive the necessary revolution in risk assessment and vulnerability recognition to address the data-first demands of new compliance – particularly in relation to the new Consumer Duty requirements from the FCA. Innovation in thinking, technological advances and regulatory compliance requirements are changing the way the financial services sector operates, and the reality of AI and NLP advances is being recognised and must be embraced.
