AI projects, particularly in large institutions, are riddled with challenges from the outset.
As we’ve mentioned before, one executive at a major UK bank suggested that 90% of their AI projects fail to make it through to production. And that is in a bank with a fairly sophisticated data science capability and approach to digital transformation.
We appreciate that some experiments are never intended to be fully operationalised, but even so a staggering proportion of these projects hit a stumbling block at one point or another along the journey.
But why?
What are the unique features of AI, and of its Machine Learning and Deep Learning subsets, that make it so difficult to design, build, test, and operationalise effective solutions?
Access to data
Data is to AI as oxygen is to humans.
Without plentiful, high-quality data, it will be near impossible to train an accurate model that provides the results the business needs.
With one client, we were analysing how AI could better forecast the physical cash requirements of bank branches and ATMs within a region.
Cash usage fluctuates dramatically based on a number of factors: time of year; regional preferences; weather conditions; and events, among others.
So, to create a model that accurately predicts how much cash will be needed at any given location, at any given time, you need a wide range of data inputs.
However, in this example, because of the way the data was captured, archived and backed up, we only had access to three months of internal data. That wasn’t enough even to assess the seasonal effects on usage, let alone to combine it with a number of other external data points.
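To make the data requirement concrete, here is a minimal, purely illustrative sketch of the kind of model this involves. The file name, column names and features are hypothetical, and the point is simply that a seasonal signal such as the month of the year is meaningless when you only hold three months of history.

```python
# Purely illustrative sketch: forecasting daily ATM cash demand from a handful
# of features. File, columns and features are hypothetical; a real model needs
# well over a year of history before the seasonal signal means anything.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("atm_withdrawals.csv", parse_dates=["date"])
df["day_of_week"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month                                # seasonal signal
df["is_payday"] = df["date"].dt.day.isin([25, 26, 27]).astype(int)

# External inputs (weather, local events) would be joined in here in practice.
features = ["day_of_week", "month", "is_payday"]

model = GradientBoostingRegressor()
model.fit(df[features], df["cash_withdrawn"])

# Predict tomorrow's demand for the same location given tomorrow's feature values.
tomorrow = pd.DataFrame([{"day_of_week": 4, "month": 12, "is_payday": 0}])
print(model.predict(tomorrow))
```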
Unfortunately, this isn’t an isolated problem. In many instances, firms have not placed enough emphasis on capturing and maintaining high-quality data, or have not been doing so for long enough.
In cases where the data is of sufficient quantity, more often than not it won’t be of sufficient quality, owing to manual entry points across the process, paper-based inputs or simply missing data.
This is why we see so many projects kicked off, only to be paused or canned once thorough data analysis has taken place.
Choosing the right use case
In theory, AI and machine learning can be applied to thousands upon thousands of different use cases across financial services.
However, in practice there are real limitations on how far and wide these tools can be applied, particularly given the state of the technology today.
Take KYC, as an example.
Machine Learning is now very effective at identifying and extracting data from identity documentation, and performing real-time facial recognition against a live video or image. This ability to process highly unstructured data has developed dramatically in recent years.
However, humans still need to be part of the process for those few cases where identification has failed and a customer wants to query what has happened, or to make a complaint via phone or email. An AI solution can recognise voice or text data, and it can process key words to route customer service traffic, but it is still incredibly difficult for it to construct a coherent and accurate response directly to the customer.
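As a rough, hedged illustration of that routing step, the sketch below assigns a short customer message to a handling queue based on labelled examples. The queue names and training phrases are invented, and note that it only routes traffic; composing a coherent, accurate reply to the customer is the genuinely hard part.

```python
# Illustrative only: route customer messages to a queue based on their text.
# Training phrases and queue names are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_messages = [
    "my passport photo was rejected",
    "the app says my ID check failed",
    "I want to complain about the verification process",
    "how do I reset my online banking password",
]
queues = ["kyc_failure", "kyc_failure", "complaints", "general_support"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(training_messages, queues)

print(router.predict(["my identity check keeps failing"]))  # e.g. ['kyc_failure']
```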
That’s not to say that AI can’t be applied across the business today, because it can; rather, it’s to say that a problem statement still needs to be analysed in detail to determine if the use case is suitable for an AI solution, or if another technology is better suited.
Expectations and measurement
Digital transformation is often a long game, and for the most part AI should be no different.
However, because it is constantly lauded as a revolutionary game-changer, we’ve seen many examples of leadership enthusiasm being quickly extinguished when benefits from the technology aren’t reaped immediately.
There are some quick wins. Robotic Process Automation solutions (which many will argue don’t really belong in the category of AI at all) are generally well advanced, so they can be used to quickly automate simple processes for immediate cost savings and efficiency gains.
There are also plenty of “ready-made” AI tools and model libraries which can drastically reduce development effort and the time to get to market. Google, Amazon and Microsoft all have accessible speech recognition, image recognition and even data labelling tools that can be easily plugged into an internal solution.
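As a hedged sketch of what “plugging in” one of these ready-made tools can look like, the snippet below sends an image to a managed image-recognition service (AWS Rekognition via boto3) rather than training anything in-house. The file name is hypothetical and configured AWS credentials are assumed.

```python
# Illustrative only: using a managed image-recognition API instead of building
# and training an in-house model. Assumes AWS credentials are already configured;
# the file path is hypothetical.
import boto3

client = boto3.client("rekognition")

with open("scanned_document.jpg", "rb") as image_file:
    response = client.detect_labels(
        Image={"Bytes": image_file.read()},
        MaxLabels=10,
    )

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```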
But where the use case is more unusual, relies on internal data sources, or requires a model that hasn’t yet been built, expectations need to be managed to ensure that project investment continues from the initial experiments and POCs through to full rollout, even when benefits are not immediately realised.
Conclusion
These pitfalls are common across organisations and can be difficult to navigate. Going into AI projects with your eyes wide open will certainly help, but there are processes, behaviours and capabilities that you can develop within the organisation over time to increase the chances of success.
Our AI whitepaper, Navigating AI in Financial Services, brings a number of these to life and offers a simple, repeatable, yet flexible model for executing AI projects. We will discuss some of the things that make a great AI organisation in a future post.