Increasing the effectiveness of AI projects
It is a stark reality that 9 out of 10 AI or Machine Learning projects fail to make it to production.
In some cases, this is understandable. Many AI projects are experimental in nature and therefore will not be suitable for a full production deployment.
However, it’s difficult to believe that this should be the result for 90% of all AI projects that are executed. So why is this the reality that we’re faced with today?
We’ve seen two main reasons across financial services:
- Organisations largely don’t have a consistent framework for executing AI projects
- The frameworks that do exist place too much emphasis on model development and not enough on the initial solution design or on the steps needed to operationalise the solution
Although AI projects are nuanced in their use of data and the need to train a model for a particular use case, a consistent but flexible delivery model should help to address these issues.
This isn’t about creating a rigid, formal project methodology that must be adhered to; it is about creating a framework that can be flexibly applied to many different scenarios – whether you are simply testing a hypothesis with a short, time-boxed experiment, executing a Proof of Concept with a new technology vendor, or implementing an end-to-end AI solution that was developed in-house.
To manage AI projects effectively, Woodhurst works through four key phases.
It’s important to spend time early on evaluating whether this needs to be an AI project at all and, if so, working out whether the organisation has the skills, tools and capabilities to develop, implement and maintain an effective solution.
Generally, this could be a short, sharp period of analysis with a small team of people that can provide input on the business, process, technology and data implications.
If the technology and resource capability allows, this could include a short, timeboxed “experiment” using readily available AI tools and test data to quickly validate some of the assumptions on which the initial hypotheses are built.
There isn’t necessarily a formal progression between phases, but once the team is confident that an AI solution should, in theory, deliver a valuable business benefit, they should begin laying the groundwork for analysis and development to begin in earnest.
This would include bringing together the people who can begin to delve into the current state, start designing the solution (with the end state in mind) and initiate any governance approvals the organisation may require before data can be analysed or new infrastructure created.
In some organisations this phase can drag on, depending on the level of governance required. Where this is the case, it’s important to start iterating on the solution based on the approvals you already have in place, rather than waiting for everything to come through before starting.
Once the team has ready access to the data that they need, the real fun can begin (if you are a data scientist or machine learning engineer).
The majority of the time on the project will likely be spent wrangling data to identify the key features that will influence the model, and then selecting the right model to drive the outcomes the business desires.
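As a rough illustration of that model-selection step (not Woodhurst’s actual tooling – the data set and candidate models below are purely hypothetical), a data scientist might compare candidates with cross-validation before committing to one, here sketched with scikit-learn:

```python
# Minimal model-selection sketch: synthetic data stands in for the
# wrangled business data set, and the candidate list is illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the prepared feature matrix and target
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=42)

# Candidate models the team wants to compare
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Score each candidate with 5-fold cross-validation and keep the best
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(best_name, round(scores[best_name], 3))
```

In practice the comparison would use the organisation’s own data, features and business-relevant metrics rather than plain accuracy, but the shape of the exercise is the same.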
In parallel, the development teams can begin creating the core aspects of the solution – whether that be a front-end user interface, a visualisation dashboard, backend services or automated data pipelines – that allow the solution to exist within and integrate across the organisation.
Bit by bit the model and the solution will come together. First as a standalone experiment, likely based on a small data set and with limited integrations. Next, a short, sharp POC might be executed to trial an MVP solution with some real users. A more established pilot, run in parallel with an existing process, can test how well the solution will fit within the organisation, and finally the core solution can be fully implemented. (We talk a bit more about this iterative process here).
And while the product development teams are busy proving and readying the solution for production, a keen eye needs to be cast towards the future state: how will the solution actually exist within the organisation?
Introducing new technology within a bank is never easy, and the effort involved in the full implementation of any technological change should never be underestimated.
The business must be prepared for a commitment of time, money and people to introduce the solution, embed it within the organisation and put in place an operational structure that supports it over time.
This is where most projects fall down. They fail to consider how the model and the solution itself will be supported over time and it is why the concept of MLOps is quickly gaining traction.
The platform and infrastructure engineers that form an MLOps function will know how best to optimise the machine learning workloads to keep costs down, reduce processing/training times and keep the solutions live in production. And importantly, they can monitor the effectiveness of a model over time to ensure that neither it nor its core data sources are drifting.
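One common way to monitor that kind of drift is the Population Stability Index (PSI), which compares the distribution of a feature or score in production against the distribution seen at training time. The sketch below uses synthetic data and the widely quoted rule-of-thumb threshold of 0.2 – both assumptions for illustration, not a prescription:

```python
# Minimal drift-monitoring sketch using the Population Stability Index.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two distributions; values above ~0.2 are commonly
    treated as a rule-of-thumb sign of significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)     # distribution at training time
stable = rng.normal(0, 1, 10_000)       # production data, no drift
drifted = rng.normal(0.5, 1.2, 10_000)  # production data after drift

print(round(population_stability_index(baseline, stable), 4))   # near zero
print(round(population_stability_index(baseline, drifted), 4))  # well above 0.2
```

An MLOps function would typically run a check like this on a schedule against live scoring data and alert when the index crosses an agreed threshold.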
Increasing the success of AI projects
This framework is a long way from rocket science (even if the technology it supports is not), but its application will dramatically increase the chance of success of any AI project executed within financial services.
It can be used as an informal guide – a reminder of the things that need to be considered when working with AI, or an organisation might build out specific steps within each phase to encourage a more consistent, standardised approach.
Either way, a more considered, holistic approach towards AI and Machine Learning – one that appreciates the importance of up-front preparation – will certainly lead to smoother and more frequently successful implementations.