In February Woodhurst delved into the topic of Deep Learning, a nascent technology that is pushing the boundary of what Artificial Intelligence can do.
Hailed by some as the biggest software breakthrough of our time, it may even remove humans from parts of the process of writing code. Deep Learning allowed a computer to finally beat a world champion at the game of Go, a feat previously thought to be decades away. It has enabled breakthroughs in drug discovery and self-driving cars.
With AI now capable of emulating the behaviour of humans, we wanted to discuss what Deep Learning means for Financial Services.
What is Deep Learning?
Think of a big circle and call that Artificial Intelligence. Picture a smaller circle within it called Machine Learning. Then picture an even smaller circle within that one and you have Deep Learning. Or just look at the picture below…
Another way to think of it is that Machine Learning is a technique of AI, and Deep Learning is a technique of Machine Learning. Deep Learning aims to mimic the behaviour of the human brain by processing many sources of information to make rapid, complex decisions.
What separates Deep Learning from Machine Learning?
A Machine Learning model is often narrow: it is trained on data sets to perform a single task. Deep Learning uses neural networks, layers of connected processing nodes, that can be fed more data sets to achieve multiple outcomes.
You still need to train a model and point it in a direction, but due to the composition of the neural networks it may deviate from its initial programming in a more dynamic way than traditional ML. In the example of Go, Deep Learning was able to defeat a world-class player because it could go beyond the limits of a single ML model to simultaneously learn a variety of techniques, strategies and potential outcomes.
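The "layers of connected nodes" idea can be sketched in a few lines of code. This is an illustration only: the data is synthetic, the layer sizes are made up, and the weights are random rather than learned, which is where real training would come in.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: pass positives through, zero out negatives.
    return np.maximum(0.0, x)

def layer(inputs, out_size):
    # One fully connected layer. Weights are random here; a real model
    # would learn them from training data.
    weights = rng.normal(size=(inputs.shape[-1], out_size))
    return relu(inputs @ weights)

# Ten synthetic applicants, five input features each.
features = rng.normal(size=(10, 5))

# "Shallow" model: a single weighted sum per applicant.
shallow_score = features @ rng.normal(size=(5, 1))

# "Deep" model: the same inputs flow through several connected layers,
# letting later layers build on patterns found by earlier ones.
hidden = layer(features, 16)
hidden = layer(hidden, 8)
deep_score = hidden @ rng.normal(size=(8, 1))

print(shallow_score.shape, deep_score.shape)  # both (10, 1)
```

Both models map the same inputs to one score per applicant; the difference is that the deep version composes several stages of processing, which is what lets it capture more complex relationships.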
Do we need to maintain Deep Learning models in the same way that Machine Learning models need maintaining?
Machine Learning models sometimes require human stewardship in order to perform well. With Deep Learning offering the potential to bring more data into the decision-making process, can we also expect that they will maintain themselves?
Our view is that if Deep Learning models can evolve on their own there would be significant ethical and compliance challenges. The more complex the decision-making process of a model, the harder it may be to explain the outcomes in a way that users (and regulators!) can understand. This is clearly a challenge across the AI landscape, but it may become more acute the deeper and more sophisticated the techniques of AI become.
Will it supercharge areas where AI is already being used?
While chatbots and loans already get a lot of airtime as common AI use cases, often at the expense of some of the less well explored opportunities, we touched on how DL might expand the scope and use of AI in these two areas.
DL can supercharge chatbots by enabling them to understand slang, interpret the subtext of a conversation and communicate with human-level emotion. The customer experience should be much closer to dealing with a human. Businesses should also feel the operational burden lighten, no longer having to map questions to answers or intervene in as many customer conversations.
It can supercharge loan decisioning by taking in many more data points, such as where you grew up, what you studied and your social media activity, to create a more holistic affordability assessment. Perhaps it can go beyond supporting and automating processes and create new financial products based on its interpretation of the data.
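To make the idea concrete, here is a toy affordability score that folds in an alternative data point alongside traditional ones. Every feature name, weight and threshold below is invented for the sketch; a real decisioning model would learn these relationships from data rather than hard-code them.

```python
# Illustration only: feature names, weights and the scoring rule are
# invented, not a real lending model.

def affordability_score(applicant):
    # Traditional inputs a lender already uses.
    score = 0.5 * applicant["income"] - 0.8 * applicant["existing_debt"]
    # Alternative data points of the kind DL could fold in.
    score += 2000 if applicant["completed_degree"] else 0
    score += 10 * applicant["years_at_address"]
    return score

applicant = {
    "income": 32000,
    "existing_debt": 5000,
    "completed_degree": True,
    "years_at_address": 4,
}
print(affordability_score(applicant))  # 14040.0
```

The point of the sketch is the shape of the problem, not the numbers: the more of these inputs a model consumes, the harder it becomes to justify each one's influence on the final decision.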
Unleashing more personal data into the process has the potential to introduce bias and lead to lower levels of financial inclusion. As we’ve seen with successful AI projects to date, a robust technical and cultural framework for implementing DL will be essential in making sure the positive effects of these innovations are felt and the negative ones are mitigated.
How can we overcome the regulatory challenges?
AI projects fall short when explainability isn’t clear. A black box approach, or a loose approach to data inputs, doesn’t sit well with regulators – and for good reason. DL may be able to consume more inputs but the same rules apply if it is going to pass the regulatory tests.
If DL starts to make use of more unorthodox data points, then transparency over those inputs, and explainability over the model’s decision-making process, will be crucial for regulatory approval.
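One widely used family of explainability techniques is permutation importance: shuffle one input at a time and measure how much the model's predictions degrade. The sketch below uses synthetic data and a stand-in "model" whose true weights are known, purely to show the mechanic a regulator-facing explanation might be built on.

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 synthetic applicants, 3 features: income, debt, years at address.
X = rng.normal(size=(200, 3))
true_weights = np.array([3.0, -2.0, 0.1])  # income matters most here
y = X @ true_weights

def model(features):
    # Stand-in for a trained model; here it simply knows the weights.
    return features @ true_weights

def permutation_importance(X, y, feature):
    # Break one feature's link to the outcome and measure the damage.
    shuffled = X.copy()
    shuffled[:, feature] = rng.permutation(shuffled[:, feature])
    return np.mean((model(shuffled) - y) ** 2)  # higher = more important

for name, i in [("income", 0), ("debt", 1), ("years_at_address", 2)]:
    print(name, round(permutation_importance(X, y, i), 3))
```

Shuffling income wrecks the predictions while shuffling years at address barely matters, which is exactly the kind of input-by-input account of a model's behaviour that transparency requirements demand.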
Organisations don’t set out with the intention of using technology as a veil for unethical or unlawful behaviour. But DL, like other AI technologies, can encounter these issues easily if there isn’t a high degree of collaboration with the regulator from ideation through to production.
Can Deep Learning be pointed at a bank’s business plan or risk policies to make decisions?
Where ML is used in the loan decisioning process today, the model is pointed at specific data sets that the bank wants to include in an affordability decision. If DL could be pointed at policies or a business plan to make decisions that are within a policy’s parameters or deliver on the business plan’s targets, then that would be a whole new level of sophistication.
We don’t think DL is anywhere near that level of sophistication yet, but we may be wrong. Assuming it can get there, we can see challenges with ceding that level of control and accountability to a machine that doesn’t understand the “why” behind what it is doing.
Are fintechs best placed to take advantage of Deep Learning?
Fintechs might have more suitable underlying technology and skills across the organisation to implement DL solutions, but will they be able to do it at scale?
While the incumbents can struggle to adopt new technologies at the same pace as start-ups, once they do, they often have the scale to generate massive returns – for customers, shareholders and the organisation.
The incumbents also have the resources to progress DL at a pace that some fintechs may find hard to match. That said, fintechs have shown that on a limited budget they can make big strides with the latest technologies.