Implementing Innovation

by Josh Rix

Introducing innovative technologies in banks is hard. We’ve seen it first-hand a number of times, and there is no end to the reported failures and abandoned projects across the industry.

Partly this is due to the incongruence between existing and new technologies: legacy systems don’t tend to take to innovation.

But it is also due to a pervading organisational mentality towards change and, more specifically, the expectation of short delivery cycles with immediate returns.

In some cases this is possible – particularly with a white labelled, tried and tested solution – but more often than not the business needs to be prepared for a long-term, multi-phased project with a business case justifying the immediate investment.

The most successful cases we’ve seen have set expectations at the outset by clearly defining the phases of innovative change, pegging specific outputs to each phase and introducing regular goal-oriented checkpoints to control progression through the project. Organisations can then manage innovation within a consistent framework that allows for creativity and agility, but also crucially the rigour required to take a service, product or process live within a bank.

Using terms commonly employed today but often misunderstood or inconsistently applied, we would outline these phases as an Experiment, Proof of Value, Pilot and Implementation.


Experiment

An experiment is about testing a hypothesis – ideally quickly and at low cost.

Time can valuably be spent analysing the business problem, refining the hypothesis and defining the scope of the experiment before the testing takes place. This isn’t about slowing down progress; this is about ensuring that the outputs of the experiment clearly meet the aims, which isn’t possible without proper forethought. Business stakeholders will need to be engaged, processes should be analysed, and underlying data should be scrutinised to agree the bounds of the phase.

The experiment does not need to test the end-to-end process using real customer data in a production environment; it merely seeks to replicate, as closely as possible, a business scenario using innovative technology in a low-risk way, at pace. This can be enabled by two things: the availability of a cloud-based sandbox development environment; and an ability to source or create high-quality, accurate test data.

With these capabilities available across the organisation, any function, department or team will be able to quickly execute experiments that prove or disprove the viability of certain technologies, build out a better-informed business case and refine the business problem.

Proof of Value

The terminology is important here. Proofs of Concept or POCs are rife across the banking industry but are increasingly coming under scrutiny as high cost exercises with few returns. By flipping the focus, you aren’t just proving that a concept can be successfully applied within the bank, you are proving that the concept will return the value that the business needs to justify the investment.

Where the Experiment was answering the question, “will my process be improved by the introduction of this technology?”, the Proof of Value phase seeks to answer, “to what extent will the business benefit from the improvements introduced by this technology?”.

It is a subtle difference, but one that requires a greater replication of the process being augmented, in terms of data sets being used and use cases being tested.

This can still be performed in a sandbox development environment using test data, only that the test data needs to extend to all (or close to all) of the different scenarios that would need to be catered for in production to truly draw out the value of the solution. It would not be sufficient to test against 80% of all scenarios, if the final 20% are the most difficult and costly today, and least likely to be improved by the innovative technology.
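The 80/20 point above can be made concrete by weighting scenario coverage by what each scenario costs to handle today, rather than counting scenarios equally. A minimal sketch – the scenario names and cost figures are entirely hypothetical:

```python
# Illustrative: weight test-data coverage by today's handling cost, so that
# "80% of scenarios covered" cannot hide the most expensive residual 20%.
scenarios = {
    # scenario name: (covered by test data?, annual handling cost)
    "standard_payment": (True,  10_000),
    "bulk_upload":      (True,   8_000),
    "cross_border":     (False, 45_000),
    "manual_exception": (False, 60_000),
}

def coverage(by_value: bool = False) -> float:
    """Fraction of scenarios covered, by count or by handling cost."""
    total = sum(cost if by_value else 1 for _, cost in scenarios.values())
    covered = sum((cost if by_value else 1)
                  for ok, cost in scenarios.values() if ok)
    return covered / total

print(f"by count: {coverage():.0%}, by value: {coverage(by_value=True):.0%}")
```

With these illustrative numbers, half the scenarios are covered by count but only around 15% of the value is – exactly the gap a Proof of Value needs to expose.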

The outputs from this phase are validation that the technology will bring about sufficient benefits for it to be a viable solution in production, and ideally a solution that has been developed to a point where it is nearly ready for deployment in a pre-production environment, with live users.


Pilot

The pilot tests whether the solution can stand up to the scrutiny of real users, real data sets, and high volumes.

For a process change, this could mean operating the new solution alongside the old for a period of time, managing volumes but ensuring that all valid scenarios are covered.

For a customer-facing product or feature, internal users or a customer “super user” group could be utilised to elicit detailed feedback. For small customer experience changes, A/B testing can drive more detailed insight into the change’s effect on customer behaviour and usage.
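A standard way to read an A/B test is a two-proportion z-test on conversion rates between the control and the new variant. A self-contained sketch, with purely illustrative numbers:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Is variant B's conversion rate significantly different from A's?
    Returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical pilot numbers: 120/2000 conversions on the existing journey
# vs 160/2000 on the new feature
z, p = two_proportion_z_test(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the agreed significance threshold (commonly 0.05) would support rolling the change out more widely; the threshold itself should be one of the success criteria agreed before the pilot starts.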

Whatever the format, the pilot should be executed in a production or production-like environment that has the controls necessary to handle live data and the performance levels required to operate at scale.

It should be underpinned by metrics and success criteria that govern the progress to full implementation. In this real-world scenario, did the assumptions made in the Proof of Value hold true? Is the solution performant, scalable and secure? Will the business be in a position to maintain and run the solution once it is live?
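Those success criteria work best when they are written down as explicit thresholds that gate progression to full implementation. A minimal sketch of such a gate – the metric names and thresholds here are hypothetical examples, not a prescribed set:

```python
# Agreed before the pilot starts: each metric with a direction and threshold.
SUCCESS_CRITERIA = {
    "straight_through_rate": ("min", 0.85),  # share of cases with no manual touch
    "p95_latency_ms":        ("max", 500),   # responsiveness at pilot volumes
    "error_rate":            ("max", 0.01),
}

def pilot_gate(metrics: dict) -> list:
    """Return the criteria the pilot failed; an empty list means proceed."""
    failures = []
    for name, (direction, threshold) in SUCCESS_CRITERIA.items():
        value = metrics[name]
        ok = value >= threshold if direction == "min" else value <= threshold
        if not ok:
            failures.append(f"{name}: {value} vs {direction} {threshold}")
    return failures

print(pilot_gate({"straight_through_rate": 0.91,
                  "p95_latency_ms": 430,
                  "error_rate": 0.02}))
# -> ['error_rate: 0.02 vs max 0.01']
```

Making the gate mechanical keeps the go/no-go decision tied to the criteria agreed at the outset, rather than to enthusiasm accumulated during the pilot.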

It is the realistic nature of the pilot that allows the business to really understand what will be required to make a full roll out viable.


Implementation

Experimentation is for the curious; implementation is for the committed.

The effort involved in the full implementation of any technological change should never be underestimated. The business must be prepared for a commitment of time, money and people to introduce the solution, embed it within the organisation and put in place an operational structure that supports it over time.

Particularly where innovative technologies are concerned, the business must be comfortable with the changes that are being introduced. The communication strategy should be thorough and broad, and any necessary training should be performed as early as possible.

The expected state of play following implementation also needs to be fully considered and funded. If something goes wrong, who is responsible for resolving it? When the product, service or process needs enhancing, who will fund and deliver those enhancements? If a vendor is involved, are they operationally set up to support the customer base of the solution (be it internal staff or external customers)?

Finally, the change should be closely monitored, based on agreed metrics and success criteria, to understand its true impact. Continuous improvement is essential to ensure that the change continues to evolve over time, rather than stagnating and requiring another wholesale transformation in several years’ time.

Flexibility, not rigidity

Frameworks are important for setting rules and guidelines that drive particular behaviours – they create a context in which one can operate with consistency.

But there is flexibility too. Not every phase will be applicable to every project. A more advanced, robust technology may need only a short pilot followed by full implementation. An AI solution being built from the ground up will likely need to progress through all of the phases, and the business should not underestimate the investment required to get it right.

The most important thing is for the project to consider, in detail, what it is trying to achieve and how best to achieve it; to be clear about the expected outcomes; and to fully understand the true level of investment required to meet the project aims.
