By Elina Mattila
Canada’s financial services industry has long been a leader in applying artificial intelligence (AI) technology. The Royal Bank of Canada, for example, is investing tens of millions of dollars over the coming years to investigate how AI can be successfully transferred to the banking industry.¹
Despite the considerable advances already made, significant challenges still face the delivery of AI-driven financial products and services.
This is why Mobey Forum brought representatives from the international banking community together in Toronto recently to discuss the operational, technical and ethical challenges that banks and other financial institutions should consider when building AI products and services.
Only as good as the data you use
When it comes to AI, better data means better services. Obtaining, organizing, validating and protecting the massive datasets required to underpin AI products and services is hugely complex, however.
Whether you are a start-up or a global financial institution, gaining access to datasets can be challenging. For start-ups, the sheer volume of data required can be prohibitive; for banks, securing the relevant permissions can be arduous.
Once the information is obtained, the next step is data labelling and categorization: a very labour-intensive procedure. Ironically, AI has not yet found a way to streamline these mundane operational processes and relieve the need for extensive human input.
Perhaps most importantly, banks must also be able to validate the data they are using. With AI increasingly used for decisioning and commercial modelling, the impact of using incorrect or manipulated data could be considerable. A recent poll from Accenture suggests, however, that only 24 per cent of banks validate the data they are using.² Fortunately, a consensus is growing that much more work needs to be done in this area.
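As a hedged illustration of what basic validation can look like in practice, the sketch below screens incoming loan records against simple schema and range rules before they enter a training set. The field names and thresholds are hypothetical, chosen only for the example.

```python
# Minimal data-validation sketch: hypothetical loan records are checked
# against simple schema and range rules before entering a training set.

REQUIRED_FIELDS = {"applicant_id", "income", "loan_amount", "credit_score"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if not 0 < record["income"] < 10_000_000:
        errors.append("income out of plausible range")
    if not 300 <= record["credit_score"] <= 850:
        errors.append("credit score out of range")
    if record["loan_amount"] <= 0:
        errors.append("loan amount must be positive")
    return errors

records = [
    {"applicant_id": "A1", "income": 62_000, "loan_amount": 250_000, "credit_score": 710},
    {"applicant_id": "A2", "income": -5, "loan_amount": 100_000, "credit_score": 640},
]

clean = [r for r in records if not validate_record(r)]
print(f"{len(clean)} of {len(records)} records passed validation")
```

Checks like these catch only the crudest errors, but they illustrate the principle: data should be screened against explicit, auditable rules before it ever influences a model.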
Beyond technical and operational issues, there are privacy and data security considerations. With the General Data Protection Regulation (GDPR) now in force across Europe, it has never been more important to implement best practices for data protection.
Banks and other financial services providers must understand their obligations: both domestic and international. Adopting the “privacy-by-design” approach championed by Ann Cavoukian (Ontario’s former Information and Privacy Commissioner) will ensure that data protection considerations are proactively built into AI systems and programmes, rather than having to be reverse-engineered at a later date.
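As one illustrative reading of privacy-by-design, the sketch below minimizes and pseudonymizes personal data at the point of ingestion, so the downstream AI pipeline never handles raw identifiers. The field names and hashing scheme are assumptions for the example, not a prescribed implementation.

```python
import hashlib

# Privacy-by-design sketch: collect only the fields the model needs,
# and replace the direct identifier with a salted one-way hash so the
# downstream AI pipeline never handles raw personal identifiers.

MODEL_FIELDS = {"income", "loan_amount", "credit_score"}  # data minimization
SALT = b"rotate-me-regularly"  # in practice, held in a secrets manager

def pseudonymize(record: dict) -> dict:
    token = hashlib.sha256(SALT + record["applicant_id"].encode()).hexdigest()[:16]
    minimized = {k: v for k, v in record.items() if k in MODEL_FIELDS}
    return {"applicant_token": token, **minimized}

raw = {"applicant_id": "A1", "name": "Jane Doe", "income": 62_000,
       "loan_amount": 250_000, "credit_score": 710}
print(pseudonymize(raw))  # 'name' is dropped; the identifier becomes a token
```

The point is architectural rather than cryptographic: the protection is applied before the data reaches the AI system, not retrofitted afterwards.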
From “chat” to conversation
The data itself is just the starting point. Consider the increasingly popular chatbot: customers’ questions rarely align neatly with the pre-developed FAQs these applications are built on. Instead, they are often far more specific, contextual and complicated. This creates the potential for incalculable variability.
Banks cannot build products and services with an infinite number of data points, pre-programmed scenarios and responses. AI programmes must therefore be able to “learn” from previous experiences and interactions.
Early AI deployments, however, could not effectively carry these learned experiences from one set of circumstances to another, meaning that even the smallest circumstantial variance derailed the interaction. For this reason alone, a high proportion of queries still needs to be passed on to human support. One bank, for example, has revealed that 55 per cent of enquiries submitted can be answered by its chatbot, with the rest requiring human interaction.
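The toy sketch below illustrates the problem in miniature: an exact-match FAQ lookup breaks on even a small rewording, whereas a simple similarity score with a confidence threshold can absorb close variants and route everything else to human support. The questions, answers, threshold and scoring method are all invented for illustration; production systems use far more sophisticated language models.

```python
import difflib

# Toy chatbot routing: match a customer question against known FAQs by
# string similarity; below a confidence threshold, escalate to a human.

FAQS = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what is my daily transfer limit": "Your daily transfer limit is shown under Settings.",
}

def answer(question: str, threshold: float = 0.8) -> str:
    q = question.lower().strip("?! ")
    best = max(FAQS, key=lambda faq: difflib.SequenceMatcher(None, q, faq).ratio())
    score = difflib.SequenceMatcher(None, q, best).ratio()
    if score >= threshold:
        return FAQS[best]
    return "Let me connect you with a human agent."  # low confidence: escalate

print(answer("How do I reset my password?"))         # close variant: answered
print(answer("I'm locked out after my trip abroad"))  # contextual: escalated
```

The escalation path matters as much as the matching: a bot that knows when it does not know is what keeps that remaining 45 per cent of enquiries from being answered badly.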
We are now starting to see more advanced machine learning solutions enter the market that aim to deliver truly “intelligent conversations”. Voice AI in particular is an area that is developing rapidly due to the ability of solutions to effectively transfer experiences to new contexts.
Shining light on black boxes
As AI products and services become more advanced and sophisticated, it becomes harder to understand exactly how decisions are made. We understand the data that goes in and the decision that comes out. What happens in between, however, is a mystery. This is known as “black-box AI” and, for various reasons, banks should look to avoid it.
If you are turned down when applying for a mortgage, the consequences can be life-changing. You would expect to know, in “human terms”, why the decision was made. Indeed, consumers in Europe have a legal right under the GDPR to an explanation of decisions made by algorithms. If the bank itself does not understand why, providing that explanation to the consumer becomes impossible.
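To make the idea of an explanation “in human terms” concrete, the sketch below shows one common approach: a transparent scoring model whose per-feature contributions can be read back to an applicant as reason codes. The features, weights and threshold are invented purely for illustration and bear no relation to any real underwriting model.

```python
# Transparent-scoring sketch: a linear model whose per-feature
# contributions double as human-readable reasons for a decision.
# Features, weights and the threshold are invented for illustration.

WEIGHTS = {"credit_score": 0.01, "income_to_loan": 2.0, "missed_payments": -1.5}
THRESHOLD = 8.0

def decide(applicant: dict) -> tuple[str, list[str]]:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # The lowest-scoring contributions become the stated reasons.
    reasons = [f for f, c in sorted(contributions.items(), key=lambda kv: kv[1])[:2]]
    return decision, reasons

applicant = {"credit_score": 580, "income_to_loan": 0.9, "missed_payments": 3}
decision, reasons = decide(applicant)
print(decision, "- main factors:", reasons)
```

With a model like this, the bank can tell the applicant not just “declined” but which factors weighed most heavily against them, which is precisely what a black box cannot do.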
The black-box AI issue is compounded by the potential for biases to be inadvertently built into algorithms through their underlying data. If a bank were to start declining mortgage applications from a certain demographic because of inherent bias in the data, for example, it would unwittingly be engaging in discriminatory practices.
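One simple, widely used way to surface such bias is to compare outcomes across groups in historical decisions, for instance with the “four-fifths” disparate-impact ratio sketched below. The decision data here is invented for illustration.

```python
from collections import defaultdict

# Disparate-impact sketch: compare approval rates across demographic
# groups in past decisions. Outcomes below are invented for illustration.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio: {ratio:.2f}")  # below ~0.8 warrants investigation
```

A check like this does not prove or disprove discrimination, but it gives banks an early, auditable signal that a model’s behaviour deserves closer scrutiny.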
As AI becomes increasingly integral to decision-making processes, banks should commit to transparency, not only to ensure compliance and avoid diverting resources into costly litigation, but also to maintain all-important customer relationships and brand reputations.
Turning the corner
Considering the scale of the challenges still facing the implementation of AI in financial products and services, it is perhaps unsurprising that we are still in the early phase of deployment.
Yet it is apparent that AI will be the fundamental technology transforming the delivery of financial services. Collaboration, shared experience and joint expertise are critical if banks are to successfully harness the huge potential of AI to usher in a new era of advanced, intelligent products and services.
Elina Mattila is executive director at Mobey Forum. Join the discussion @MobeyForum and on LinkedIn.
¹ Solarina Ho, “Canada’s Royal Bank boosts focus on AI with new research lab”, Reuters, January 18, 2017.
² “‘Fake data’ will make banks vulnerable—Accenture”, Finextra, April 20, 2018.