Exclusive: ‘Power & Responsibility’ – Phil Thomas, Scotiabank in “The Fintech Magazine”
Scotiabank couldn’t have intervened to help millions of families at risk of financial distress during the pandemic without AI. But it’s super-conscious of ethical hazard, which is why it’s put in place human processes to protect the bank’s integrity, says Phil Thomas, Executive VP of Customer Insights, Data & Analytics at Scotiabank Canada
In November of last year, Scotiabank introduced a new Global AI platform to better meet its customers’ needs in a world where those needs were rapidly changing.
At the time, it said the pandemic had ‘reinforced the importance of delivering customised financial advice that speaks to our customers’ unique business and household situations’.
It undoubtedly gave the bank better insight into the enormous amount of transactional and other data it holds on customers, helping them manage their finances better. And the results have been impressive: it enabled the bank to reach out to two million customers whom AI had identified as the most financially vulnerable to COVID-19, allowing the bank to offer them support and solutions before they suffered any potentially long-lasting, catastrophic impacts.
It was clearly an example of artificial intelligence being used to good effect. But, at the same time, there are a number of growing anxieties surrounding the ethical use of AI, as numerous recent use cases have demonstrated.
From a discriminatory Microsoft chatbot, to a seemingly sexist Goldman Sachs’ Apple Card credit line algorithm, there has been evidence of unintended bias creeping into the AI systems businesses have been using to improve the service they provide to their customers. The growing unease has led to calls for these models to be regulated and monitored, not only to protect individuals and groups, but also to ensure the technology doesn’t end up embarrassing and damaging the organisation behind it.
Phil Thomas, the executive vice president of customer insights, data and analytics at Scotiabank Canada, is the man who has to face those ethical questions every day.
The bank, which operates across Canada and Latin America, had been building a model for its Global AI platform based on customer value when the pandemic hit. With the advent of COVID, it pivoted that model to one based on customer vulnerability.
“We’ve been exploring the use of AI in our retail businesses, both in Canada and internationally, as well as our capital markets business,” says Thomas. “But the use of machine learning and AI has been critical for us through the COVID period.
“We’ve been leveraging the data and the machine learning techniques to identify our most vulnerable customers, and then building proactive outreach programmes for these customers, so that our traditional channels, branches, or call centres, etc, have leads. That way they know the customer and their financial situation, and they’re able to come up with relevant solutions, based on the predictions generated by the AI.
“We used this vulnerability model to reach out to about two million customers who were at risk of being financially impacted by COVID. We were able to link those to a financial solution, whether it was a deferral or re-amortising or refinancing loans, to be able to make it more affordable for our customers to bank with us.”
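The pipeline Thomas describes – score each customer for vulnerability, flag those most at risk, and hand the flagged names to branches and call centres as leads – can be sketched in a few lines. Scotiabank's actual model is not public, so every field name, weight and threshold below is an invented illustration of the shape of such a system, not the bank's method:

```python
# Hypothetical sketch of a vulnerability-scoring outreach pipeline.
# All signals, weights and the 0.5 threshold are illustrative assumptions.

def vulnerability_score(customer):
    """Combine simple risk signals into a 0-1 score (invented weights)."""
    score = 0.0
    if customer["income_drop_pct"] > 0.3:   # sharp fall in incoming deposits
        score += 0.4
    if customer["missed_payments"] > 0:     # recent missed loan payments
        score += 0.3
    if customer["savings_months"] < 1:      # under a month's savings buffer
        score += 0.3
    return score

def build_outreach_leads(customers, threshold=0.5):
    """Return at-risk customers, highest risk first, as leads for channels."""
    flagged = [c for c in customers if vulnerability_score(c) >= threshold]
    return sorted(flagged, key=vulnerability_score, reverse=True)

customers = [
    {"id": 1, "income_drop_pct": 0.5, "missed_payments": 1, "savings_months": 0.5},
    {"id": 2, "income_drop_pct": 0.0, "missed_payments": 0, "savings_months": 6},
    {"id": 3, "income_drop_pct": 0.4, "missed_payments": 0, "savings_months": 2},
]
leads = build_outreach_leads(customers)
```

In a real deployment the score would come from a trained machine learning model rather than hand-set rules, but the routing step – prediction in, prioritised lead list out – is the part Thomas credits with letting staff arrive at the conversation already knowing the customer's situation.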
Net Promoter Scores for the bank – a measure of how likely customers are to recommend it – increased in the retail part of the bank where the AI platform was deployed.
“Ultimately, we’re in the customer experience business, and so I think what I got most excited about was being able to use our machine learning models, and our deep learning and advanced analytics skill sets, to really help our customers during a period of stress, and that certainly showed through our Net Promoter Score numbers,” says Thomas.
Nevertheless, Scotiabank has been cautious about giving AI free rein.
“We’ve been very thoughtful about the integration of man and machine,” says Thomas. “So, as we are building and implementing models, we have a constant monitoring process in place. We’re thoughtful both about the human bias, and the potential for machine bias, so we have a broad cross-section of people from business groups, diversity of backgrounds, committees of individuals, who will get together to review the models, looking at the results that are coming out of these models to make sure we’re eliminating any bias as best we can.”
There’s no denying that AI plays an important role in improving efficiency within the bank, whether working directly with individual customer data or with that of larger entities. It’s being used in Scotiabank’s capital markets business to improve predictions to its traders, for example, as well as figuring out effective ways to serve individual customers.
“There’s masses and masses of information that comes in, and for one human to sit there and try to process this is challenging, so the benefit of advanced analytics or AI in this space is tremendous,” says Thomas.
But, conscious that it can be a double-edged sword, he believes the machine ‘head’ and the human ‘gut’ are of equal importance.
“The challenge is around making sure that you have the human interaction. And so, if you have an experienced trader, for instance, who can save time with the help of a machine learning AI model, it’s kind of the best of both worlds. I think that’s the future we’re headed for.”
The platform is already being ported to different areas of the business in Canada and Latin America, and it’s something Thomas and his team believe will set the bank apart.
“Toronto is becoming an AI hub, globally, and we’ve been able to tap into this amazing talent and leverage it across many markets,” he says. “The Global AI platform is there to support not just our Canadian business, we’ve also rolled it out now in Colombia and Peru and we have plans to move it to the other markets across the year.”
AI systems are clearly key to the management of data in the future, but they are not yet perfect. As Deloitte observed: “While the mysterious nature of AI can seem like ‘magic’ at times, it has the potential to hugely magnify societal issues in financial services.”
If the input data is incomplete, unrepresentative or biased (consciously or not), and if the AI continues to be trained on it, discrimination can be baked into an AI decision-making model – and those decisions are often hard to explain, which also presents a problem for regulators.
The World Economic Forum (WEF) has recently spearheaded the Global AI Action Alliance, in recognition of the fact that while 60 per cent of executives polled by the WEF said they believe AI will have a larger impact on the global economy than the Internet, there seems to be little consensus and no specific standards or frameworks to ensure its impact does not unfairly advantage or disadvantage certain groups or individuals.
Which is why, internally, Scotiabank has gone to lengths to ensure it doesn’t sleepwalk into a bear trap.
In 2019 it announced it was launching a training programme for senior executives covering the principles of AI and ethics in design; decision-making with analytics; the dynamics of enterprise data and AI management; Canadian information and privacy regulations; and the latest research and technology developments in AI.
Daniel Moore, chief risk officer at Scotiabank, said it was committed to ‘being leaders in the development of principles, guidelines and training for the ethical application of this powerful technology’.
Anything less would risk damaging customer trust. It’s why Thomas insists the algorithmically programmed brain is leavened by the common sense and intuition that only a human one displays. And it’s why his committees of AI observers are constantly auditing the output.
“We want to be making sure that we’re leveraging our data to know our customers and understand their behaviours. But we’re doing it with a lens of managing the data ethically, so the customers are comfortable with the bank in every aspect.”