Banks need help with security, and AI might not be the answer yet
Banks and tech companies need to overcome a number of obstacles for artificial intelligence to succeed in tackling money laundering
London — When HSBC Holdings thwarted a $500m central-bank heist, sophisticated computer software didn’t raise the alarm.
The funds flowed undetected from Angola’s reserves to a dormant company’s account in London. It was a teller at a suburban bank branch who became suspicious, declined a request to transfer $2m, and triggered a review that uncovered the scam, according to one account of the episode.
That was two years ago, and the finance industry’s battle to stop the illicit transfer of as much as $2-trillion a year around the globe has not become any easier. At least a half-dozen lenders in Europe have found themselves at the centre of fresh allegations of dirty money schemes in the past year.
The wave of scandals — at Denmark’s Danske Bank, Deutsche Bank and others — is undermining confidence in the industry well beyond the individual institutions involved.
Financial services executives have had little choice but to step up their compliance efforts; more than one in 10 now spend more than 10% of their annual budgets on compliance, according to financial adviser Duff & Phelps.
Banks are eager to find ways to bring that spending down — management, employees and shareholders never want to spend on what are effectively internal cops. Today there’s a sense that growth in that spending may be peaking.
About two-thirds of institutions considered systemically important on a global level, a leading indicator for the industry, expect the size of their compliance teams to remain unchanged or shrink, according to a Thomson Reuters regulatory intelligence report. The largest companies want to adapt their teams to grow or scale back as necessary, the report said.
That’s led to a buzz that banks are deploying artificial intelligence (AI) to replace surveillance staff. HSBC started using AI in 2018 to screen transactions, and the two biggest Nordic banks have said they are replacing compliance staff with algorithms.
Online banking start-ups such as Revolut, which rely on computerised efficiency to compete with established lenders, are finding compliance a challenge they need to address.
So far, machines are confined to simple know-your-customer (KYC) apps and are far from ready to replace humans, says Tom Kirchmaier, a visiting fellow at the London School of Economics’ Centre for Economic Performance. He is not optimistic that a major advance is afoot, either. “There’s a lot of talk but no action.”
Take ING, which, in 2018, paid $869m to settle an investigation by a Dutch prosecutor into alleged money laundering and other corrupt practices. Even though the bank uses machine learning to filter out false alerts on potential bad actors, the lender has had to ramp up the number of individuals handling KYC procedures. It has tripled compliance personnel in the Netherlands over eight years; staff dedicated to KYC account for 5% of total employees.
For AI to succeed in tackling money laundering, banks and tech companies must overcome several obstacles. For starters, they need better customer data, which is often neither current nor consistent, especially when a bank spans multiple jurisdictions. Enhancing the quality and frequency of data gathering is a crucial first step.
Banks are also constrained in their ability to detect bad behaviour, with or without computers, because competitors and national law enforcement agencies will not share data. Across Europe, for example, regulation and enforcement are split along national borders. Lenders would benefit from a common European anti-money laundering regulator, data sharing among banks, and a more open dialogue with bank supervisors, Citigroup analysts wrote in a note to clients in June.
When banks do share information, it’s often unhelpful. They tend to over-report suspicious activity to the relevant agencies to shed responsibility, but enforcement authorities typically do not provide their findings to the financial companies. What’s more, banks, wanting to shield bigger clients from unnecessary scrutiny, often under-report activity they should be flagging, according to Kirchmaier.
That leads to potentially suspicious transactions being classified as normal. The algorithms learn to replicate those types of decisions.
In short, the historical data set available to train the machines is misleading, complicating their ability to learn detection.
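The mechanism can be sketched with a toy model. Everything here is invented for illustration — real systems use far richer features than a single amount cutoff — but it shows how under-reported labels pull a learned detector in the wrong direction:

```python
# Hypothetical sketch: label noise in historical data skews a learned detector.
# The single-feature "classifier" and all figures are made up for illustration.

def learn_threshold(amounts, labels):
    """Pick the amount cutoff that minimises error on the training labels."""
    best_cut, best_err = 0, len(amounts)
    for cut in sorted(set(amounts)):
        # Predict "suspicious" for every transfer at or above the cutoff.
        err = sum((amt >= cut) != lab for amt, lab in zip(amounts, labels))
        if err < best_err:
            best_cut, best_err = cut, err
    return best_cut

# Ground truth: transfers of 50 (thousand) or more are suspicious.
amounts = [1, 5, 10, 50, 60, 70, 80, 90]
true_labels = [amt >= 50 for amt in amounts]

# Historical labels: three genuinely suspicious transfers by big clients
# were under-reported and recorded as normal.
reported = [False, False, False, False, False, False, True, True]

clean_cut = learn_threshold(amounts, true_labels)  # recovers the true cutoff, 50
noisy_cut = learn_threshold(amounts, reported)     # learns a cutoff of 80
print(clean_cut, noisy_cut)
```

Trained on the under-reported labels, the model raises its cutoff from 50 to 80, so the mid-sized suspicious transfers are waved through — the algorithm has faithfully learned the bank’s own blind spot.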
Criminals, by contrast, are constantly adapting their ways, finding new routes for their cash when existing ones are blocked. Catching tomorrow’s money launderers requires anticipating where they’ll move next. Will they trade gold or crypto-assets? When parameters change even slightly, AI struggles to stay ahead of the criminals.
Trust in financial services after the 2008 crisis is taking a very long time to rebuild. Banks are wary that they risk teaching machines to stereotype customers based on where they come from or where they do business. “Ethical concerns associated with AI are rightfully restraining banks’ full embrace of machine learning,” says Alexon Bell, chief product officer at Quantexa, a London-based data analytics company that counts HSBC among its customers.
Regulators, frustrated with the slow speed of change, have encouraged banks to deploy more technology. In December the US Treasury Department’s Financial Crimes Enforcement Network, jointly with the US Federal Reserve and other US agencies, called on banks to try new approaches to meet anti-money laundering requirements, including AI, and offered leniency if the tools uncover deficiencies in existing systems.
One thing seems clear: compliance spending at banks may be shifting away from employing humans to adopting new software. But for now, those living and breathing internal cops are here to stay.