In this era of runaway inflation and looming recession, you’re probably not thinking, “Hey, what a great time to spend money! Let’s deploy AI in finance.”
During a webinar, guest Jeff Bell, President & COO of Eventus — a global provider of trade surveillance, market risk, and transaction monitoring solutions — discussed how world events have financial institutions (FIs) scrambling to improve risk mitigation and compliance efforts. He outlined how the immediate deployment of artificial intelligence (AI) in finance can help institutions:
1. Meet the challenges of a rapidly evolving threat landscape
From warlords seeking to avoid new economic sanctions to transactions conducted in the metaverse, the current socio-technological climate sparks new opportunities for financial malfeasance. Rules-based processes to mitigate risk and comply with regulatory mandates simply can’t keep up. They’re too slow, imprecise, and difficult to maintain.
Malefactors use digital identities and the metaverse to launder money. They exploit cryptocurrencies and digital assets to move funds illicitly. They commit fraud via digital payment systems. They employ online trading platforms to spoof the market.
“Criminals, in general, have adopted technology faster than most financial institutions have adopted technology,” noted Vamsi Koduru, Director of Identity Analytics. “They’re able to circumvent the safeguards we’ve implemented.”
Real-life entities also present problems for financial institutions. Consider Russia’s invasion of Ukraine. As a result of that action, countries around the world have levied thousands of new sanctions against Russia and Russian entities. Financial sanctions, trade embargoes, asset freezes and other injunctions forbid transactions with named individuals and companies, and, often, with their associates.
Many FIs struggle to identify these entities and relationships, determine their risk scores, and better monitor or control their financial activities. Rules-based match/no-match processes too often fail to account for phonetic similarities, transliterations, nicknames, titles, use of initials, different alphabets, and other factors indicating the use of aliases. They are therefore unable to resolve identities, to determine that, for example, “John James Johnson” and “Jack Johnson” and “J.J. Johnson” and “Dzhon Dzhonson” and “Джон Джонсон” are — or at least may be — the same person.
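To see why exact match/no-match rules break down, consider a minimal sketch of alias-aware name comparison. This is an illustration only, not Eventus or Babel Street technology: the nickname map is hypothetical, and production systems use far richer curated lists, phonetic algorithms, and transliteration models.

```python
import unicodedata
from difflib import SequenceMatcher

# Hypothetical nickname/initials map -- real systems use large curated lists.
NICKNAMES = {"jack": "john", "jj": "john james"}

def normalize(name: str) -> str:
    """Lowercase, strip diacritics and punctuation, expand known nicknames."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = name.lower().replace(".", "").replace(",", " ")
    tokens = [NICKNAMES.get(t, t) for t in name.split()]
    return " ".join(tokens)

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity between two normalized names, from 0.0 to 1.0."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# An exact-match rule treats every pair below as different entities;
# normalization plus fuzzy comparison surfaces them as likely aliases.
print(name_similarity("J.J. Johnson", "John James Johnson"))  # expands to identical forms
print(name_similarity("John James Johnson", "Jack Johnson"))  # nickname expansion
print(name_similarity("John Johnson", "Dzhon Dzhonson"))      # Latin transliteration
```

The same normalize-then-compare pattern extends to Cyrillic input once a transliteration step maps “Джон Джонсон” into Latin characters.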
2. Avoid identification lag time
AI in finance is faster than rules-based processes for analyzing texts, discovering digital evidence, assigning risk scores, and completing other tasks vital for mitigating crime and improving compliance. AI solutions further improve speed by dramatically reducing the number of false-positive alerts endemic to rules-based Know Your Customer (KYC) screening. The same processes that may be unable to resolve “John Johnson” and “Джон Джонсон” into the same person too often hit on every “John Johnson” and “J. Johnson” and “Jack Johnson” in an attempt to uncover the entity against whom sanctions actually apply. Investigating these alerts takes time. Only AI systems that look at a variety of identity attributes beyond names can efficiently identify the right John Johnson.
And getting to the right John Johnson, and getting to him quickly, is vitally important.
Bell noted: “We all want the market to be a safe and transparent place for price discovery and listing of companies. But all that pales in comparison to terrorist financing or human trafficking or drug running. That’s where sanctions come in. And the sanctions don’t say, ‘Hey, sometime in the next month, we want you to do this.’ They say, ‘Effective immediately thou shalt …’ If your client just got put on a sanctions list and you don’t know it, you can really, really end up in a bad spot.”
3. Satisfy regulators with explainable AI
Implementation of AI in finance may by now seem like a no-brainer. It helps financial institutions more effectively and quickly spot, contain, and prevent financial crimes while improving compliance efforts. So why haven’t all FIs instituted AI solutions?
One reason is that regulators haven’t always been on board.
You may be proud that your AI solution automatically closes 75 percent of KYC alerts because the system has determined them to be false positives. The industry understands that traditional screening systems produce a huge number of false positives. This isn’t news to anyone. So automatically closing those false alerts is a good thing, right?
Not necessarily. Regulators may look at these automatic closures and wonder, ‘How?’ How does AI know which alerts are false positives? How does the AI system know it’s closing the right alerts?
“This stage of AI adoption is really about building trust. The justification piece of this is very important,” Koduru explained. “We find that regulators now take kindly to automated closure of alerts, with the caveat that the process is explainable to the regulator. That the regulators understand what rules are governing the closures.”
Enter explainable AI. Explainable AI is a set of methods and processes that enables users to better understand what an AI system is doing, and on what data it’s basing its decisions. This kind of transparency helps regulators and other stakeholders trust the choices AI systems make.
As Bell said, “With explainable AI, we can say to a regulator, ‘Here are the factors that matter most to this particular decision. That’s why it’s treated as a higher or lower probability event.’ If you build explainability into your approach to designing AI systems, you’re in a much better position to later explain AI to regulators, governance folks, auditors, and other stakeholders.”
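Bell's “here are the factors that matter most” framing can be made concrete with a toy additive model, where each decision decomposes into per-factor contributions. The factor names and weights here are invented for illustration; they do not describe any real screening product.

```python
# Hypothetical additive alert model: each factor's contribution is
# weight * value, so every decision decomposes into explainable parts.
WEIGHTS = {
    "name_similarity": 2.5,
    "shared_birth_year": 1.8,
    "sanctioned_country": 1.2,
    "common_name_penalty": -2.0,
}

def explain(features: dict[str, float]) -> list[tuple[str, float]]:
    """Return per-factor contributions, largest magnitude first."""
    contributions = [(f, WEIGHTS[f] * v) for f, v in features.items()]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

alert = {
    "name_similarity": 0.8,
    "shared_birth_year": 1.0,
    "sanctioned_country": 1.0,
    "common_name_penalty": 1.0,
}

# The ranked list below is what an analyst could show a regulator:
# which factors drove the alert, and by how much, in the model's own units.
for factor, contribution in explain(alert):
    print(f"{factor}: {contribution:+.2f}")
```

Additive decompositions like this are the simplest case; for more complex models, attribution methods such as SHAP values play the analogous role of assigning each input factor a share of the decision.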
End Note
Find out how to transform your data into actionable insights.
Schedule a Demo

Stay Informed
Sign up to receive the latest intel, news and updates from Babel Street.