Next-generation transaction monitoring tools that use AI and ML technologies are getting faster and more accurate, drastically cutting the number of false positives. Yet fear and suspicion around the “black box” nature of AI are still keeping potential adopters away from these technologies. Black box AI refers to a problem in machine learning whereby it is difficult to explain how conclusions were reached – sometimes even by the algorithm designers themselves. This presents a huge hurdle in a stringent regulatory world where influential bodies such as the Monetary Authority of Singapore (MAS) are calling for fairness, ethics, accountability, and transparency (FEAT) when using AI in the financial sector.
In today’s climate, transparency is critical: regulators must be able to determine whether results produced by AI transaction monitoring software were influenced by bias or prejudice, and whether the correct datasets were used. Fintechs must be able to explain their past screenings transparently and comprehensively to any external auditors, but many of the AI-based screening tools in use today make this impossible.
What the industry really needs (and what very few are able to provide) is a “glass box” model: a highly transparent path from data, through the process, to output, in which it is possible to explain not only why certain results occurred but also how the algorithm arrived at them.
The Tide is Turning
There is no stopping the march of progress. Fears around black box AI notwithstanding, regulators, analysts, and big players in the financial services industry are promoting AI’s potential to upgrade the effectiveness and efficiency of AML transaction monitoring on multiple levels. In fact, the Financial Action Task Force (FATF) strongly recommends the use of AI and machine learning for AML and CFT detection, along with the following:
Move away from a rules-based approach – With their abundance of false positives and excessive reliance on human input to create underlying rules, these tools are too burdensome and expensive to run, slow to adapt to new realities, and often not sensitive enough to pick up sophisticated or innovative crimes.
Adoption of AI transaction monitoring tools by regulators – The high volume of data generated by fintechs and other financial institutions makes the task of the regulatory bodies virtually impossible without newer tools that are more fit for purpose.
Greater use of AI to enhance financial inclusion – AI transaction monitoring is more sensitive and can spot suspicious activities that rules-based systems have historically been unable to intercept. This means that instead of blacklisting entire countries deemed to be too risky, these countries can now be invited back into the world’s financial fold, knowing we have the right tools to provide adequate security. Citizens of these nations will then be able to benefit from conveniences such as online payments and mobile banking, which will help break the cycle of poverty.
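The weaknesses of the rules-based approach described above can be made concrete with a minimal sketch. The threshold, transaction data, and customer names below are hypothetical, chosen only to show how a static rule both over-alerts on legitimate activity and misses deliberate structuring:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    customer: str
    amount: float

# A typical static rule: flag any transfer at or above a fixed threshold.
THRESHOLD = 10_000.0

def rules_based_alerts(txns):
    """Return every transaction the static rule flags."""
    return [t for t in txns if t.amount >= THRESHOLD]

txns = [
    Txn("alice", 12_500.0),  # routine payroll run -> flagged anyway (false positive)
    Txn("bob", 9_900.0),     # structuring: deliberately kept just under the threshold
    Txn("bob", 9_800.0),     # ...split across several transfers
    Txn("bob", 9_700.0),     # -> none of these is ever flagged
]

alerts = rules_based_alerts(txns)
# The rule flags only alice's legitimate transfer and misses
# bob's structured transfers entirely.
```

An anomaly-based model that looks at behavior over time, rather than a single fixed cutoff, is exactly what the recommendations above are pushing toward.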
What is Glass Box AI (And is it Too Good to be True)?
With regulators actively promoting AI transaction monitoring software, these tools are here to stay. However, to encourage more fintechs and banks to jump on board, the fear of the unknown must be addressed. To be confident in the tools used, regulators need to be offered the possibility of looking under the hood and seeing how results and conclusions were reached. The ultimate safe and trustworthy transaction monitoring tool will be like a “glass box” – clearly displaying its workings and offering the following features:
Transparency – Clear display of the reason for each alert in a way that can be understood by anyone.
Traceability – Direct access to the raw data in which the unusual behavior was detected so that further investigation can be carried out if desired.
Visibility – A comprehensive audit trail that documents each step in the process, including all the parameters used and the reasons behind every decision.
In a glass box system, each flagged transaction should include an explanation as to why the transaction was deemed suspicious in the context of both the customer’s personal history and in the wider population. This will enable financial institutions to demonstrate tangibly that they are not discriminating against specific entities or jurisdictions.
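The dual-baseline explanation described above can be sketched in a few lines. This is a simplified, hypothetical illustration (the statistics, cutoff, and data are not from any real product): each alert carries a plain-language reason tied directly to the raw figures it was computed from, comparing the amount to both the customer’s own history and a peer group:

```python
import statistics

def explain_alert(amount, customer_history, peer_amounts, z_cutoff=3.0):
    """Return (is_suspicious, reasons) for a single transaction.

    A glass-box sketch: the decision is traceable because every reason
    quotes the underlying numbers it was derived from.
    """
    reasons = []

    # Baseline 1: the customer's own transaction history.
    mean_c = statistics.mean(customer_history)
    std_c = statistics.stdev(customer_history)
    z_customer = (amount - mean_c) / std_c
    if z_customer > z_cutoff:
        reasons.append(
            f"Amount {amount:.2f} is {z_customer:.1f} standard deviations "
            f"above this customer's own average of {mean_c:.2f}"
        )

    # Baseline 2: the wider peer population.
    mean_p = statistics.mean(peer_amounts)
    std_p = statistics.stdev(peer_amounts)
    z_peer = (amount - mean_p) / std_p
    if z_peer > z_cutoff:
        reasons.append(
            f"Amount {amount:.2f} is {z_peer:.1f} standard deviations "
            f"above the peer-group average of {mean_p:.2f}"
        )

    return (len(reasons) > 0, reasons)

history = [120.0, 95.0, 110.0, 130.0, 105.0]   # customer's usual amounts
peers = [150.0, 200.0, 90.0, 175.0, 160.0, 140.0]
suspicious, reasons = explain_alert(5_000.0, history, peers)
# suspicious is True, with one human-readable reason per baseline exceeded
```

Because the reasons reference both baselines explicitly, an institution can show an auditor that an alert was driven by the entity’s own behavior, not by its jurisdiction or identity.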
While it may sound too good to be true, there are currently solutions on the market that are reaching these formerly unreachable levels of transparency.
The Human Touch – Responsible AI
There is a widespread fear that as transaction monitoring systems get better and better, humans will be pushed out of the picture altogether – but removing human oversight is neither likely nor advisable.
Responsible AI dictates that as transaction monitoring tools become more sophisticated, it will be increasingly important to ensure that human involvement is maintained in three key areas:
Data sources & features design – Selecting the data sources and designing the overall parameters of the screening tool.
Severity and prioritization – Decisions around which transactions are deemed the most severe and which will be prioritized by the system.
Analyst review before SAR submission – All results must be checked by an analyst before a suspicious activity report (SAR) is sent to regulators.
Where It’s All Going
Regulators are leading the charge toward full adoption of AI-based transaction monitoring by pointing out the inadequacies of traditional rules-based tools which make compliance less effective than it should be. While not perfect yet, the high accuracy and low false-positive rate of AI transaction monitoring systems offer great potential to level the playing field by allowing more of the world’s population to benefit from financial services. The true potential of these technologies will only be reached, however, when greater transparency can be introduced into the tools. This will reduce fear and make AI transaction monitoring the preferred system for banks, regulators, fintechs, and other financial service providers.
Glass box AI transaction monitoring is already here – Learn more.