In a recent proposal to reform anti-money laundering (AML) rules under the Bank Secrecy Act (BSA), the Financial Crimes Enforcement Network (FinCEN) is following the regulatory trend of modernized rules, “risk-based programs,” and reduced “compliance burden.”
Within the proposed rule change, FinCEN highlights the potential of evolving technologies to help “improve financial crime compliance” – a sentiment that was shared by the Federal Deposit Insurance Corporation (FDIC) in a recent speech.
With regulators encouraging firms to utilize advancing technologies to help combat misconduct, how can AI-enabled tools be leveraged to elevate risk monitoring capabilities?
FinCEN says generative AI can “effectively combat financial crime”
FinCEN has proposed a variety of reforms to AML compliance practices, such as “reinforcing the belief that financial institutions are best positioned to evaluate their illicit finance risks” and empowering firms “to devote more resources toward higher risks rather than lower risks.”
FinCEN stated that new technologies, such as machine learning or generative AI (GenAI), could be monumental in identifying and addressing misconduct:
“FinCEN encourages financial institutions to evaluate whether new technology or innovative approaches might help to more effectively combat financial crime. Innovative approaches could involve machine learning, GenAI, digital identity, blockchain monitoring and analytics, or application programming interfaces (APIs).”
As cybercriminals increasingly leverage evolving technologies to commit crime, regulators acknowledge that firms must adapt their risk management approaches at the same pace to effectively defend against threats – fighting fire with fire.
AI-enabled risk monitoring empowers firms to perform more accurate risk detection beyond the scope of traditional monitoring systems. In the case of communications monitoring, AI-enabled models can parse the context of a message to flag potential risk (or avoid producing a false positive), whereas lexicon-based systems can only flag risk by matching exact words or phrases.
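The difference can be illustrated with a deliberately simplified toy sketch: a lexicon-based check flags any message containing a watched term, while a context-aware pass (here a crude rule-based stand-in for a trained model, not a real AI system) suppresses hits when nearby wording suggests benign intent. The word lists and the negation-window heuristic are illustrative assumptions, not any regulator's or vendor's actual method.

```python
import re

# Illustrative watchlist of single terms a surveillance lexicon might contain.
LEXICON = {"guarantee", "kickback"}

# Words that, appearing just before a hit, suggest a benign disclaimer
# (e.g. "no guarantee of returns"). Purely for illustration.
DISCLAIMERS = {"no", "not", "cannot", "never"}


def lexicon_flag(message: str) -> bool:
    """Flag on any exact term match, regardless of context."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(LEXICON & words)


def contextual_flag(message: str) -> bool:
    """Stand-in for a context-aware model: suppress a hit when a
    disclaimer word appears within the three preceding words."""
    words = re.findall(r"[a-z']+", message.lower())
    for i, word in enumerate(words):
        if word in LEXICON:
            window = words[max(0, i - 3):i]
            if not DISCLAIMERS.intersection(window):
                return True
    return False
```

On a compliant disclaimer such as “There is no guarantee of returns,” the lexicon check raises a false positive while the contextual pass stays silent; a genuinely risky phrase like “I guarantee you'll double your money” is flagged by both. A production system would replace the rule-based window with a trained classifier, but the false-positive trade-off it illustrates is the same.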
Beyond financial crime, AI-enabled technologies can help firms identify potential misconduct involving emerging financial technologies, including cryptocurrencies:
“These technologies may be especially useful in countering illicit finance activity involving digital assets, an effort for which FinCEN supports financial institutions’ responsible use of novel models, techniques, or strategies.”
“Go for it” not “gotcha”
FinCEN stated that firms looking to experiment with AI will not find themselves in the regulators’ spotlight solely on the basis of their use of innovative technologies:
“Institutions that responsibly experiment with innovative technologies in their AML/CFT programs will not incur any additional risk of being subject to a significant… enforcement action solely based on the use of innovative technologies. To the contrary, FinCEN recognizes that fostering the use of innovative technologies is vital to improving financial crime compliance and fighting illicit finance and strongly encourages their responsible use.”
FinCEN isn’t the only regulator encouraging firms to adopt AI technologies into their compliance programs. In a recent speech, FDIC Chairman Travis Hill announced changes to the agency’s supervisory and regulatory approach to boost market growth and support AI innovation.
In particular, he acknowledged that some firms may hesitate to adopt new technologies for fear of inviting greater regulatory scrutiny, and reassured them that the FDIC supports responsible AI implementation:
“I have heard of some reluctance to adopt these technologies because of fear that examiners will require parallel technology runs, play “gotcha” for past failures that new technologies reveal…At the FDIC, we want banks to innovate in this space, and we will ensure our supervisory approach encourages it.”
With regulators now providing greater clarity on the topic, firms may have the reassurance they were waiting for to begin broader AI adoption.
Enablement over enforcement
FinCEN’s proposals come amid a climate of reduced enforcement actions from U.S. regulators. The Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) have pivoted their regulatory approaches toward promoting innovation and away from “technical non-compliance issues,” though both continue to focus on financial crime enforcement, especially against individual perpetrators.
Regulators in the U.K. and Canada have echoed this sentiment, with the Financial Conduct Authority (FCA) launching a new five-year strategy to support sustained economic growth by enabling firms to responsibly experiment with, develop, and test AI in a way that drives innovation.
The FCA is also conducting ongoing reviews into firms’ AML practices and procedures. Additionally, the Office of the Superintendent of Financial Institutions (OSFI) has established a new “AGILE” framework to encourage firms to embrace AI responsibly and in line with regulatory expectations.
Green light – but with oversight
Firms have always faced the balancing act of adopting evolving technologies while navigating regulatory expectations. The wave of enforcement actions over off-channel communications is an example of what can happen when firms get this balance wrong, and may go some way to explaining why firms have been hesitant to fully embrace AI.
Regulators encouraging firms to utilize evolving technologies without fear of enforcement reprisal is reassuring, but firms must put in the work to ensure AI is integrated and rolled out responsibly. This means working across departments to identify risk areas, thoroughly vetting third-party vendors, setting clear performance benchmarks, and ensuring human oversight.
With regulators seemingly giving firms the green light for experimentation and innovation, risk management appears to be entering a new frontier. However, compliance teams need to ensure they’re laying firm foundations of transparency and accountability – technology may evolve, but the basics remain the same.
As regulators give firms the go-ahead to experiment with evolving technologies in risk monitoring frameworks, compliance teams would do well to implement robust AI-enabled tools that let them innovate responsibly.