Global Relay’s Content and Brand Coordinator, Aarti Agarwal, sat down with our Senior SME, Surveillance, Robert Nowacki, to debunk the industry’s most prevalent myths surrounding large language models (LLMs):

Debunking the latest LLM myths
Skepticism around LLMs is common within the financial services industry; however, much of it is based on outdated or inaccurate assumptions. Global Relay's SME sits down to unpack what modern LLMs can actually deliver.
MYTH: LLMs cannot effectively or accurately detect financial misconduct because they are not industry specific
Pre-trained LLMs are not limited to a single dataset, so they are not restricted to absorbing a narrow set of information. This allows them to identify patterns and anomalies that an industry-specific model might miss. Trained on vast, internet-scale data spanning multiple domains, they bring a deep contextual understanding to the task of detecting financial misconduct.
MYTH: LLMs cannot provide meaningful and justifiable reasons when flagging communications for risk
This is false. LLMs analyze messages in their entirety, extracting context and key phrases and testing them against risk categories. By comparison, lexicon-based and industry-specific models require information to be fed into them and are more likely to produce false positives.
Global Relay’s LLM is aligned to 11 key risk categories and covers over 130 risk indicators to capture compliance threats, with each category further refined to address specific concerns such as excessive trading, money laundering, and insider trading.
MYTH: Due to their large size and broad capabilities, LLMs require a lengthy training and retraining process
Pre-trained LLMs, like those engineered by Global Relay, require only simple prompt adjustments to adapt to emerging and evolving risks. Because these models are continually upgraded and replaced as the technology advances, lengthy retraining is not needed.
MYTH: Firms must be concerned about the risk LLMs pose to data security and client privacy
This can be true; however, hosting LLMs within a firm's own data environment ensures security and prevents data privacy from being compromised. Global Relay’s private data center means more data can be processed at a significantly lower cost, with client privacy and data security assured.
MYTH: LLMs are neither explainable nor transparent, and therefore cannot be trusted
Explainability is a prevalent concern regarding AI in the financial services industry. Firms can build trust by maintaining a comprehensive risk governance framework that adheres to global AI governance principles.
Global Relay’s LLM not only flags risks but also provides a clear explanation of why it has done so, preventing ambiguity and confusion and reinforcing transparency.
Innovation and AI go hand in hand in the current regulatory climate, and firms should view AI as an opportunity rather than a risk. AI and LLMs can be used to detect risk and help financial services remain compliant and prosperous. This is why Global Relay has harnessed the power of AI to improve surveillance systems, to the benefit of financial services and regulated industries.