AI is a “shiny new Ferrari” – can your compliance team drive it?

Compliance leaders visited the Global Relay offices to share their thoughts on integrating AI into compliance and monitoring frameworks and the steps firms can take to ensure successful implementation.

17 April 2026 · 7 min read
By Kathryn Fallah
Written by a human

Compliance leaders from Robinhood, Prudential, and Barclays shared their thoughts on integrating AI into compliance and monitoring frameworks at Global Relay’s Compliance & Conversation event, held at our Midtown Manhattan office. Panelists quoted below include Felix Abu and Claudio Crisafulli.

AI in risk monitoring: Where do firms really stand?

Artificial intelligence (AI) is on every compliance team’s radar. As with any major technological advancement, successful adoption begins with laying the groundwork and developing an understanding of that technology’s capabilities – and the risks.

Firms have begun rolling out a variety of AI implementations, from leveraging efficiency tools like Microsoft Copilot to AI-enabled communications monitoring solutions. These early applications have planted the seeds of adoption, with experts stating that it’s only a matter of time before firms utilize AI more extensively:

“Outside the likes of platforms like Copilot, we are starting to see the introduction of [AI] agents as a deployable tool. I do think we still are at the beginning of what will be a very long ‘short journey’ as everybody tries to keep up.”

“AI risk doesn’t live in one department”: Cross-departmental partnerships are vital for success

As with all transformative technologies, firms must weigh the risks and rewards AI presents, with pressure falling on compliance teams to ensure responsible adoption. But managing AI risk – establishing secure frameworks, safeguarding sensitive client data, and maintaining thorough documentation of how models are built and trained – isn’t just a “compliance team problem.”

Felix Abu described how firms need to establish lines of responsibility and cooperation between stakeholders across the business:

“We realized that AI risk doesn’t live in one department. So, we are working with engineering, legal, security, etc., to get that cross-functional partnership, which has helped us innovate responsibly but also at speed.”

Additionally, global organizations must consider AI legislation in every region they operate in. Abu flagged that legal requirements can differ across jurisdictions, which firms need to take into account:

“What’s appropriate in the U.S. might not be elsewhere. It will take an additional layer of governance and expertise to understand what the gaps are. It’s a challenge, so you must be aware of international laws. As you grow, you must take that into account.”

Overcoming “AI hysteria” to deliver competitive advantage

AI promises to accelerate workflows and enhance productivity, but a key question is whether it can actually deliver. Claudio Crisafulli cautioned that AI hype can drive firms to adopt generative technologies as quickly as possible, when it’s more important to prioritize due diligence to ensure long-term success:

“You have to make sure these opportunities are properly vetted. If you’re spending capital for these projects, you must overcome the ‘AI hysteria.’ Everyone thinks, ‘If I’m not using [AI] in some sort of way, I fall behind.’ But technology changes very quickly – today you’re on the leading edge and tomorrow, you’re behind. To keep up with that, spend time thinking about what will impact your business and if that keeps you efficient.”

Three key steps to successful AI integration:

1. “If the foundation of your house is crumbling, don’t build a pool”

The pressure is mounting to roll out AI and stay at the cutting edge of evolving technology. Firms want to implement generative tools to reap efficiency and scaling benefits, though adoption looks different for every firm.

If firms don’t allow compliance teams to establish a strong foundation to enable responsible implementation, AI could cause unintended risk instead of helping to address it. Therefore, according to the panel, it’s important for firms to focus on the compliance fundamentals:

“If the foundation of your house is crumbling, don’t build a pool. There’s a lot of space between where people are today in AI – you don’t jump from spreadsheets to agentic AI. Take care of the foundational stuff first and then move accordingly.”

2. Training is essential so teams don’t crash their “shiny new Ferrari”

Above all, training and education should be central to your AI compliance strategy. The landscape is constantly evolving, and firms that don’t build AI fluency across their teams may be setting those teams up to fail.

Abu explained how firms that help their compliance teams develop their AI skills and knowledge will be in the best position to capitalize on the advantages of AI, while mitigating the risks:

“If you’re going to deploy an AI tool, make sure you train your staff on how to use it. They’ve got a shiny new Ferrari, and they don’t know how to drive stick. You’re going to crash it right away. So, teach them some basic prompts, teach them those guardrails, what information you should or shouldn’t input.”

3. Review processes are changing in real time

As a “disruptive” technology, AI is driving changes to many traditional workflows and approaches, especially within compliance. Where lexicon-based systems work by identifying exact keyword matches, Large Language Models (LLMs) analyze complete messages and their context to flag potential risk. When a message is flagged, an AI-enabled system also explains why it may contain risk.

Because this method is designed to reduce false positives, reviewers may face a lower volume of alerts, but with more detailed descriptions of the risks that may be present. Crisafulli therefore advises that reviewers may need a different approach to assessing and managing flagged messages:

“[This] technology looks very different than what we’re used to in the lexicon world. When we look at the outcomes from these models, ask, ‘Are they identifying the right types of risks? Are they properly interpreting the conversation?’”
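The shift reviewers face can be sketched in a few lines: a lexicon system surfaces only the matched term, while an AI-enabled alert carries a judgment plus a rationale. The lexicon terms, the `llm_flag` stub, and the alert shape below are illustrative assumptions for comparison, not any vendor’s actual system.

```python
# Lexicon-based monitoring: flag on exact keyword matches only.
# These terms are invented examples, not a real compliance lexicon.
LEXICON = {"guarantee", "off the books", "delete this"}

def lexicon_flag(message: str) -> list[str]:
    """Return the lexicon terms found in a message (exact substring match)."""
    text = message.lower()
    return [term for term in LEXICON if term in text]

def llm_flag(message: str) -> dict:
    """Hypothetical shape of an AI-enabled alert: a risk flag plus a rationale.

    In practice this would call a trained model; the 'model' here is stubbed
    with a trivial rule purely to show the richer output a reviewer sees.
    """
    risky = "guarantee" in message.lower()
    return {
        "flagged": risky,
        "explanation": "Possible assurance of returns to a client." if risky else "",
    }

# A lexicon hit names only the matched term...
hits = lexicon_flag("I can guarantee you a 10% return")   # → ["guarantee"]
# ...while the AI-style alert explains why the message was flagged,
# giving the reviewer context to assess rather than just a keyword.
alert = llm_flag("I can guarantee you a 10% return")
```

The point of the contrast is the reviewer’s workflow: fewer alerts, each carrying an explanation that must itself be assessed for whether the model “properly interpreted the conversation.”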

“Data volumes are going to increase – and so is complexity”

One of the most impactful uses of AI for compliance and monitoring is its ability to interpret more than just written communications. Coupled with its capacity to quickly and accurately process vast quantities of data, firms onboarding AI solutions now put themselves in good stead to handle communications data that will only grow in volume.

As part of a fireside chat, Prabhu Ramamoorthy, Global Developer Relations Manager for Financial Services at NVIDIA, explained the scope of communications that compliance teams will be able to analyze with AI moving forward:

“We have digital channels like transcription, language, and voice, which can be interpreted with AI for compliance. Data volumes are going to increase, and so is complexity, which can be addressed with vertical compliance AI solutions from specialized AI partners.”

While compliance teams across the industry are at various stages of AI implementation – from early adoption to more advanced use cases – the technology only continues to gain momentum. Whatever stage firms are at, they must develop robust frameworks and governance to ensure AI is used compliantly, responsibly, and effectively.

Global Relay’s communications recordkeeping, eDiscovery, and risk monitoring solutions help compliance teams proactively manage risk in a deregulatory environment. Learn more about Global Relay solutions for risk management.
