Regulatory Unwrapped: Innovation, regulation, revolution – which stage has AI reached?
Can firms retire lexicon-based monitoring? Is AI really just a cost and resource reduction tool? Find the answers to all this and more in episode two of Regulatory Unwrapped.
Written by a human
In this episode of Regulatory Unwrapped, we are joined by Adam Clarke, Partner, Deloitte, and Global Relay’s Director of Regulatory Intelligence, Rob Mason, to explore how financial firms should look to capture and monitor communications data in the age of artificial intelligence.
Listen in for key insights on:
What communication channels are considered in the scope of business communications, and what should be done in the event you capture more data than you can review?
- The prevalence of mobile has created a raft of challenges as it widens the scope of channel types and forms.
- Different firms draw the lines in different places. What is important is alignment between monitored channels and the policies governing unmonitored ones.
- The volume of data is outpacing review capacity; therefore, the question is not just what to capture but what to do with it.
On AI as a game changer – Where is AI presenting opportunities versus challenges, and are bad actors ahead of the game?
- AI is being widely adopted on the premise of reducing cost. However, while it reduces resource burden, it cannot compensate for poor data management; human judgment remains central, and front-to-back governance is more important than ever.
- The greatest opportunity for AI is within unstructured data, where AI allows firms to move away from lexicon-based models to prompt-based approaches.
- Unlike rules-based algorithms, AI-powered algorithms introduce new risk areas, because it is harder to guarantee guardrails against market abuse.
- Firms and bad actors are potentially in an arms race as to who scales faster when it comes to AI.
Validating AI models – How do you govern an AI model that makes surveillance decisions, and where does accountability lie?
- Regulators such as the FCA have flagged that compliance teams need clear processes for ongoing quality assurance.
- Regulators’ core expectations have not changed: they require evidence of system effectiveness, a testing methodology, and ongoing quality assurance and calibration.
- Explainability is an outcomes-based question. Can firms demonstrate that their systems are producing meaningful risk identification, even if the model logic isn’t fully transparent?
- Regulators’ historic accountability frameworks weren’t built for this; however, regulators are wary of becoming a blocker to AI adoption. The answer lies in testing that is robust, documented, and, most importantly, ongoing.
Can the industry retire the lexicon-based monitoring systems currently in place, and how do firms build the internal confidence to support this move?
- Surveillance will never find everything; the goal is meaningful improvement, not perfect outcomes.
- AI can surface risks that lexicons never could, and its ability to understand language in context, rather than simply match keywords, creates new risk management capabilities.
- Technology sign-off is a critical part of the transition, and stakeholders across the business are increasingly aware of the commercial and reputational stakes of doing this well.
- For buy-in, internal education is key. An AI solution that flags 50 alerts, 45 of which are genuine, is in fact better than a legacy system that flags 10 alerts, all of them genuine; stakeholders shouldn’t fixate on the alerts the new system does not replicate.
Are regulators keeping pace when it comes to AI integration into surveillance workflows?
- Regulators are consciously holding back from over-prescribing rules; they are increasingly viewing everything through an AI lens while also trying to avoid stifling innovation.
- Regulators are transparent in that they are learning with the industry, but their fundamental requirements have not shifted: demonstrate effectiveness, show your testing, evidence ongoing calibration.
- Any gap between regulators and firms around AI is not necessarily dangerous if firms are proactive in sharing what they’re doing and having open conversations about AI-driven approaches.