Heads or t-AI-ls: FCA speech sets out ‘coin toss’ view on AI adoption

A recent speech by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, has laid out the FCA’s views on AI, its potential, and how existing frameworks might already be equipped to regulate its use.

13 October 2023 · 9 min read
By Jay Hampshire

In brief:

  • Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, recently gave a speech on the use of AI in financial services
  • The speech indicates that existing regulatory frameworks, like the SM&CR, can be applied to regulating AI use
  • Other regulators, including the SEC, have proposed plans for dedicated, specific AI regulations

Few things start a speech off as well as an apt analogy. In her speech to the City and Financial Global AI Regulation Summit 2023 on 5 October 2023, Jessica Rusu, Chief Data, Information and Intelligence Officer at the Financial Conduct Authority (FCA), chose a particularly good one.

She compared the potential of Artificial Intelligence (AI) to two sides of a coin: on one, its power to transform the financial industry; on the other, unchecked risk. She summarized that “right now the coin is currently spinning in mid-air, and we can influence the outcome. This is a pivotal moment.”

Certainly a neat analogy, but to take the comparison further, AI is arguably more akin to another contentious currency – cryptocurrency: widely misunderstood, fast-changing, and one that regulators are racing to catch up with.

Toss a coin to your regulator

Rusu’s speech centered on a root question, one playing out in boardrooms around the world at organizations large and small:

“Should the financial services industry be embracing and adopting AI, or should they steer clear?”

On one side of the AI coin, Rusu highlights “the shiny prospects of AI-powered innovation, promising greater operational efficiencies and accessibility … increasing revenues and driving innovation”. On the other side of the coin, “a whole host of potential risks”.

Rusu frames the potential risks of AI in terms of risks that businesses are already managing, including digital infrastructure resilience, critical third-party risk, consumer risk, and data protection and ethics considerations. Her speech also broke down how the FCA itself is deploying AI in a beneficial capacity:

“Our Advanced Analytics unit is using AI and Machine Learning (ML) in providing us additional tools to protect consumers and markets. We have developed web-scraping and social media monitoring tools that are able to detect, review and triage potential scam websites, which we use to proactively monitor.”

There was also mention of the FCA’s ‘Digital Sandbox’, an initiative designed to provide a ‘safe’ environment for AI innovation, giving developers access to datasets and APIs to build AI solutions, including “300 public and synthetic datasets as well as 1000-plus Application Programming Interfaces (APIs)”, according to Rusu.

Not-so-newly minted

What was particularly interesting about Rusu’s speech is that it suggests the framework to regulate AI is already in place:

“The FCA is technology-neutral and pro-innovation. But, we are very clear when it comes to the effect and outcomes these technologies can have. We expect firms to be fully compliant with the existing framework, including the Senior Managers & Certification Regime (SM&CR) and Consumer Duty.”

Rusu asserts that “these frameworks provide us the tools we need to work with regulated firms to address material risks associated with AI, and provide both a context for the regulation of the technology and create incentives for the right outcomes”. The Consumer Duty, which came into force at the end of July 2023, is applied quite broadly here to ensure firms work to counteract the potential risks AI might present to consumers:

“The Consumer Duty requires firms to play a greater and more positive role in delivering good outcomes for retail customers, including those who are not direct clients of the firm. The Consumer Duty also includes cross-cutting rules pertaining to retail customers, requiring firms to act in good faith, avoid causing foreseeable harm, and enable and support retail customers to pursue their financial objectives.”

One would hope that any AI implementation would “avoid causing foreseeable harm”, and it will be interesting to see how the Consumer Duty is applied to AI regulation. Rusu’s discussion of the Senior Managers and Certification Regime (SM&CR) is more focused:

“The SM&CR, in particular, has a direct and important relevance to governance arrangements and creates a system that holds senior managers ultimately accountable for the activities of their firm, and the products and services that they deliver – including their use of technology and AI, insofar as they impact on regulated activities.”

We have seen a wider shift towards regulators holding senior staff to account as part of an increasingly “zero tolerance” approach to non-compliance and the culture behind it, and Rusu’s view that leaders are accountable for their firm’s use of technologies like AI continues this shift. The best defense senior staff will be able to employ, as and when the SM&CR is used as a regulatory yardstick to measure AI compliance, is explainability – something the FCA has highlighted before.

In a speech delivered earlier in the year, Nikhil Rathi, Chief Executive of the FCA, emphasized the importance of firms being able to account for how their AI models work:

“Another area that we are examining is explainability – or otherwise – of AI models … Firms in most regulatory regimes are required to have adequate systems and controls. Many in the financial services industry themselves feel that they want to be able to explain their AI models – or prove that the machines behaved in the way they were instructed to – in order to protect their customers and their reputations – particularly in the event that things go wrong.”

Whether an AI model behaves in the way it was intended to or not, the FCA will expect firms (and their leadership) to be able to explain why the model behaved that way. This will extend to the type of data the model is built on and uses (data ethics and governance being a focus of Rusu’s speech) and its intended outcomes. Being able to explain – and prove – that negative outcomes were not the intended result of a model will be an essential component of compliant AI implementation.

Foreign exchange

While Rusu’s speech indicates confidence from the FCA that existing regulatory frameworks are robust and flexible enough to be applied to the financial industry’s evolving use of AI, it is worth noting that regulators across the pond may take a different view.

A recent speech from Commodity Futures Trading Commission (CFTC) Commissioner Christy Goldsmith Romero set out her belief that:

“Federal regulators are just getting started when it comes to AI.”

Goldsmith Romero’s view is that regulators are still learning about AI and its potential for both risk and reward, and that working with industry is required to establish a culture of ‘responsible’ AI usage – a culture in which explainability is once again a vital component:

“Responsible AI is AI that is designed and deployed in a way that aligns with the interests of many stakeholders, is transparent and explainable, auditable by humans, uses unbiased data, and minimizes the potential for harm.”

Meanwhile, the Securities and Exchange Commission (SEC) has set out a clear intention to create a regulatory framework around AI use. On 13 June 2023, the Commission announced its semi-annual rulemaking agenda, which included several proposals around the future regulation of AI and ML approaches within finance, securities, and trading – a clear statement of intent to establish firm AI regulation.

SEC Chair Gary Gensler has confirmed that establishing a framework around AI is a priority for the regulator, part of the SEC’s ongoing mission to “protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation”. Like Goldsmith Romero, Gensler’s comments center around the need for regulators (and their rules) to keep pace with change:

“Technology, markets, and business models constantly change. Thus, the nature of the SEC’s work must evolve as the markets we oversee evolve. In every generation since President Franklin Roosevelt’s, our Commission has updated its ruleset to meet the challenges of a new hour.”

Call it in the air

With AI and its uses within the financial sector still evolving, it remains to be seen on which side Rusu’s coin will land. It may be ‘heads’, and the FCA’s existing frameworks like the SM&CR and Consumer Duty prove flexible enough to be applied to AI’s growing use. It may be ‘tails’, and the SEC and CFTC’s drive to establish bespoke, dedicated AI regulations proves necessary. Rusu’s speech suggests that the outcome of the AI coin toss can be influenced by “collaboration, both domestic and international”:

“It may be that the proverbial coin that is AI is still in the air, with the outcome hanging in the balance. We can, however, determine the fate of this digital coin toss through collaboration, our commitment to beneficial innovation and ensuring the right guardrails are in place.”

Whether those guardrails already exist or need to be built, and whether regulators apply existing frameworks or establish new ones to tackle non-compliant AI use, firms can’t leave compliant implementation and explainability to chance.

As regulators begin to set out their agendas around artificial intelligence, understanding how to stay compliant in the age of AI can be daunting. Our solutions already use AI and machine learning to enhance your compliance, and our team of compliance professionals is on hand to give you expert advice on AI implementation and explainability.