A recent FCA speech details the regulator’s stance on AI innovation moving forward.

FCA encourages AI innovation as industry views on evolving technology shift

The FCA – alongside other regulators – has encouraged AI innovation in business workflows to support growth, while also urging firms to continue prioritizing responsible use through strong security frameworks.

06 May 2025 · 7 mins read
By Kathryn Fallah

Written by humans

In brief:

  • In a recent speech, the FCA expressed its support for AI innovation, highlighting the launch of its AI Live Testing initiative to encourage responsible deployment of evolving tech models
  • JP Morgan released an open letter urging third-party providers to prioritize thorough security frameworks as opposed to rushing to integrate new features
  • The SEC and OCC have also made remarks in support of AI innovation, suggesting a shift in approach surrounding regulation of evolving technology in the U.S.

On both sides of the Atlantic, financial regulators are in a state of transition. With the Financial Conduct Authority (FCA) adopting a new five-year strategy to promote U.K. growth, and U.S. regulators like the Securities and Exchange Commission (SEC) experiencing a change in leadership, priorities are being re-examined.

One of the most prominent items on the agenda, unsurprisingly, is artificial intelligence (AI). Although AI has been on the industry’s radar for years, recent remarks show that it is at the forefront of regulators’ minds – particularly at the FCA.

In a recent speech, the FCA’s chief data, information, and intelligence officer, Jessica Rusu, set out the regulator’s support for AI-driven growth and the ways in which it can help firms streamline the innovation and integration of new technologies.

The FCA’s innovation front door is open

As part of its new five-year strategy, the FCA has committed to becoming more “tech positive to support growth.” In its push to become a leader in regulatory innovation, the regulator announced the launch of its AI Live Testing initiative to help regulated firms adopt AI technologies:

“FCA AI Live Testing enables generative AI model testing in partnership between firms and supervisors, to develop shared understanding and explore evaluation methods that will facilitate the responsible deployment of AI in UK financial markets.”

The AI Live Testing initiative lets firms build confidence in the performance of the AI technologies they plan to implement while also receiving regulatory support:

“Our goal is to give firms the confidence to invest in AI in a way that drives growth and delivers positive outcomes for consumers and markets while at the same time offering us insights into what is needed to design and deploy responsible AI.”

This shift in strategy follows criticism from industry and the U.K. government of a series of FCA proposals in recent years, such as its “name and shame” plan and its 12-month email deletion policy. The regulator now aims to reduce the “regulatory burden” on firms and reposition itself at the forefront of the global market.

In her speech, Rusu referenced the joint FCA and Bank of England survey issued in November 2024, which found that 75% of firms have already adopted some form of AI. However, she highlighted that many firms are not harnessing AI technology’s full potential.

While many firms have adopted some form of AI technology, most use cases are internal applications rather than advancements that could benefit consumers and markets. In response, Rusu encouraged firms to explore the ways that AI could benefit additional business workflows, stating that the FCA’s “innovation front-door is always open.”

Rusu also described how the FCA itself is using AI to support its commitment to being a smarter regulator. For example, the regulator is experimenting with large language models (LLMs) to analyze text and deliver efficiencies in its supervisory processes.

While emphasizing that humans remain essential for making judgment calls, the FCA hopes to harness AI to speed up information extraction and enhance synthesis. By implementing these technologies, the regulator aims to become:

“A smarter regulator, enabled by smarter systems.”

Firms call for a safety check on SaaS

Regulators aren’t the only ones highlighting the need for responsible innovation when handling emerging technologies. Industry players have also weighed in, advising service providers to ensure robust security when developing and deploying software services.

In an open letter to its third-party suppliers, Patrick Opet, chief information security officer at JP Morgan, explained that modern software as a service (SaaS) models enable cyberattacks and create vulnerabilities that put the wider economic system at risk. As a call to action, Opet urged providers to prioritize comprehensive security frameworks rather than rushing to incorporate the latest features:

“Fierce competition among software providers has driven prioritization of rapid feature development over robust security. This often results in rushed product releases without comprehensive security built in or enabled by default, creating repeated opportunities for attackers to exploit weaknesses.”

Opet goes on to state that SaaS has become the dominant method of software delivery, reshaping how companies integrate services and data. This push to modernize software solutions has also changed the way security checks are performed – and, if those checks are not thoroughly developed, can compromise security, as in the case of AI models:

“We must establish new security principles…that enable the swift adoption of cloud services while protecting customers from their providers’ vulnerabilities…We need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems.”

All in favor of innovation? Say AI

The FCA is not the only regulator to promote the use of AI within financial workflows – a recent speech from Acting Comptroller Rodney Hood likewise demonstrated the Office of the Comptroller of the Currency’s (OCC) commitment to promoting AI innovation in the U.S.

Hood emphasized the U.S.’s position as a global leader in AI innovation and said that, to support AI integration, the OCC will ensure that innovations are “fit for purpose” and “responsible and trustworthy.”

In the past month, the SEC has also adopted a new stance on tech regulation following leadership changes. Commissioner Hester Peirce and Acting Chairman Mark Uyeda opined that the SEC’s previous “approach with caution” stance on AI was impeding innovation, stating that the regulator should instead avoid an “overly prescriptive approach.”

Our recent Industry Insights: State of AI in Surveillance 2025 report also demonstrated a move towards generative technology adoption: 31% of respondents shared that they are already integrating AI technologies into surveillance workflows, while 38% are watching the space before making a decision.

AI adoption presents undoubted opportunities, but strong security and governance frameworks should never be an afterthought. When implementing AI-enabled solutions, firms must confirm that third-party providers offer thorough documentation of how their models are trained and validated, in line with AI governance regulations.

It’s clear that the tide is turning on emerging technologies. As AI integration and regulation progress, firms must put security first – both internally and with external suppliers – to ensure compliance standards are met while riding the wave of tech innovation.

With regulators and firms alike stressing responsible use when exploring AI innovation, it would behoove firms to enlist the support of third-party vendors who prioritize not only future-forward compliance but also comprehensive security controls.

 
