
How are financial regulators approaching AI integration?

Regulators around the world have been implementing a variety of measures to manage the use of AI within the financial sector. We take a look at the key approaches so far.

09 May 2024
By Kathryn Fallah

In brief:

  • Regulators have used a range of methods to respond to AI integration, such as pro-innovation, technology-neutral, risk-based, and principles-based approaches
  • Though baseline frameworks and guidelines have been created to structure AI use in finance, regulators worldwide are still working to understand the intricacies of the technology
  • Through collaboration, evaluation, and research, regulators are determining how to manage AI models moving forward

There’s no doubt that artificial intelligence (AI) has proven to be groundbreaking. Several decades ago, the widespread use of AI still seemed like a far-fetched concept. Now, it’s more common than not to talk to a customer service chatbot, get “Recommended for you” ads when online shopping, or use your GPS to detect traffic patterns.

While it has presented tremendous benefits, AI remains a polarizing topic. Beyond the ways in which it has helped, it is a technology that is still evolving. A study conducted by the Pew Research Center found that in 2023, 52% of Americans were more concerned than excited about the integration of AI into daily life.

Likewise, it seems that financial regulators share a similar feeling. While cautiously optimistic and thoughtfully critical, regulators across the board have made clear that they are aware of the shifting industry and are preparing to ensure that firms are using AI securely.

Learning the AI ropes

Because the technology is new and still changing, defined guidelines around AI are still developing. Regulators are setting expectations, creating broad outlines, and devising best practices to manage potential risks and establish standards that enable firms to harness AI’s abilities.

Through the regulatory discussions and frameworks that are being created and revised, governing agencies globally are joining the conversation and collaborating to enhance general knowledge about the evolving technology.

By building on collective insights and research across the field, regulators and governments seem to be aligning on how to gauge, design, and implement comprehensive parameters that allow firms to think critically about how they deploy AI models. To do so, they must anticipate challenges, embrace advancements, and stay alert.

A similar theme of evaluation and exploration runs through all financial jurisdictions, though certain regulators are opting for a more assertive, hands-on approach, while others are delivering guidance from a higher level so as not to stifle innovation.

Tackling AI: A mixed-bag approach

What is the U.S. approach to AI regulation?

We have seen intensified action from the U.S. in navigating the opportunities and obstacles that AI presents. Regulators like the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC) have acknowledged the unparalleled ways that firms are using machine-learning technology to analyze information and streamline business operations.

Through a succession of speeches, statements, and roundtable discussions, U.S. financial regulators have approached AI management by working to understand how the technology operates so they can establish parameters that safeguard market integrity and consumer protection.

The CFTC, one of the most active regulators in initiating discussions around AI, has declared that it is “technology neutral” and is focusing on AI’s evolution – particularly in relation to fairness, transparency, safety, security, and explainability. The regulator has held multiple meetings with its Technology Advisory Committee to exchange ideas about how different regulatory bodies are navigating AI use, evaluating its benefits, and advising on threat areas.

At the most recent AI Day meeting on May 2, Federal Reserve System Chief Innovation Officer Sunayna Tuteja spoke about how the agency is advancing responsible innovation, underscoring that in addition to minimizing risks, it is important to interrogate AI and consider how it can reshape the industry:

“Are we looking at this new technology in the context of solving gnarly problems? Are we designing meaningful optionality and solutions that can help us level up the institution for the present and future?”

Another critical piece of guidance is the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which was created to bring clarity to concepts like risk and trustworthiness in AI systems. Under the framework, NIST states that a trustworthy AI system should be: valid and reliable; safe; secure and resilient; privacy-enhanced; interpretable and explainable; fair, with harmful bias managed; and transparent and accountable.

During AI Day, NIST Chief AI Advisor Elham Tabassi explained that, instead of taking a prescriptive approach, the framework is risk-based and puts the emphasis on outcomes:

“In order to be able to improve the trustworthiness of the AI system – the safety, the security, and the privacy – you need to know what they are…and how to measure them.”

Similarly, the SEC has been vocal about its view on AI integration in the financial sector, highlighting the impact it can have at both the micro and macro level. Last year, it proposed rules aiming to manage AI in investor interactions by requiring enhanced evaluation, though no official guidance has been finalized yet.

In a podcast with Politico Tech, SEC Chair Gary Gensler reflected on AI, what’s happened so far, and where things are headed. Though Gensler described AI as “the most transformative technology of our time,” he cautioned that AI’s ability to accumulate data to make predictions could lead to herding due to firms’ overreliance on the same base models. On the other hand, Gensler also acknowledged that the SEC has already leveraged the advantages of machine learning (ML), deep learning, and data review to oversee and surveil markets.
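
To illustrate the herding concern, consider a small, hypothetical simulation – our own sketch, not an SEC analysis. If many firms derive trading signals by fine-tuning the same base model, their day-to-day decisions cluster far more tightly than if each firm models independently:

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms, n_days = 50, 1_000

# Hypothetical setup: every firm fine-tunes the same base model, so each
# firm's signal is the shared model output plus firm-specific noise.
base_signal = rng.normal(size=n_days)
shared = base_signal + 0.3 * rng.normal(size=(n_firms, n_days))

# Counterfactual: each firm builds its own independent model.
independent = rng.normal(size=(n_firms, n_days))

def same_side_fraction(signals: np.ndarray) -> float:
    """Average fraction of firms on the same side of the trade each day."""
    buy_fraction = (signals > 0).mean(axis=0)
    return float(np.maximum(buy_fraction, 1 - buy_fraction).mean())

print(f"Shared base model:  {same_side_fraction(shared):.0%}")       # roughly 90%
print(f"Independent models: {same_side_fraction(independent):.0%}")  # roughly 55%
```

The numbers are directional rather than calibrated, but they capture the mechanism Gensler describes: a shared model foundation correlates behavior across otherwise independent firms.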

Outside of the industry, the Biden-Harris Administration released an Executive Order on the use of AI to increase transparency and accountability around the evolving technology while laying the groundwork for defined governance. To tackle the subject more critically, the U.S. also announced the creation of a new body within the Department of Commerce: the U.S. Artificial Intelligence Safety Institute.

What is the Canadian approach to AI regulation?

The Office of the Superintendent of Financial Institutions (OSFI), Canada’s main financial regulator, is developing distinct expectations around model risk management in relation to AI and ML models. Following a consultation period, the regulator is now in the process of amending Guideline E-23, its Enterprise-Wide Model Risk Management for Deposit-Taking Institutions standard, to reflect advancing technologies.

Guideline E-23 is principles-based, and the pending updates reevaluate the definition of “model” to include AI as it becomes an inherent tool for financial institutions. OSFI defines a model as “the application of theoretical, empirical, judgmental assumptions and/or statistical techniques, including AI/ML methods, which processes input data to generate results.”

OSFI also defined model risk as “the risk of adverse financial, operational, and/or reputational consequences arising from flaws or limitations in the design, development, implementation, and/or use of a model.” Alongside this description, OSFI outlined the range of situations that could give rise to model risk, such as inappropriate specifications or flawed hypotheses.

Under the revised guideline, financial institutions are expected to maintain a sound model risk management framework – monitoring, testing, reviewing, and troubleshooting the AI systems they use – to remain proactive in the face of technological evolution.
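
As one hypothetical illustration of what such monitoring might look like in practice – a minimal sketch, not a method OSFI prescribes – a firm could routinely compare the distribution of live model inputs against the data the model was trained on and escalate drift for review:

```python
import numpy as np
from scipy import stats

def feature_has_drifted(training_sample: np.ndarray,
                        live_sample: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Flag a model input for review when its live distribution differs
    from the training distribution (two-sample Kolmogorov-Smirnov test)."""
    result = stats.ks_2samp(training_sample, live_sample)
    return result.pvalue < alpha

# Hypothetical usage: one input feature of a deployed credit model.
rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was built on
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production data

if feature_has_drifted(training, live):
    print("Drift detected: trigger a model risk review under the firm's framework")
```

In a real framework, a check like this would be one control among many, sitting alongside performance backtesting, challenger models, and documented escalation paths.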

As opposed to other financial jurisdictions that are seeking to maximize the potential of AI while controlling risk areas, OSFI seems to take a warier view, homing in on the repercussions of misuse in a report summarizing the ideas discussed during its Financial Industry Forum on AI:

“Often the focus is on the ‘mean’ of the outcome distribution to justify use and improve the validity of AI applications, however risk thinkers must assess the ‘tails’ of those same distributions, with their peripheral vision and creative minds, to be able to mitigate any unforeseen, disastrous consequences.”
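
To make the mean-versus-tails distinction concrete, here is a small, hypothetical worked example – our illustration, not OSFI’s: the average of a simulated loss distribution can look benign while tail measures such as Value-at-Risk (VaR) and expected shortfall expose the rare, severe outcomes the quote warns about.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical losses from an AI-driven process: a benign lognormal body
# plus rare, severe tail events (heavy-tailed Pareto, ~0.2% of observations).
routine = rng.lognormal(mean=0.0, sigma=0.5, size=n)
tail_events = rng.pareto(a=2.0, size=n) * 50 * (rng.random(n) < 0.002)
losses = routine + tail_events

mean_loss = losses.mean()                # the "mean" of the outcome distribution
var_99 = np.quantile(losses, 0.99)       # 99% Value-at-Risk (a tail quantile)
es_99 = losses[losses >= var_99].mean()  # expected shortfall: average loss in the tail

print(f"Mean loss:              {mean_loss:6.2f}")
print(f"99% VaR:                {var_99:6.2f}")
print(f"99% expected shortfall: {es_99:6.2f}")
# The mean suggests little risk; the tail metrics reveal the disastrous
# outcomes OSFI asks risk thinkers to assess.
```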

What is the U.K. approach to AI regulation?

Though U.K. regulators like the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) have released statements recognizing AI’s growth and possible challenges, the general consensus is to take a hands-off approach so as to foster competitiveness and support innovation.

The U.K. government released a pro-innovation strategy to remain forward-thinking in consideration of transformative technologies. In response, the FCA declared that it is a “technology-agnostic, principles-based and outcomes-focused regulator” and will accept the integration of AI into markets while taking a closer look at the risks to ensure its main regulatory objectives are not undermined.

Instead of focusing specifically on AI, the FCA is considering technology and data overall and has stated that the tools for AI management are contained within existing guidance:

“Many risks related to AI are not necessarily unique to AI itself and can therefore be mitigated within existing legislative and/or regulatory frameworks. Under our outcomes-based approach, we already have a number of frameworks in place which are relevant to firms’ safe use of AI.”

Similarly, the PRA’s main objective is to promote the safety and soundness of the firms it supervises. Its secondary objectives aim to facilitate effective competition between firms and the overall competitiveness of the U.K. economy. The PRA responded to the U.K. government’s “Pro-innovation approach to AI regulation” policy paper and indicated that its approach generally lines up with the following four objectives:

  1. Safety, security, and robustness: Risks should be continuously identified, addressed, and managed
  2. Transparency and explainability: The PRA and FCA do not define ML’s interpretability or explainability, but expect regulated banks to do so
  3. Fairness: AI models should not violate individual or organizational legal rights, discriminate unfairly against individuals, or bring about unjust market outcomes
  4. Accountability and governance: Governance measures could be utilized to oversee AI models and ensure accountability, such as those covered in the Senior Managers and Certification Regime or Model Risk Management framework (SS1/23)

In addition, the PRA and FCA will continue to run surveys compiling industry responses to ML in U.K. financial services to ensure regulatory practices remain up to date.

What is the EU approach to AI regulation?

The most noteworthy move from European regulators – and across AI governance worldwide – has been the EU AI Act, which was passed in March 2024. The Act takes a “risk-based approach” to AI governance, prohibiting certain practices while balancing innovation.

A major aspect of the EU AI Act is that financial organizations will have to comply with heightened requirements where their use of AI models is considered high-risk, such as AI-based creditworthiness assessments. These guidelines have been implemented to enhance safety and ensure that fundamental rights and ethics are preserved.

The Act also addresses the use of large AI models, such as large language models and generative models, obliging organizations that use them to self-assess and mitigate systemic risks, conduct model evaluations, and remain mindful of cybersecurity requirements.

The European Central Bank (ECB) has also recognized AI’s ability to make supervisory processes more efficient. In an article, Elizabeth McCaul, Member of the Supervisory Board of the ECB, summarized the Bank’s forward-looking outlook:

“The role of ECB Banking Supervision is to ensure that banks remain safe and sound. It is not for us to dictate which business models banks adopt…What we can do…is draw on the power of AI to decipher data, understand risks and speed up processes, freeing up more time for human analysis…in an increasingly complex world.”

A(I) matter of time

These matters go hand in hand with other key focus areas on regulators’ agendas, such as cybersecurity and operational resilience, where risks have become more palpable as the industry transforms.

Past the stages of “if” and “when,” it is evident that AI will only become further ingrained in the way that financial firms operate. Since it is here to stay, regulators must determine how to actively manage evolving risks while allowing firms to harness new technologies and optimize the ways they conduct business.

AI innovation has already had a significant impact on finance and will continue to transform the industry as we look ahead. While regulators establish more defined frameworks to manage AI, it is important that firms take a compliance-first approach to new technologies.

If you’d like to know more about our AI-enabled products, speak to the team or visit our product page.