As with every other industry, artificial intelligence (AI) has taken financial services by storm. From planning and analysis to compliance and risk management, firms are increasingly using AI to enhance and streamline a range of vital functions.
As firms begin to engage with advanced AI-enabled tools like GenAI and Large Language Models (LLMs) to free up time for resource-constrained teams, regulators are reassessing their stances on these fast-evolving technologies.
Regulators, though, are caught between firms eager to explore the possibilities of these transformational technologies and their own responsibility to ensure the technologies are implemented responsibly to protect markets and consumers. So, how are they striking the balance between regulation and innovation?
OSFI urges firms to stay “AGILE” with new framework
On March 23, the Office of the Superintendent of Financial Institutions (OSFI) released a final report on the AI risks and opportunities identified during the second series of its Financial Industry Forum on Artificial Intelligence (FIFAI II) workshops.
The report highlighted that AI is transforming financial services by “redefining operational models and competitive dynamics.” It is also reshaping the risk landscape by enabling fraud with “unprecedented speed, scale, and sophistication.”
The regulator outlined six core areas that firms should be mindful of as AI advances and risks emerge: strategic risks, security and cybersecurity threats, consumer risks, knowledge gaps, third-party risks, and financial stability risks.
A continuous theme across these risk areas is the importance of agility. AI is moving at a rapid pace, requiring firms to reassess their compliance policies, security infrastructure, training programs, and vendor management consistently, in near real-time.
“Agility emerged as a central theme to guide a sector that must move dynamically to capture AI’s benefits while responding to fast-evolving risks.”
To help navigate risks and seize opportunities, OSFI introduced its AGILE framework, which outlines the need for:
- Awareness: Understanding how technologies transform the risk landscape to remain ahead of AI-related risks.
- Guardrails: Ensuring responsible use by implementing strong controls, high data integrity standards, human oversight, transparency, and thorough third-party oversight.
- Innovation: Leading with a growth mindset and investing in talent, modern infrastructure, and responsible innovation to drive competitiveness.
- Learning: Building AI skills across all levels of your organization through regular training and collaboration.
- Ecosystem Resiliency: Strengthening resilience through improved third-party oversight, regulatory clarity, and incident-response frameworks.
Firms and their compliance operations are having to be lighter on their feet than ever before, especially as AI use cases become more complex and more prevalent. GenAI has seen explosive momentum, with a 3,000% increase in firms compliantly capturing ChatGPT data between 2024 and 2025.
Guidance like OSFI’s AGILE framework is a useful starting point for firms looking to establish a compliance framework that can keep pace with evolving technologies and allow them to harness AI innovation.
FDIC “wants banks to innovate in this space”
Like OSFI, the Federal Deposit Insurance Corporation (FDIC) is rethinking its position: Chairman Travis Hill announced changes to the agency’s supervisory and regulatory approach to bolster economic growth and foster AI innovation.
Of these changes, Hill stated that reforming the FDIC’s approach to anti-money laundering (AML) and the Bank Secrecy Act (BSA) is a key priority. Particularly, he noted GenAI’s potential to improve the BSA:
“AI…can identify suspicious activity with a speed and precision that legacy “rules-based” systems cannot match.”
Hill also highlighted potential hesitance from firms to adopt AI solutions for fear of opening themselves up to regulatory scrutiny. He clarified that the FDIC’s updated approach recognizes AI’s benefits and supports innovation if it is responsible:
“I have heard of some reluctance to adopt these technologies because of fear that examiners will require parallel technology runs, play “gotcha” for past failures that new technologies reveal…At the FDIC, we want banks to innovate in this space, and we will ensure our supervisory approach encourages it.”
Answering the AI question: Global regulators reconsider their stance on innovation
OSFI’s and the FDIC’s actions follow a wider trend of regulators acknowledging AI’s revolutionary potential to drive innovation. The Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA) have launched initiatives to “embrace advancements” and understand how member firms are leveraging GenAI technologies, including the SEC’s AI Task Force and the FINRA Forward initiative.
FINRA identified common use cases for GenAI tools within financial services, which include data analysis and decision making, summarization and information extraction, and translation and sentiment analysis.
The UK Financial Conduct Authority (FCA) announced that it has put innovation “at the heart” of its five-year strategy. This includes the introduction of a Supercharged Sandbox, which offers firms enhanced datasets and advanced tools for experimentation in a safe environment. The regulator is also using GenAI within its own processes to “modernize regulation and streamline supervision.”
Balancing innovation and regulatory expectation
AI use cases continue to grow, both within day-to-day business functions and compliance workflows, as teams leverage advancements in areas such as risk monitoring and communications surveillance. Fast-moving innovations like agentic AI, which can make decisions autonomously without human oversight, introduce additional benefits and risks that firms need to weigh in order to innovate compliantly and responsibly.
While most regulators have yet to set out specific AI rules, the expectation is that firms remain cognizant of evolving risks and ensure their AI experimentation and implementation meet existing requirements – and the pressure is on to make sure they’re playing by these rules.
With regulators putting forward guidelines, initiatives, and statements to support the responsible adoption of advancing AI technologies, firms that implement robust AI-enabled tools can ensure they remain at the forefront of innovation.