GenAI use is expanding fast, with firms leveraging models for a variety of complex tasks and workflows. But the ever-widening variety of applications firms are finding for GenAI also widens the potential risks. In its 2026 Annual Oversight Report, the Financial Industry Regulatory Authority (FINRA) has outlined the top GenAI use cases within financial services, alongside best practice tips for firms to follow to use AI tools compliantly.
What are the most common GenAI use cases in financial services?
The use cases FINRA lists include an array of GenAI applications, ranging from simple automation tasks to intensive analysis. While GenAI use in any capacity can present risks if not properly controlled, certain applications are particularly noteworthy for their compliance impact.
Content creation and data enhancement
FINRA stated that summarization and information extraction was the top use case among firms, especially for parsing large amounts of unstructured data to extract relevant information. With the number of business communications platforms quickly multiplying, this application is particularly beneficial in helping compliance teams maintain oversight of increasing data volumes – although firms must ensure GenAI models are rigorously tested to avoid potential security threats that could compromise data.
Content generation and drafting is another use case, in which GenAI models produce content such as documents or marketing materials. This is becoming increasingly prevalent – Deloitte, for example, gave 75,000 employees access to GenAI to help with drafting content. However, firms must make sure all outward-facing marketing materials are captured and retained in line with regulations such as the Securities and Exchange Commission’s Marketing Rule.
Data analysis and decision-making
Increasingly, firms are using GenAI tools to handle data analysis and decision-making, which can carry a high degree of risk if not carefully tested and validated.
FINRA defined analysis and pattern recognition as the ability of GenAI tools to identify trends or variances within datasets. Similarly, it defined sentiment analysis as the ability to assess the tone and intent behind a message.
These applications are incredibly useful to compliance teams, such as in the case of communications monitoring, as they enhance risk detection abilities and enable reviewers to identify signs of misconduct. However, without robust model risk management standards, GenAI models could draw inaccurate conclusions or overlook potential risk indicators, leading either to an increase in false positives or to firms missing signs of real risk.
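To make the idea concrete, here is a minimal, rule-based sketch of how a communications-surveillance check might flag risky language for human review. This is purely illustrative – the phrase list, weights, and threshold are hypothetical assumptions, not a vetted lexicon or a real surveillance model:

```python
# Illustrative sketch of rule-based risk flagging in communications
# monitoring. Phrase weights and the threshold are hypothetical examples.

RISK_PHRASES = {
    "off the record": 3,
    "delete this": 3,
    "keep this between us": 3,
    "guaranteed returns": 2,
    "move to whatsapp": 2,
}

FLAG_THRESHOLD = 3  # hypothetical escalation cutoff


def score_message(text: str) -> int:
    """Sum the risk weights of phrases found in a message (case-insensitive)."""
    lowered = text.lower()
    return sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in lowered)


def flag_for_review(messages: list[str]) -> list[tuple[str, int]]:
    """Return (message, score) pairs whose score meets the escalation threshold."""
    return [(m, s) for m in messages if (s := score_message(m)) >= FLAG_THRESHOLD]
```

A GenAI-based system would replace the fixed phrase list with model-driven sentiment and intent scoring, which is precisely why rigorous validation matters: opaque scores that are miscalibrated produce exactly the false positives and missed risks described above.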
What is FINRA's advice on controlling GenAI risk?
Alongside other regulators, FINRA has made clear that its existing rules frameworks apply to GenAI tools just as they do to any other technology firms use. As GenAI takes on a more prominent role, firms must establish thorough supervisory processes to assess the integrity, accuracy, and reliability of generative models.
FINRA has recommended the following steps for firms looking to implement GenAI tools within workflows compliantly:
- Develop a governance framework: Build a governance framework that outlines clear policies around implementation and use of GenAI, maintaining complete documentation of each step. In addition, include review and approval procedures to understand unique risks and what controls may be needed as GenAI use within an organization evolves.
- Be aware of model risk areas: Remain aware of risks associated with GenAI outputs, such as hallucinations or bias, and develop approaches to mitigate these risks.
- Regularly test and monitor: Test GenAI models to understand their capabilities and limitations in certain areas, such as privacy, integrity, and reliability. Continually monitor outputs, prompts, and responses to confirm the model is performing up to standard, especially as models and their use mature.
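The "regularly test and monitor" step above can be sketched as a thin logging-and-validation wrapper around model calls, so that every prompt and response is retained and questionable outputs are flagged for human review. The record fields and validation checks below are illustrative assumptions, not FINRA-prescribed controls:

```python
# Minimal sketch of GenAI output monitoring: log every prompt/response pair
# and attach validation flags. Checks and fields are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelCallRecord:
    prompt: str
    response: str
    timestamp: str
    flags: list[str] = field(default_factory=list)


def validate_response(response: str) -> list[str]:
    """Return validation flags for a model response (hypothetical checks)."""
    flags = []
    if not response.strip():
        flags.append("empty_response")
    if len(response) > 10_000:          # hypothetical length limit
        flags.append("over_length")
    if "ssn" in response.lower():       # crude PII keyword check, illustrative
        flags.append("possible_pii")
    return flags


def log_model_call(prompt: str, response: str,
                   audit_log: list[ModelCallRecord]) -> ModelCallRecord:
    """Append a timestamped, validated prompt/response record to the audit log."""
    record = ModelCallRecord(
        prompt=prompt,
        response=response,
        timestamp=datetime.now(timezone.utc).isoformat(),
        flags=validate_response(response),
    )
    audit_log.append(record)
    return record
```

In practice the audit log would feed a durable, tamper-evident store and the flags would route into a review queue; the point of the sketch is that monitoring means capturing prompts and responses as they happen, not sampling outputs after the fact.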
Ryan Sheridan, Senior Manager of Regulatory Intelligence at Global Relay, weighed in with advice for firms implementing advancing AI use cases:
“As firms strategically extend AI into more tightly regulated and high-risk areas within their organization, they must remain proactive in anticipating potential pitfalls. By embedding layered governance controls and reinforcing oversight, teams can identify, escalate, and remediate anomalies more effectively.”

The next generation: AI agents
For many firms, the use of AI agents seems to be the next step in the evolution of AI within their business. Where GenAI tools generate insights or create content that a human then reviews, agentic AI can make decisions and take actions autonomously, without predefined rules, explicit logic programming, or continuous human oversight.
Greg Rupert, FINRA’s Executive Vice President and Chief Regulatory Operations Officer, explained how AI agents can be used to enhance GenAI use cases:
“These agents can enhance GenAI capabilities by providing users with additional opportunities for task automation and the ability to interact with a wider range of data and systems faster and at a potentially lower cost than more traditional process automation.”
As agentic AI grows in prevalence and sophistication, firms can expect even more opportunities to transform their business workflows. However, FINRA noted that autonomy, scope, authority, and auditability will all need to be taken into consideration.
While the regulator has not imposed AI-specific rules, firms should be well aware that GenAI must be used as compliantly as any other tool in their arsenal – and that compliance policies must evolve in step with AI.
GenAI has opened up a range of opportunities across financial workflows, from transforming data insights to optimizing risk management practices. As firms look to implement GenAI-enabled tools, a reliable and secure third-party partner can make all the difference in maintaining high standards of compliance.