EU AI Act
A world first, the EU’s Artificial Intelligence Act entered into force in August 2024 as the most comprehensive AI regulation to date. It is a direct response to the surge in largely unmonitored AI innovation and usage, which, if misused, presents serious risks to end users and wider communities.
The regulation is built around risk-based classification of AI systems, with its scope covering data governance, record-keeping, quality assurance, and risk management. With 78% of companies globally now using or offering AI, and potential fines for the most serious breaches starting at €35 million, firms must understand where the regulation’s boundaries lie and prioritise compliance.
Key provisions of the EU AI Act for financial institutions
The EU AI Act classifies AI technology into four key categories, each subject to a different level of regulation:
| Level of risk | Details | Example | Regulation |
| --- | --- | --- | --- |
| Unacceptable risk | AI programs that are prohibited because they are very likely to cause harm | Social scoring systems | Prohibited outright, so no further regulation applies |
| High-risk AI systems | Programs that could cause significant harm if used inappropriately | Remote biometric identification systems | The focus area of the regulation |
| Limited-risk AI systems | Programs that interact directly with end users and could cause a lower level of harm if used inappropriately | Chatbots and generative AI platforms that produce synthetic images, audio, and video | Lightly regulated, mainly through transparency obligations |
| Minimal-risk AI systems | Systems that are unlikely to cause harm | AI-enabled video games and spam filters | Largely unregulated |
The majority of the obligations in the EU AI Act focus on the high-risk category, especially in industries like finance, where the potential effects are widespread. For banks, the EU AI Act requirements will apply to the likes of fraud detection tools, credit scoring platforms, and risk assessment programs (particularly in insurance).
Before we dive into the requirements, it’s important to note that there are two separate groups that the regulation applies to.
- AI providers: those that develop AI models and place them on the market
- AI deployers: those that use AI systems in a professional capacity, under their own authority
Individuals using AI platforms for personal reasons aren’t subject to the rules.
So what are the requirements?
The requirements for AI compliance include:
- Risk management system: AI risk management for financial services needs a dedicated system across the entire lifecycle
- Data governance: ensure that training, validation, and testing data sets maintain their integrity and are complete and appropriate for the system’s intended purpose
- Technical documentation: demonstrate compliance by giving the regulators the necessary back-end information
- Record-keeping: automatically record events relevant for identifying risks at national level, and any substantial changes, throughout the lifecycle (a minimal logging sketch follows this list)
- Usage instructions: for downstream users to remain compliant
- Human-in-the-loop: enable downstream users to have sufficient oversight
- Quality assurance: check for accuracy, robustness, and cybersecurity
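The record-keeping and technical documentation requirements above lend themselves to automation. Below is a minimal sketch, using a hypothetical in-house logging helper rather than any specific vendor API, of how an AI decision event might be captured with enough context to support a later audit:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event record; field names are illustrative, not mandated by the Act.
@dataclass
class AIEventRecord:
    system_name: str        # e.g. "credit-scoring-v3"
    model_version: str      # version of the deployed model
    event_type: str         # e.g. "decision", "substantial_modification"
    input_hash: str         # hash of the input payload, not the raw personal data
    output_summary: str     # short, human-readable description of the outcome
    timestamp: str

def log_ai_event(system_name: str, model_version: str,
                 event_type: str, payload: dict, output_summary: str) -> AIEventRecord:
    """Create an audit record for an AI lifecycle event."""
    record = AIEventRecord(
        system_name=system_name,
        model_version=model_version,
        event_type=event_type,
        input_hash=hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        output_summary=output_summary,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would be written to a write-once archive or audit store.
    print(json.dumps(asdict(record)))
    return record

log_ai_event("credit-scoring-v3", "3.2.1", "decision",
             {"applicant_id": "A-1042", "features": [0.61, 0.08]},
             "application referred for manual review")
```

Hashing the input rather than storing it verbatim is one way to keep a tamper-evident trail without duplicating personal data; the actual retention approach should follow the firm’s own data governance policy.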
The timelines for compliance are as follows:
- February 2025: The ban on AI programs with an unacceptable level of risk begins
- August 2025: Rules for general-purpose AI (which broadly falls into the limited-risk category) are enforced. By this date, each country is also required to have set up its national authority on AI, so that penalty enforcement for violations can begin.
- August 2026: Rules for the high-risk category are enforced, and conformity assessments, effectively an audit of compliance, become mandatory.
- August 2027: By this date, any AI program that is integrated into other technology, such as medical devices or financial trading platforms, must be fully compliant.
Compliance challenges and best practices
There are several challenges associated with AI governance in finance, and specifically with this act:
- How can individuals accurately identify whether their AI platform is high-risk?
- How can regulators and proprietors ensure data quality?
- How can regulators manage all third-party AI providers?
Identifying your AI risk category
Clearly, there is a wide range of requirements under the EU AI Act, and which ones apply depends on whether the AI is high or limited risk. So how can compliance officers and risk managers confidently determine where their technology sits?
Reviewing the prohibited list is a good place to start. This ensures, at least, that your use case for AI is allowed under the regulation. The high-risk criteria are also worth checking, as regulated products in this category are clearly spelled out under Annex III.
A risk assessment is a strong final step in compliance risk management. Consider whether the system involves human interaction and, if so, what the potential impact on users could be. Is there any scenario where your AI program generates content that requires transparency measures?
If the answer to all of the above is ‘no’, the system is likely to fall into the limited- or minimal-risk categories.
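As a thought exercise, this checklist can be expressed as a simple triage function. The sketch below is illustrative only: it assumes the firm has already mapped its use case against the Act’s prohibited practices and the Annex III list, and it does not replace a documented risk assessment or legal review.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage_risk(on_prohibited_list: bool,
                matches_annex_iii_use_case: bool,
                interacts_with_users_or_generates_content: bool) -> RiskCategory:
    """Rough first-pass triage mirroring the checklist in the text."""
    if on_prohibited_list:
        return RiskCategory.UNACCEPTABLE          # banned outright
    if matches_annex_iii_use_case:
        return RiskCategory.HIGH                  # full high-risk obligations apply
    if interacts_with_users_or_generates_content:
        return RiskCategory.LIMITED               # transparency obligations apply
    return RiskCategory.MINIMAL

# Example: an AI credit-scoring tool matches an Annex III use case.
print(triage_risk(False, True, True))  # RiskCategory.HIGH
```

The output is only a starting point for the formal classification exercise, not a final determination.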
Ensuring data quality
The challenge with many AI platforms is data degradation over time. When information from one AI feeds into another, there is a higher risk of error.
This is due to the output of synthetic data, which is not grounded in previous conversation or learning, and is effectively ‘made up’ by the AI model ‘filling the gaps’. One notable example is when a lawyer used ChatGPT to build a legal case, and the judge discovered six fabricated citations to cases that didn’t exist.
Once synthetic outputs make up too large a share of a model’s inputs, the result can be complete model collapse.
The solution? Human-in-the-loop, which involves manual inspection of suspicious activity to continually feed back into the system and maintain trust. Global Relay enforces human-in-the-loop by requiring human reviewers to sort through risk alerts within our communications surveillance product. This was a key theme of the State of Surveillance 2025 report, and it accompanies AI capabilities such as data enrichment, transcription, and noise reduction.
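To make the idea concrete, here is a minimal sketch of a human-in-the-loop gate for AI-generated alerts. It is an assumption-laden illustration, not a description of how Global Relay’s surveillance product is implemented; the queue and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alert:
    alert_id: str
    ai_confidence: float          # model's confidence that the item is risky
    disposition: str = "pending"  # set only by a human reviewer

@dataclass
class ReviewQueue:
    pending: List[Alert] = field(default_factory=list)

    def ingest(self, alert: Alert) -> None:
        # Every AI-flagged alert is queued; none are auto-closed by the model.
        self.pending.append(alert)

    def review(self, alert_id: str, reviewer: str, decision: str) -> Alert:
        """A human reviewer records the final disposition."""
        alert = next(a for a in self.pending if a.alert_id == alert_id)
        alert.disposition = f"{decision} (reviewed by {reviewer})"
        self.pending.remove(alert)
        return alert

queue = ReviewQueue()
queue.ingest(Alert("ALRT-001", ai_confidence=0.92))
print(queue.review("ALRT-001", reviewer="compliance.analyst", decision="escalate"))
```

The key design point is that the model can only raise alerts; closing them always requires a named human reviewer, which is what creates the feedback loop described above.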
Managing third parties
For the regulators, managing all of the potential AI third parties under this single regulation will be challenging. There is little visibility into how the many new AI platforms produce their results, not least because AI is a ‘black box’ by nature, in that it lacks transparency.
One of the solutions, already built into the EU AI Act, is a governance structure based on national bodies. Since the EU is made up of 27 countries, each member state has its own regulatory body, all following the same framework. This should, in theory, standardise oversight and maintain minimum compliance standards across the European Union.
How does the EU AI Act affect financial institutions?
Financial institutions that work with AI, especially fintechs, are particularly affected by the EU AI Act. In 2024, it was found that 91% of all financial institutions were already using AI, and the legislation specifically calls out AI-based creditworthiness assessments as a high-risk activity. AI-based fraud detection is another trending activity in the industry.
Therefore, financial institutions must build in strong measures to both comply with the EU AI Act, and ensure their AI is operating at a ‘safe’ level in terms of human impact.
One example of a successful AI-based collaboration with a financial institution is Financials’ partnership with Global Relay. When evaluating potential compliance vendors, its priority was a partner that cared about the clients it served, while continuing to invest in future-proof solutions.
The team implemented Global Relay App for remote archiving across all communications channels, enabling the data to be ready in real-time for analysis with AI tools.
Technology and EU AI Act compliance
There are several types of technology that can help with EU AI Act compliance, including:
- Data archiving: tools that automatically capture and record data for storage
- Audit trails: tools that link information for easy navigation through connected data
- AI archiving and monitoring: such as communications monitoring, flagging risky communication that may go against compliance standards
With plenty of tools on the market, it can be overwhelming for compliance and risk managers to assess their options. We recommend opting for tools that work in real-time, so that risks are revealed as fast as possible, and teams have time to react.
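As a simple illustration of the audit-trail idea, the sketch below links a flagged communication back to the alert it raised and the model version that raised it, so a reviewer can navigate the connected records. The structure is an assumption for illustration, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditLink:
    """One navigable edge in an audit trail connecting related records."""
    source_id: str   # e.g. the archived message ID
    target_id: str   # e.g. the alert or model version that references it
    relation: str    # human-readable description of the link

def build_trail(message_id: str, alert_id: str, model_version: str) -> list[AuditLink]:
    # Each flagged message is linked to the alert it raised and the model that raised it.
    return [
        AuditLink(message_id, alert_id, "message flagged by alert"),
        AuditLink(alert_id, model_version, "alert generated by model version"),
    ]

for link in build_trail("MSG-88231", "ALRT-001", "surveillance-model-2.4"):
    print(f"{link.source_id} -> {link.target_id}: {link.relation}")
```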
Conformity assessments are a case in point: AI providers will have to undergo one in order to take their products to market (by August 2027 at the latest, depending on the category of system). CE marking is the signal that shows products are considered ‘safe’ within the EU market. Performing the conformity assessment manually involves:
- Risk classification
- Document preparation
- Assessment
- Post-market surveillance
But many of these processes could be automated without increased risk. The document-gathering process, for example, could be significantly shortened with the application of AI, and still confirmed by human-in-the-loop protocols. This is where RegTech solutions offer particular benefits, and they are likely to become part of best practice for AI compliance in fintech.
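As an example of the kind of automation described above, the sketch below checks a technical documentation folder against a required-artifact list before a human signs off. The file names and the required list are hypothetical; the actual contents of a conformity assessment file are set out in the Act and its annexes.

```python
from pathlib import Path

# Hypothetical list of artifacts a firm might require before sign-off.
REQUIRED_ARTIFACTS = [
    "risk_management_plan.pdf",
    "data_governance_summary.pdf",
    "model_card.md",
    "post_market_monitoring_plan.pdf",
]

def check_documentation(folder: str) -> dict[str, bool]:
    """Report which required artifacts are present in the documentation folder."""
    present = {p.name for p in Path(folder).glob("*") if p.is_file()}
    return {name: name in present for name in REQUIRED_ARTIFACTS}

def ready_for_human_signoff(folder: str) -> bool:
    status = check_documentation(folder)
    missing = [name for name, ok in status.items() if not ok]
    if missing:
        print("Missing artifacts:", ", ".join(missing))
        return False
    return True  # a human reviewer still performs the final check

print(ready_for_human_signoff("./conformity_docs"))
```

Automating the completeness check shortens the preparation stage, while the final review and sign-off remain with a human, in line with the human-in-the-loop protocols mentioned above.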
Considering the impact of the EU AI Act
With its firm focus on ethical governance and robust security, the landmark EU AI regulation is poised to fundamentally reshape the use of artificial intelligence in the financial sector.
Technology solutions for EU AI Act compliance will involve a heavy focus on documentation and logs, ensuring that firms capture and maintain a complete and auditable record. To capture and monitor AI-related communications throughout the system’s lifecycle, Global Relay provides dedicated Archive and Surveillance solutions.
Explore how to enrich, store and manage your data all in one powerful archive, and detect communication risks to prevent escalation. For financial institutions, these requirements will soon be integrated with existing compliance mandates, so the race is on.