
EU Artificial Intelligence Act

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency”, said Brando Benifei of the European Parliament.

Article
17 June 2024 · 7 min read
By Jennie Clarke
Written by humans

The EU’s Artificial Intelligence (AI) Act is the first of its kind and aims to lead the way for governments around the globe. In this piece, you’ll learn why the Act was necessary, what it covers, and how developers will need to comply as they build their own AI technology.

Plus, for those who need it, learn how to ensure your innovative new products are built in compliance.

Why was the AI Act introduced?

It wasn’t easy to introduce the EU’s AI Act; the debate and discussion took years to resolve. But after political agreement was reached in December 2023, the European Parliament finally passed the Act on 13 March 2024, with 523 votes in favor, 46 against, and 49 abstentions. Almost unanimous.

Why did all of these opposing parties come together? Because we are all beginning to understand the true risks of AI, which grow more serious as the technology advances.

Some of these risks include:

  • A lack of transparency within AI algorithms: without a regulated sandbox, there’s no way to tell how a program came to its decision, which makes it harder to control
  • Over-access to personal and biometric data: leading to the potential exploitation of groups and individuals, such as the targeting of victims
  • Vulnerabilities in critical infrastructure, such as water, gas, or electricity networks, where an attack could disrupt supply

With AI now ingrained into our daily lives, whether that be through ChatGPT for work or interactive gaming for fun, it’s able to affect almost everybody in the EU. That’s why policy-makers felt urged to introduce the Artificial Intelligence Act – to limit the harmful consequences of unregulated AI. This is in contrast to the UK’s so-called ‘pro-innovation’ approach.

“AI will push us to rethink the social contract at the heart of our democracies, our education models, labor markets, and the way we conduct warfare”, said another European representative, Dragos Tudorache. He continued, “the AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

What is the EU AI Act?

The EU’s Artificial Intelligence Act is set up based on four main objectives:

  1. Upholding rights and safety: laying out the legal framework for deploying AI systems
  2. Safeguarding trustworthy AI: a risk-based approach outlines requirements that balance innovation and user protection
  3. Enforcing AI regulations: cooperation among member states to apply fundamental rights and safety requirements to AI systems
  4. Building a single market for AI: building an environment for innovation and establishing a level playing field for AI in the European Union

Through these four pillars, the Act mandates several requirements for those looking to build, deploy and monitor AI platforms within the EU.

The most prominent part of the regulation concerns classification. The Act defines four key categories of AI, each carrying a greater level of risk, and the regulatory requirements differ at each level:

Minimal risk
  • Example: AI-enabled video games or spam filters
  • Requirements: unregulated

Limited risk
  • Example: chatbots in low-risk settings, such as a customer support chatbot for workplace productivity software
  • Requirements: developers and deployers must make end users aware that they are interacting with AI

High risk
  • Example: transport infrastructure that could put citizens’ lives at risk, such as facial scanning at the airport
  • Requirements: assess and work to reduce risks, be transparent and accurate, and ensure human oversight; citizens gain more control over receiving decision-making information from these systems, especially when their rights are affected

Unacceptable risk
  • Example: social scoring systems (such as those shown in the Black Mirror episode, Nosedive)
  • Requirements: these systems are prohibited under the Act
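
To make the tiering concrete, here is a minimal Python sketch of how a compliance team might encode the four tiers internally. The use-case names and the mapping are hypothetical illustrations drawn from the examples above; real classification depends on the Act’s own criteria, not on labels like these.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping from the example use cases above to tiers.
# Real classification follows the Act's criteria, not keyword labels.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_support_chatbot": RiskTier.LIMITED,
    "airport_facial_scanning": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def deployment_allowed(use_case: str) -> bool:
    """Unacceptable-risk systems are prohibited outright; everything
    else may be deployed subject to its tier's obligations."""
    return EXAMPLE_TIERS[use_case] is not RiskTier.UNACCEPTABLE

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.value} risk, allowed={deployment_allowed(use_case)}")
```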

It’s clear that most of the regulatory burden falls on developers of high-risk AI. It’s at this level, whether their programs are open-source or closed, that developers must conduct regular testing to ensure the detection and prevention of cybersecurity attacks. Tests like model evaluations, scenario testing, and automated reporting will help AI creators to ensure that their platforms are safe to deploy.
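
As an illustration of what such automated testing might look like, here is a minimal sketch of a scenario-evaluation harness with automated reporting. The scenario data, accuracy threshold, file name, and stand-in model are all invented for the example; a real evaluation suite would be far more extensive.

```python
import json
from datetime import datetime, timezone

# Hypothetical scenario suite: each case pairs an input with the outcome a
# safe system should produce. The threshold is illustrative only.
SCENARIOS = [
    {"input": {"packet_rate": 120}, "expected": "allow"},
    {"input": {"packet_rate": 98_000}, "expected": "block"},  # flood-style attack
]
ACCURACY_THRESHOLD = 0.99

def model_under_test(features: dict) -> str:
    """Stand-in for the real model; any callable with this shape works."""
    return "block" if features["packet_rate"] > 10_000 else "allow"

def run_evaluation(model, scenarios, threshold):
    results = [model(case["input"]) == case["expected"] for case in scenarios]
    accuracy = sum(results) / len(results)
    report = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "cases_run": len(results),
        "accuracy": accuracy,
        "passed": accuracy >= threshold,
    }
    # Automated reporting: persist every run so there is an audit trail.
    with open("evaluation_report.json", "a") as fh:
        fh.write(json.dumps(report) + "\n")
    return report

print(run_evaluation(model_under_test, SCENARIOS, ACCURACY_THRESHOLD))
```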

However, the overriding theme of the Act is transparency. Developers must be able to explain how their AI makes its decisions, and must use governance best practices to deploy their technology in an ethical way.
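
One way to think about explainability in practice is to record, for every decision, the inputs and the contribution each one made to the outcome. The sketch below is a toy illustration of that idea; the linear model, feature names, and version tag are hypothetical, not a prescribed implementation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decision-audit")

def decide_with_explanation(features: dict, weights: dict, threshold: float) -> dict:
    """Toy linear scorer that records *why* it decided what it decided."""
    # Per-feature contributions make the decision traceable after the fact.
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": "toy-linear-0.1",  # illustrative version tag
        "inputs": features,
        "contributions": contributions,
        "score": score,
        "decision": "approve" if score >= threshold else "refer_to_human",
    }
    log.info(json.dumps(record))  # the retained log is what supports explainability
    return record

decide_with_explanation({"attendance": 0.9, "test_score": 0.7},
                        {"attendance": 0.4, "test_score": 0.6}, threshold=0.6)
```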

What are some examples of high-risk use cases under the EU AI Act?

Use Case 1: Education and training

There are already some educational or training apps that leverage AI technology. For example, the app Lingopie uses AI to automate the generation of captions and definitions when students click on particular words.

But AI could impact so much more in educational apps. For example, the camera function could apply AI to monitor student behavior and check that there is no cheating during a test. Alternatively, AI could be used to assign different tasks to different students based on their previous history and learning outcomes.
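
As a rough sketch of that adaptive-assignment idea, a system might simply route each student to the topic where their recent scores are weakest. The function, topic names, and data below are invented for illustration; a real product would use a far richer model of learning outcomes.

```python
from statistics import mean

def next_task(history: dict[str, list[float]]) -> str:
    """history maps topic -> recent scores in [0, 1]; return the weakest topic."""
    return min(history, key=lambda topic: mean(history[topic]))

# Hypothetical student record: grammar is this student's weakest area.
student = {"vocabulary": [0.9, 0.8], "grammar": [0.4, 0.5], "listening": [0.7, 0.75]}
print(next_task(student))  # -> "grammar"
```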

Applied to the EU’s AI Act, this type of case would be considered high risk. Specifically, it “improves the result of previously completed human activity”, under the Act’s criteria.

AI compliance means that the developers would be required to establish a risk management system and apply data governance to ensure that their educational app meets its intended purpose. There are also design requirements their application must satisfy, such as allowing human oversight and maintaining appropriate cybersecurity controls.

Finally, the developers of any education or training AI programs must keep detailed records of algorithmic modifications, as these changes can impact the security and risk profile of the platform.
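
A simple way to keep such records is an append-only change log in which each entry references a hash of the previous one, making silent edits to the history detectable. The sketch below is one hypothetical approach; the file name and fields are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "model_change_log.jsonl"  # hypothetical location

def record_modification(model_version: str, description: str, author: str) -> dict:
    """Append one tamper-evident entry per algorithmic change."""
    try:
        # Chain each entry to a hash of the previous line so that
        # rewriting history breaks the chain.
        with open(LOG_PATH) as fh:
            prev_hash = hashlib.sha256(fh.readlines()[-1].encode()).hexdigest()
    except (FileNotFoundError, IndexError):
        prev_hash = "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "description": description,
        "author": author,
        "prev_hash": prev_hash,
    }
    with open(LOG_PATH, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

record_modification("tutor-2.4.0", "Retrained task-assignment model on Q2 data", "ml-team")
```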

Each of these criteria should be managed so that developers can demonstrate their compliance to the EU’s AI regulators.

Use Case 2: Rail infrastructure

A second case study of the EU’s Artificial Intelligence Act is rail infrastructure. For example, AI could be used to monitor track and equipment conditions and automatically warn maintenance teams when a fault is likely to occur, before it actually does.
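
As a toy illustration of that kind of fault prediction, a system could flag a sensor reading that jumps well above its recent baseline. The readings and the three-sigma threshold below are invented for the sketch; production systems would use far more sophisticated models.

```python
from statistics import mean, stdev

# Hypothetical sensor feed: axle-bearing temperatures (°C) from one train.
# The final reading spikes well above the rolling baseline.
readings = [61.2, 60.8, 61.5, 62.0, 61.1, 61.7, 74.9]

def fault_predicted(history: list[float], sigma: float = 3.0) -> bool:
    """Flag the latest reading if it exceeds the baseline by `sigma` deviations."""
    baseline, latest = history[:-1], history[-1]
    return latest > mean(baseline) + sigma * stdev(baseline)

if fault_predicted(readings):
    print("ALERT: bearing temperature anomaly, schedule inspection")
```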

This type of AI is classified as a safety component of a product, so it is subject to different requirements under the high-risk rules. In this case, developers are required to undergo a third-party conformity assessment before the product can be placed on the market or put into use.

In our opinion, the major difference between the two high-risk scenarios is that the first applies to entrepreneurs who value creativity and innovation and are less likely to consider the risks around AI. The second example, on the other hand, has been built specifically for safety, and therefore the risks are considered much earlier in the development process.

Prioritizing compliance

While most industry participants are welcoming the EU’s rules on AI, that doesn’t mean the transition will be plain sailing. There are many components to the Act, and while there are lots of examples, it’s up to developers to determine which risk category they fit into.

Fortunately, there are ample resources and support for those who want to innovate with AI. While the European Parliament approved the Act in March 2024, the first parts of the regulation (the prohibitions on AI systems posing an unacceptable risk) only apply six months after the Act formally enters into force.

Companies then have nine months to comply with codes of practice, and twelve months to meet their transparency requirements. For those developing or deploying AI technology in the high-risk category, obligations won’t apply for up to three years.
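
Since all of these deadlines are offsets from the date the Act enters into force, the arithmetic is easy to script. The sketch below assumes a hypothetical entry-into-force date of 1 August 2024 and uses the offsets described above (6, 9, 12, and up to 36 months); check the Official Journal for the authoritative dates.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (safe here because the day is the 1st)."""
    year, month = divmod(d.month - 1 + months, 12)
    return date(d.year + year, month + 1, d.day)

# Assumed entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

DEADLINES_IN_MONTHS = {
    "prohibitions on unacceptable-risk systems": 6,
    "codes of practice": 9,
    "transparency requirements": 12,
    "latest high-risk obligations": 36,
}

for obligation, months in DEADLINES_IN_MONTHS.items():
    print(f"{obligation}: {add_months(ENTRY_INTO_FORCE, months)}")
```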

The timeline is long and gives organizations a good chance to comply before they risk fines or other penalties. But to get a head start, why not reach out to the team at Global Relay? We have specialized in proactive supervision and risk management for more than 25 years. Reach out here to chat to an advisor.
