Following the European Parliament's 523-to-46 vote on 13 March 2024, the EU AI Act will enter into force 20 days after its publication in the Official Journal (expected in May or June 2024).
If your business uses artificial intelligence, you need to get up to speed. Non-compliance may expose you to legal risks, significant fines, or loss of customer trust. If you take steps to prepare, you can avoid these issues.
Proper preparation includes:
- Setting up a risk management system
- Improving data quality and integrity
- Increasing cybersecurity and privacy protection
- Keeping up to date with any changes as the law develops
What is the EU AI Act?
The EU AI Act is the world's first standalone law governing AI and a landmark piece of legislation for the EU. It sets out the framework for regulating AI systems. It takes a risk-based approach to classifying these systems and applies greater or fewer restrictions to those systems depending on the risk.
If you are using AI in your business or organization, this law can apply to you and may have compliance requirements that you need to follow.
Parts of the Act will come into force as soon as 6 months after the final text is published. This is a relatively short lead-in time.
The Act establishes a risk-based approach, meaning that AI systems are classified as unacceptable, high, limited, or minimal risk. This approach is intended to allow as much innovation as possible while preventing the most harm.
The compliance requirements can be rigid, especially for high-risk AI systems. This will particularly affect enterprises in big sectors like financial services, healthcare, and telecommunications. If you are working in one of these sectors, getting up to speed with the new law is a crucial first step.
One way businesses can start preparing is to familiarize themselves with the Act and its functions.
To Whom Does the EU AI Act Apply?
The Act applies to those developing, deploying, and using AI systems within the European Union. This includes companies, organizations, and individuals that develop or deploy AI systems within the EU, regardless of their size or sector.
An AI system, as defined in the EU AI Act, is:
- a machine-based system
- designed to operate with varying levels of autonomy
- that may exhibit adaptiveness after deployment
- that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
In addition, the Act may also apply to companies or organizations based outside of the EU. This could be the case if they offer AI products or services to EU residents or if their AI systems are used within the EU market.
Why Was the EU AI Act Created?
The Act was proposed because the development of AI over the past few years has been explosive. Several general-purpose AI models have been developed recently, the most well-known of which is ChatGPT.
These technologies have huge potential benefits, but technologists, lawyers, human rights experts, and business owners are concerned about their misuse.
This is because AI systems can also create systemic risk, particularly to fundamental rights such as privacy.
Until now, there has been no standalone law dealing with AI, meaning that many AI tools have been developed and used without oversight. Regulation is one way to ensure that AI projects follow a clear set of guidelines, bringing everyone to a level playing field.
If you bring your projects into line with the regulation comprehensively and efficiently, you can focus on profits and innovation without falling afoul of the law.
Thomas Prete and Yuri Rodrigues, lawyers focused on business, cybersecurity, and data protection, note that the way AI is developing and the regulatory efforts around it “will require multidisciplinary skills from managers, lawyers, software developers, engineers, and cybersecurity experts to simultaneously address legal requirements, security vulnerabilities and system robustness.”
Many organizations are concerned that the regulation will restrict them to the point where they cannot develop AI systems. Whether you work in healthcare, finance, or another industry dealing with sensitive data, it’s understandable to worry that the regulation will prevent you from using AI in your unique use case.
Oksana Rasskazova, an AI and tech leader with Gestalt Robotics, says, "We need to be really careful that the law doesn't stop innovation," and notes that there is a wide variety of business cases that use AI.
She also explains that because the Act is in a state of development without defined guidelines, businesses are experiencing "a sense of anxiety" and looking for clarity on how they should approach things.
Fortunately, the Act does not intend to stop innovation or prevent business use cases from happening. Instead, it sets out a number of measures and safeguards that you need to take if you want to do these things. Enter the risk-based approach.
What is the Risk-based Approach of the EU AI Act?
The Act's risk-based approach categorizes AI systems as unacceptable risk, high risk, limited risk, and minimal risk.
AI systems that pose unacceptable risks are essentially banned, with any narrow exceptions subject to strict oversight. This category includes systems for social scoring, real-time biometric identification, and certain uses of emotion recognition, among other things.
High-risk systems are also tightly regulated. Systems classified as high-risk are organized more by sector than by use. This includes AI systems such as those used in government or political systems, healthcare, or migration contexts.
Organizations should self-assess what level of risk their AI system poses.
For example, if you were placing on the EU market an AI system used as a medical device that generates synthetic audio and interacts with patients, it would likely be classified as a high-risk AI system.
Another example: if you wanted to deploy an AI system that performs emotion recognition in the workplace, it would likely be prohibited. If the same system were used by law enforcement, however, it would likely be permitted by the Act.
The Future of Life Institute (FLI), an independent non-profit working to reduce the risks of harm from AI, has developed a risk checker for informational purposes. By selecting what type of AI system you are developing or deploying, you can get a basic assessment of whether your system is likely to be prohibited, high risk, or minimal risk.
While FLI’s checker is useful, this risk assessment should still be carried out internally with the help of experts and significant professional analysis. The risks of non-compliance are very high.
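A first-pass internal triage of the kind described above can be sketched as a simple lookup. The tier labels and use-case keys below are our own illustrative simplification of the Act's categories, not its legal text, and certainly not a substitute for expert review:

```python
# Illustrative first-pass risk triage, NOT legal advice.
# The mapping below is a hypothetical simplification for demonstration;
# a real classification requires expert analysis of the Act's text.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "workplace_emotion_recognition": "unacceptable",
    "medical_device_assistant": "high",
    "credit_scoring": "high",
    "ecommerce_chatbot": "limited",
    "spam_filter": "minimal",
}

def triage(use_case: str) -> str:
    """Return a rough risk tier for a known use case, else flag for review."""
    return RISK_TIERS.get(use_case, "unknown: needs expert review")

print(triage("medical_device_assistant"))  # high
print(triage("spam_filter"))               # minimal
```

The point of such a sketch is only to surface which systems need expert attention first; anything not on the list defaults to "needs expert review" rather than to a permissive tier.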
Pavel Čech, a lawyer focused on the intersection of law and technology, explains that the risk-based approach in the AI Act is the EU’s attempt to deal with the “challenge of fostering innovation while ensuring these technologies are developed and deployed ethically, safely, and without compromising privacy or security.
This balance is crucial to enable the benefits of AI, such as improved healthcare diagnostics, more efficient energy use, and enhanced educational tools while mitigating risks like algorithmic bias, job displacement, and potential misuse.”
In addition, the Act introduces penalties for non-compliance with its provisions. Organizations that breach the Act's requirements may face significant fines.
Prete and Rodrigues say, in particular, that “in our view, a wave of heavy scrutiny is coming together with the AI Act, and organizations must be prepared for extra steps of accountability for their systems and operations.”
What are the Levels of Risk for the EU AI Act?
The risk categories are as follows:
Unacceptable risk
Some AI systems behave in ways that are biased, discriminatory, or invade people’s privacy in severe ways.
For example, a tool made by Northpointe Inc. called COMPAS was intended to determine people’s risk of reoffending after having committed a crime. The tool racially discriminated against black defendants, saying that they were at a much higher risk of both violent and non-violent reoffending than they actually were.
AI systems classified as unacceptable risk under the Act are those that can cause significant potential harm to people. They are also known as "Prohibited Artificial Intelligence Practices."
The EU AI Act intends to either completely prevent the use of tools like COMPAS or ensure that they work in appropriate, high-quality ways without bias, discrimination, or unfair effects on people.
Under the Act, AI systems that are said to be unacceptable risk are those that:
- Manipulate human behavior, particularly through subliminal or purposefully deceptive techniques
- Exploit people's vulnerabilities to manipulate their behavior in a way that is likely to cause significant harm
- Use biometric systems to categorize people and to determine their race, political opinions, religious beliefs, or sexual orientation, among other things
- Implement social scoring by evaluating people's social behavior or personal characteristics
- Use real-time remote biometric identification in public places
- Create risk assessments of people to predict the risk that they might commit a criminal offense
- Create facial recognition databases, or expand them through untargeted scraping of facial images from the internet or CCTV footage
- Conduct emotion recognition of people in the workplace and educational institutions, except if the AI system is used for medical or safety reasons
High risk
AI systems that are high risk can be placed on the market but with several safeguards in place. High-risk AI systems include those used in areas or sectors such as:
- Transport
- Toys
- Financial services
- Healthcare
- Telecommunications
- Government
- Public services
- Law enforcement
- Migration contexts
AI systems are not considered high risk if they perform a minimal function, even in these sectors.
For example, the system might not influence the outcome of decision-making or only complete a narrow procedural task. In other cases, the system only supplements the decisions of a human decision-maker or carries out preparatory tasks rather than anything substantial.
Limited risk
Limited-risk AI systems are those that pose fewer potential risks to safety, fundamental rights, or societal welfare.
This includes certain types of chatbots, recommendation systems, and content moderation tools used in non-critical applications such as e-commerce platforms or social media.
For example, Siri, Alexa, and the Google voice assistant would all most likely be classified as limited-risk AI systems.
On the other hand, chatbots involved in healthcare could be classified as high-risk. This could include AI systems that make triage decisions for patients and recommend whether they see a doctor.
Minimal or No Risk
"Minimal-risk" or "no-risk" systems have the lowest potential for harm. They are subject to minimal regulatory requirements.
Minimal or no-risk AI systems include basic rule-based decision-making algorithms, simple chatbots with limited functionality, or tools used for spell-checking or basic data analysis. One example of a minimal or no-risk AI system could be the use of a spam filter on email systems.
Given these risk levels, you’re probably wondering what regulations apply to the high-risk category. Let’s explore this now.
What Regulations Apply to High-Risk AI Systems?
High-risk AI systems are subject to specific requirements that enterprises need to be aware of. Key requirements include:
- Data quality and governance: High-risk AI systems must adhere to stringent data quality standards. You must ensure that data used for training and operation are accurate, reliable, and representative. You must also put in place data governance practices to prevent bias, discrimination, and other adverse effects.
- Transparency and explainability: If you are developing high-risk AI systems, you must write clear and comprehensive documentation. This must explain the system's functionality, capabilities, and limitations. In addition, it must say what data was used for training, the algorithms used, and how the system makes decisions.
- Human oversight and control: High-risk AI systems must have the ability for humans to have oversight and control. This may include the ability to override automated decisions, monitor system performance, and take corrective actions in case of errors or unexpected outcomes. For instance, in cases where an AI system has produced a biased result, a human could take over and review the system, make changes, or use different data.
- Accuracy, robustness, and safety: You should design and test high-risk AI systems to ensure their accuracy, robustness, and safety. This should be done across a range of possible scenarios and conditions. In addition, you should carry out thorough risk assessments and testing procedures for all of your AI systems. This can identify and mitigate potential risks, including biases, vulnerabilities, and unintended consequences.
- Documentation and conformity assessment: If you deploy high-risk AI systems, you must maintain comprehensive documentation throughout the development lifecycle. Independent third-party assessment bodies may also check your compliance with regulatory requirements and standards.
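One rough way to track these obligations internally is as a checklist, one flag per requirement area. The class and field names below are our own labels for the five areas listed above, not terms defined by the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Illustrative internal tracker: one flag per requirement area above."""
    data_quality_and_governance: bool = False
    transparency_and_explainability: bool = False
    human_oversight_and_control: bool = False
    accuracy_robustness_safety: bool = False
    documentation_and_conformity: bool = False

    def gaps(self):
        """Return the names of requirement areas not yet addressed."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

check = HighRiskChecklist(human_oversight_and_control=True)
print(check.gaps())  # the four areas still needing compliance work
```

Whether a given area is truly "addressed" is, of course, a judgment that belongs to your legal and engineering teams, not to a boolean; the sketch only shows how to keep the gaps visible.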
Why Should Your Business Care About the EU AI Act?
Tanya Chib, Founder and Privacy Lawyer at Privacy Rules, states that businesses should care about the EU AI Act because, as she puts it, “We are amid an AI regulatory storm.”
Regulations are changing fast, and organizations that do not keep pace can be left behind. She says, “The AI legislative and regulatory landscape is evolving exponentially. This is a time for companies to closely monitor these developments, stay flexible, and start investing in an AI governance program.”
If your business operates in a sector with high-risk AI systems and lacks proper compliance measures, you could be exposed to legal and financial risk. In addition, following the Act's principles of transparency, accountability, and ethical use of AI can enhance trust and confidence among customers, investors, and other stakeholders.
The law is also not changing evenly around the globe.
Pavel Čech explains that differences in “regulatory approaches can lead to a fragmented global AI landscape, where cross-border cooperation and standardization become a challenge.”
Organizations that have kept up to date with changes, both in the EU and globally, are better placed to manage these discrepancies.
Getting up to speed quickly can be the key.
Chib says, “I recommend diving into the provisions of the EU AI Act and how it affects your company ASAP, and not leaving that to the last months.”
She also highlights early ways in which businesses can prepare: “The EU AI Act will have different implications for different companies. Start with understanding the use cases within your organization and making an inventory of your AI assets.”
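The inventory Chib recommends can start as simply as one structured record per system. The fields below are hypothetical examples of what such an entry might capture, chosen to echo the Act's concerns (risk tier, personal data, supplier):

```python
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    """One entry in an internal AI asset inventory (illustrative fields)."""
    name: str
    supplier: str                   # "internal" or a vendor name
    business_use_case: str
    processes_personal_data: bool   # flags overlapping GDPR obligations
    provisional_risk_tier: str      # to be confirmed by expert legal review

# A hypothetical two-system inventory.
inventory = [
    AIAssetRecord("support-chatbot", "internal", "customer service", True, "limited"),
    AIAssetRecord("cv-screening", "VendorX", "recruitment", True, "high"),
]

# Surface the systems that need compliance work first.
needs_attention = [a.name for a in inventory if a.provisional_risk_tier == "high"]
print(needs_attention)  # ['cv-screening']
```

Even a minimal inventory like this gives an organization a concrete starting point: it shows which use cases exist, who supplies them, and where the high-risk and personal-data exposure is concentrated.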
It is also important to understand that the AI Act does not replace but adds to the obligations under the GDPR, which must also be satisfied.