The EU AI Act: Key provisions and future impacts

The EU AI Act (formally, the European Union Artificial Intelligence Act), introduced by the European Commission, aims to regulate AI systems to ensure they respect fundamental rights and foster trust. In this blog post, we’ll provide an overview of the Act’s key provisions, its risk-based classification of AI systems, and its global impact.

Key takeaways

  • The EU AI Act introduces comprehensive regulations for AI systems to ensure safety, transparency, and fundamental rights, potentially setting global standards for AI governance.
  • The Act adopts a risk-based classification for AI systems, ranging from outright bans on unacceptable risks to minimal requirements for low-risk applications, with high-risk AI systems facing stringent regulatory scrutiny.
  • The Act supports innovation and small and medium-sized enterprises (SMEs) by providing regulatory sandboxes, leniency in documentation, and technical support, facilitating a balanced approach between regulation and technological advancement.

An overview and brief history of the EU AI Act

The journey to regulate artificial intelligence within the European Union has been marked by several pivotal milestones. In April 2021, the European Commission took a groundbreaking step by proposing the first EU regulatory framework for AI. This proposal laid the foundation for a unified approach to ensure that AI systems are developed and utilized in a way that is safe, transparent, and respects fundamental rights across all member states.

After extensive discussions and negotiations, European Union lawmakers reached a political agreement on the draft artificial intelligence (AI) act in December 2023. This agreement was a significant achievement, representing a consensus on the principles and guidelines that would govern the use and development of AI within the Union. Finally, the Parliament adopted the Artificial Intelligence Act in March 2024, marking the culmination of years of work and setting the stage for a new era of AI governance. 

The European Union Artificial Intelligence Act, also known as the EU AI Act, is a pioneering piece of legislation aimed at businesses that provide, deploy, import, or distribute AI systems. At a high level, it aims to:

  • Regulate artificial intelligence systems
  • Ensure those businesses respect fundamental rights
  • Promote innovation and investment in AI technology
  • Foster the development and uptake of safe and trustworthy AI systems across the EU’s single market
  • Mitigate the risks posed by certain AI systems
  • Set a global standard for AI regulation
  • Emphasize trust, transparency, and accountability

These requirements have the potential to influence global regulatory standards for AI.

The European Parliament prioritizes the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems used within the Union. The potential benefits of the Act are far-reaching, with the hope of creating better healthcare, safer and cleaner transportation, more efficient manufacturing, and cheaper and more sustainable energy using artificial intelligence.


Recommended reading: EU-U.S. Data Privacy Framework: How the European Commission’s Decision Affects Data Transfers

Why AI needs oversight 🤖

The rapid development and deployment of artificial intelligence (AI) across various sectors have brought about transformative changes in society. While AI has the potential to revolutionize industries, improve efficiency, and solve complex problems, it also poses significant challenges that necessitate governance.

AI governance is, therefore, essential for several reasons:

  • Ethical considerations: AI systems can make decisions that profoundly affect individuals and communities. Without proper governance, there is a risk of reinforcing biases, infringing on privacy, and making unethical decisions.
  • Safety and reliability: AI systems must be safe and reliable, especially when they are used in critical sectors like healthcare, transportation, and finance. Governance ensures that AI systems are thoroughly tested and monitored to prevent harm or malfunction.
  • Accountability: When AI systems make decisions, it can be difficult to trace the rationale behind those decisions. Governance frameworks assign responsibility and ensure that there is a clear line of accountability when things go wrong.
  • Public trust: For AI to be widely accepted and integrated into society, the public must trust that it is being used responsibly. Governance helps build this trust by ensuring transparency in how AI systems are developed and used.
  • Preventing misuse: AI has the potential to be misused for fraudulent activities, surveillance, and other malicious purposes. Governance can provide safeguards against such misuse.
  • Global standards: As AI technologies cross borders, international governance can help establish global standards and prevent a ‘race to the bottom’ where countries or companies compete by lowering ethical standards.

Governance helps ensure that AI benefits society while minimizing its risks. The EU AI Act represents a pioneering effort to create a regulatory framework that balances the advancement of technology with the need to protect fundamental human rights and societal values.

A risk-based classification of AI systems

A distinguishing feature of the EU AI Act is its risk-based approach to AI regulation. The Act categorizes AI systems based on their risk to society, with varying levels of regulatory scrutiny applied to each category:

Risk level = Unacceptable risk

  • High-level description: AI systems that pose an unacceptable risk to the safety, livelihoods, and rights of people.
  • Action: Prohibition

Risk level = High risk

  • High-level description: AI systems that pose significant risks to health, safety, and fundamental rights.
  • Action: Strict assessment

Risk level = Limited risk

  • High-level description: AI systems that pose a lower level of risk but still have the potential to impact individuals’ rights and well-being.
  • Action: Maintain transparency

Risk level = Minimal or no risk

  • High-level description: AI systems that pose little to no risk to individuals’ rights or safety. These systems are typically used for purposes that do not have significant impacts on people’s lives.
  • Action: No specific regulatory requirements

Of course, any form of legislation contains a lot of nuance. So, in the subsequent subsections, let’s explore this classification system in greater depth.
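Before digging into the subsections, here is a minimal sketch, in Python, of how this four-tier scheme might be modeled as the starting point for an internal compliance triage tool. The enum values, mapping, and function names are our own illustrative choices; the Act defines the tiers and their obligations, not any code-level representation.

```python
from enum import Enum

class RiskLevel(Enum):
    """The EU AI Act's four risk tiers (illustrative model, not official terms)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Regulatory consequence attached to each tier, per the summary above.
REQUIRED_ACTION = {
    RiskLevel.UNACCEPTABLE: "Prohibition: banned from the EU market",
    RiskLevel.HIGH: "Strict assessment: conformity obligations before market access",
    RiskLevel.LIMITED: "Transparency: users must be told they are interacting with AI",
    RiskLevel.MINIMAL: "No specific regulatory requirements",
}

def required_action(level: RiskLevel) -> str:
    """Return the high-level obligation for a classified system."""
    return REQUIRED_ACTION[level]

# Example: a customer-service chatbot typically falls in the limited-risk tier.
print(required_action(RiskLevel.LIMITED))
```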

Unacceptable risk 

Action: Prohibition. These systems are banned outright.

At the top of the risk hierarchy are AI practices deemed to pose an unacceptable risk to the safety, livelihoods, and rights of people. To protect fundamental rights and safety, the Act prohibits these practices entirely. They include:

  • Subliminal manipulation: Utilizing covert techniques that subconsciously influence individuals, undermining their ability to make informed decisions and causing significant harm.
  • Exploitation of vulnerabilities: Leveraging weaknesses associated with age, disabilities, or socio-economic status to alter behavior detrimentally, leading to substantial harm.
  • Sensitive biometric categorization: Systems that infer sensitive personal attributes such as ethnicity, political stance, union affiliation, religious or philosophical convictions, or sexual orientation, with exceptions for certain law enforcement activities and dataset labeling or filtering.
  • Social scoring schemes: Assigning ratings to individuals or groups based on their social behavior or personal characteristics, resulting in adverse or discriminatory outcomes.
  • Criminal risk assessment: Estimating the likelihood of an individual committing a crime based solely on profiling or personality traits, barring instances that support human judgment with objective, verifiable evidence directly related to criminal conduct.
  • Facial recognition databases: Compiling extensive databases of facial images through indiscriminate scraping from online sources or surveillance footage without targeted justification.
  • Emotion inference in sensitive contexts: Analyzing emotional states in environments like workplaces or educational settings, unless it serves a medical purpose or is crucial for safety reasons.
  • Real-time remote biometric identification: Implementing ‘real-time’ remote biometric identification in public spaces for law enforcement purposes, except under specific conditions such as locating missing or trafficked individuals, averting significant and immediate threats to life or terrorist acts, or identifying perpetrators of serious crimes.

High risk

Action: Strict assessment. High-risk AI systems must adhere to several regulatory obligations.

Descending the risk ladder, we encounter high-risk AI systems next. These include applications used in critical infrastructure management, law enforcement, and biometric identification, all of which must meet stringent requirements to access the EU market.

The Act necessitates that providers of high-risk AI systems:

  • Implement a comprehensive risk management system that remains active throughout the entire lifecycle of the high-risk AI system, ensuring that all potential issues are identified, assessed, and mitigated in a timely manner.
  • Enforce rigorous data governance protocols to guarantee that the AI system’s training, validation, and testing datasets are not only relevant and representative but also as error-free and complete as possible, tailored to the system’s specific objectives.
  • Compile and maintain detailed technical documentation that transparently demonstrates the AI system’s compliance with regulatory requirements, providing authorities with the necessary insights to evaluate the system’s adherence to the established standards.
  • Integrate advanced record-keeping functionalities within the high-risk AI system, enabling automatic logging of critical events that could influence risk assessment at a national level or reflect significant modifications throughout the system’s lifecycle (a code sketch follows this list).
  • Supply comprehensive instructions for use to downstream deployers, equipping them with the knowledge and tools required to ensure their own compliance when utilizing the high-risk AI system.
  • Architect the high-risk AI system with built-in capabilities for human oversight, allowing deployers to monitor and intervene in the system’s operations as needed to maintain control and accountability.
  • Design the high-risk AI system with a focus on achieving and maintaining high levels of accuracy, robustness, and cybersecurity, to protect against potential threats and ensure reliable performance.
  • Establish and maintain a robust quality management system, which is fundamental for ongoing compliance assurance and for fostering a culture of continuous improvement within the organization.

A full list of Annex III: High-Risk AI Systems can be found here. Some examples include:

  • Remote biometric identification systems: These systems, excluding those used for simple verification of identity, are considered high-risk when they identify individuals in public spaces or analyze biometric data to infer sensitive attributes such as ethnicity, political beliefs, or emotional states.
  • Infrastructure safety components: AI systems integral to the management and operation of critical infrastructure, such as utilities (water, gas, electricity) and transportation networks, are high-risk due to their role in ensuring public safety and the continuity of essential services.
  • AI in education: Systems that determine access to or assignment in educational and vocational institutions, evaluate learning outcomes to guide student development, or monitor student behavior during examinations are high-risk due to their influence on academic and career opportunities.
  • Recruitment and employment: High-risk systems in this category include those used for screening job applications, evaluating candidates, managing tasks, and monitoring employee performance. These systems can significantly affect employment prospects and workplace dynamics.
  • Public services: AI systems that assess eligibility for public benefits, manage service allocations, or evaluate creditworthiness are high-risk, as they directly affect individuals’ access to essential services and financial stability. Similarly, AI systems that prioritize emergency response calls or assess risks for health and life insurance purposes are included in this category.
  • Law enforcement: Systems used for profiling during criminal investigations, assessing the reliability of evidence, or evaluating the risk of re-offending are considered high-risk. These systems can have profound implications for personal freedom and the fairness of legal proceedings.
  • Migration and border control: High-risk systems include those used for assessing migration risks, processing asylum or visa applications, and identifying individuals at borders, except for the verification of travel documents. These systems play a critical role in migration management and individual rights.
  • AI in legal and political arenas: Systems that assist in fact-finding, legal interpretation, or alternative dispute resolution are high-risk due to their potential influence on judicial outcomes. AI systems that could affect election results or voting behavior, other than organizational tools for political campaigns, are also classified as high-risk.

These examples illustrate the broad range of applications for high-risk AI systems and the importance of rigorous regulatory oversight to ensure they operate within ethical and legal boundaries.

Limited risk

Action: Transparency. These AI systems must meet specific disclosure obligations.

The Act applies lighter regulatory scrutiny to AI systems that pose limited risk, such as chatbots and generative models like ChatGPT. This category is primarily concerned with the risks associated with a lack of transparency in AI usage.

The Act (Article 50) requires ‘limited risk’ AI systems to comply with transparency mandates, informing users of their interaction with AI. If an AI system produces text that is made public to inform people about important matters, it should be identified as artificially generated. This labeling is necessary to ensure transparency and trust in the information. Similarly, images, audio, or video files modified with AI, such as deepfakes, need to be labeled as AI-generated.

Users of emotion recognition systems must also inform individuals when they are being exposed to such technology.
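To make the labeling requirement concrete, here is a minimal sketch of how a provider might bundle generated text with a machine-readable disclosure before publishing it. The schema and notice wording are our own illustrative assumptions; Article 50 mandates the disclosure, not any particular format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledContent:
    """AI-generated text bundled with a machine-readable disclosure.

    The field names and notice wording are illustrative assumptions;
    the Act requires disclosure, not a specific schema.
    """
    text: str
    ai_generated: bool = True
    generator: str = "example-model"  # hypothetical model identifier
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def render(self) -> str:
        """Prepend a human-readable notice when the content is published."""
        notice = "[Notice: this text was artificially generated.]"
        return f"{notice}\n{self.text}" if self.ai_generated else self.text

# Example usage: wrap model output before making it public.
article = LabeledContent(text="Summary of today's market movements...")
print(article.render())
```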

Minimal or no risk

Action: None required. Providers are encouraged to adhere to voluntary codes of conduct and best practices to ensure ethical and responsible use.

AI systems that pose minimal or no risk sit at the bottom of the risk hierarchy and are considered safe for free use. These include technologies such as AI-enabled video games and spam filters, which may operate in the EU market without the stringent requirements that apply to higher-risk systems.



Practical implementation for providers of high-risk AI

The EU AI Act imposes several obligations on providers of high-risk AI systems to guarantee compliance with regulatory standards. Before deploying high-risk AI technology, these businesses must conduct an initial risk assessment. Here’s a brief overview of the assessment process, including who conducts it and what steps are involved:

  • Developers: Conduct initial risk assessments and classify their AI systems based on provided guidelines.
  • Notified bodies: For high-risk AI systems, these independent entities may need to verify compliance.
  • National competent authorities: Oversee compliance, conduct audits, and enforce regulations.
  • Continuous monitoring: Developers must continuously monitor and reassess their AI systems to ensure ongoing compliance.

In addition to quality management and transparency, human oversight is a mandatory requirement for the operation of high-risk AI systems to ensure accountability. 

Post-market monitoring systems are also required to track the performance and impact of high-risk AI systems. Providers must maintain comprehensive records and report any serious incidents involving high-risk AI systems. 
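As a loose illustration of what such record-keeping might look like, the sketch below captures a serious-incident report as a structured record that could be archived or forwarded to the relevant authority. All field names and the severity scale are assumptions made for this example, not a schema prescribed by the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SeriousIncidentReport:
    """Structured record of a serious incident involving a high-risk AI system.

    Field names and severity scale are illustrative assumptions,
    not a schema prescribed by the EU AI Act.
    """
    system_id: str
    description: str
    severity: str              # e.g., "critical", "major", "minor"
    affected_parties: int
    corrective_action: str
    reported_at: str = ""

    def finalize(self) -> str:
        """Timestamp the report and serialize it for archiving or submission."""
        self.reported_at = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self), indent=2)

# Example: documenting a misclassification incident.
report = SeriousIncidentReport(
    system_id="triage-assist-v1",
    description="Model incorrectly deprioritized emergency calls.",
    severity="critical",
    affected_parties=12,
    corrective_action="Rolled back to previous model version.",
)
print(report.finalize())
```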

In essence, AI providers are required to maintain ongoing quality and risk management to ensure that AI applications remain trustworthy even after they are released to the market.

Provisions for small and medium-sized businesses

Despite imposing strict regulatory requirements, the EU AI Act also includes provisions that support innovation and Small and Medium-sized Enterprises (SMEs). The Act introduces regulatory sandboxes to allow businesses to test AI systems in controlled environments.

Moreover, SMEs and startups benefit from the Act’s leniency in documentation requirements and exemptions from certain regulatory mandates. European Digital Innovation Hubs also provide technical and legal guidance to help SME AI innovators become compliant with the AI Act.

The AI Pact, a voluntary initiative, seeks to support the future implementation of the Act, inviting AI developers from Europe and beyond to comply with the Act’s key obligations ahead of time.

Institutional governance and enforcement

The European AI Office was established in 2024. It has several key responsibilities, including: 

  • Monitoring the enforcement and implementation of the EU AI Act
  • Investigating potential violations of the Act
  • Coordinating enforcement actions to ensure regulatory coherence across all EU Member States
  • Imposing substantial fines for noncompliance with the EU AI Act
  • Fostering collaboration, innovation, and research in AI
  • Engaging in international dialogue
  • Striving to position Europe as a leader in the ethical and sustainable development of AI technologies

These measures highlight the seriousness with which the Act’s provisions are enforced.

Transparency and trust in general-purpose AI

The EU AI Act regards transparency as fundamental, especially for general-purpose AI models. The Act places transparency obligations on these models, such as disclosing AI-generated content and maintaining detailed technical documentation, to enable better understanding and management of them.

General-purpose AI systems without systemic risks have limited transparency requirements. However, those posing systemic risks must adhere to stricter rules under the EU AI Act. This approach ensures that even the most complex and potentially impactful AI models are held to high standards of transparency and accountability.

Future-proofing and global influence

The EU AI Act’s future-proof approach is a significant feature: its rules are designed to adapt to technological change, ensuring the legislation remains relevant and effective as AI technology continues to evolve.

In practice, this means AI providers must engage in ongoing quality and risk management so that their applications remain trustworthy even after market release.

The EU AI Act’s potential global influence is immense. Just as the EU’s General Data Protection Regulation (GDPR) has shaped data protection laws around the world, the EU AI Act could become a global standard, influencing how AI is regulated worldwide.

Countries worldwide are considering the EU AI Act while formulating their AI policies, potentially standardizing its provisions globally. The Act has already inspired countries like Canada and Japan to align their AI governance frameworks with the EU’s approach. Moreover, the Act’s extraterritorial reach means it impacts US companies if their AI systems are used by EU customers, further extending its global influence.

Looking ahead: Next steps for the EU AI Act

Having delved into the details of the EU AI Act, what can we expect next? The Act is set to enter into force between May and June 2024, with phased implementation through 2027 (full timelines are available here).

With some exceptions, the Act will become fully applicable two years after its publication in the Official Journal, while the obligations concerning certain high-risk systems will apply three years after entry into force. This phased timeline allows for a smooth transition and gives businesses ample time to understand and comply with the new requirements.

In conclusion, the EU AI Act is a revolutionary piece of legislation that sets a global standard for AI regulation. It’s a comprehensive and future-proof framework that protects individuals and society while encouraging innovation and development in AI. As the Act moves towards full implementation, its influence on global AI governance will undoubtedly continue to grow.

More FAQs

What is the EU AI Act?

The EU AI Act, adopted in 2024, regulates AI systems in the EU and includes measures to support European startups and SMEs in developing trustworthy AI that aligns with EU values and rules.

How does the Act classify AI systems?

The EU AI Act categorizes AI systems based on their risk to society, leading to different levels of regulatory scrutiny for each category. These classifications include unacceptable, high, limited, and minimal or no risk.

How does the Act support innovation and SMEs?

The EU AI Act supports innovation and SMEs by introducing regulatory sandboxes for testing AI systems and providing leniency in documentation requirements for small and medium-sized enterprises (SMEs). This allows businesses to innovate and test AI technologies in controlled environments while reducing regulatory burdens for SMEs and startups.

How is the Act future-proofed?

The EU AI Act’s future-proof approach allows its rules to adapt to technological change, ensuring that the legislation remains relevant as AI technology continues to evolve. This adaptability is a key strength in addressing future challenges and developments in AI.

Does the Act affect companies outside the EU?

Yes. The EU AI Act has the potential to influence AI policies worldwide, as its provisions could become a global standard for AI regulation and impact companies outside the EU. Its reach extends to companies whose AI systems are used by EU customers.

