Navigating the future: Key AI regulations for 2024

The acceleration of AI adoption in technology has been nothing short of revolutionary, offering immense opportunities for innovation, efficiency, and product development. From automating mundane tasks to generating predictive insights, artificial intelligence is at the forefront of transforming industries and enhancing human capabilities. The promise of AI extends to creating smarter cities, improving healthcare outcomes, and driving economic growth through intelligent automation. 

However, this rapid advancement is not without risk. Concerns ranging from dystopian doomsday scenarios to job displacement and cybersecurity threats raise urgent questions about the ethical and societal implications of AI. These concerns necessitate robust regulations to govern AI’s usage and potentially curtail some of its more advanced research.

In the last two years, a turbulent post-COVID world has seen an unprecedented scramble by countries to understand this powerful technology, its vast potential, and the accompanying risks. Governments are racing to put governance frameworks in place to ensure that the benefits of AI outweigh its potential downsides. This urgency is driven by the need to safeguard the public interest while fostering innovation in a rapidly evolving technological landscape. As we stand on the cusp of a new era in AI, the importance of balanced and forward-thinking regulation cannot be overstated.

Key takeaways

  • The United States is moving towards more comprehensive AI regulation, with recent legislative efforts and executive orders hinting at potential federal frameworks to address the current fragmented landscape.
  • The European Union’s AI Act aims to set global standards in AI governance, emphasizing transparency, accountability, and ethical principles, particularly through a risk-based approach to regulation.
  • China’s strategic AI regulations focus on data security and international collaboration. The country aims to position itself as a global leader in AI innovation while emphasizing compliance and timely risk management.

An overview of AI regulations

As AI technology advances, compliance with laws and regulations grows more complex, raising new challenges that include algorithmic accountability and questions about the role legal professionals will play going forward.

Below, we’ve listed some of the key AI regulations and regulatory proposals in 2024.

Region: U.S.

Name: AI Bill of Rights

  • Description: Focuses on ensuring fairness, privacy, and transparency in AI systems.

Name: White House Executive Order on AI

  • Description: Seeks to tackle threats the new technology could pose to civil rights, privacy, and national security, while promoting innovation and competition and the use of AI for public services.

Name: Algorithmic Accountability Act

  • Description: Mandates impact assessments for AI systems used in critical sectors such as finance and healthcare.

Name: DEEP FAKES Accountability Act

  • Description: Requires creators and distributors of deepfake technology to include watermarks indicating altered media.

Name: Digital Services Oversight and Safety Act

  • Description: Mandates transparency reports, algorithmic audits, and accountability measures to protect consumers and ensure safe use of digital services.

Name: NIST’s AI Risk Management Framework

  • Description: Emphasizes a risk-based approach to ensure AI technologies are trustworthy, fair, and secure.

Region: Canada

Name: Artificial Intelligence and Data Act (AIDA)

  • Description: Aims to regulate AI to protect personal data and ensure ethical use.

Name: Pan-Canadian Artificial Intelligence Strategy

  • Description: Enhances investments in AI research while emphasizing ethical standards and inclusivity.

Region: EU

Name: European Union’s Artificial Intelligence Act (EU AI Act)

  • Description: Comprehensive framework categorizing AI systems into risk levels (unacceptable, high, limited, minimal) and imposing strict requirements on high-risk systems.

Name: Digital Services Act (DSA)

  • Description: Addresses the accountability of online platforms, including AI-driven services, focusing on transparency and user safety.

Region: UK

Name: National AI Strategy

  • Description: Focuses on maintaining leadership in AI innovation while promoting ethical AI and robust safety standards.

Name: AI White Paper

  • Description: Proposes flexible regulatory frameworks to encourage innovation while ensuring AI technologies are trustworthy and transparent.

Region: China

Name: AI Development Plan

  • Description: Emphasizes becoming a global leader in AI by 2030, with a focus on innovation, data protection, and international collaboration.

AI regulation in the United States

The United States is at a pivotal juncture regarding AI regulation. At present, it lacks comprehensive federal legislation specifically overseeing artificial intelligence systems. Instead, a patchwork of regulations adopted by individual states and various sector-specific entities addresses AI-related matters piecemeal.

This fragmented regulatory environment forces companies operating across state lines to navigate carefully. Examples include:

  • The Algorithmic Accountability Act
  • The DEEP FAKES Accountability Act
  • Digital Services Oversight and Safety Act

Meanwhile, AI is also mentioned in other sector-specific regulations. For example, the Federal Aviation Administration Reauthorization Act includes language requiring a review of AI in aviation.

In this context, without an overarching federal system for regulating artificial intelligence, existing laws and regulations concerning privacy and intellectual property have become makeshift tools for governing the domain of AI technologies within the U.S. 

These preexisting statutes are being reinterpreted and modified in attempts to meet the novel challenges presented by AI advancements. State-level legislation, which often carries implications beyond state borders, further complicates compliance for enterprises. Businesses must therefore comply not only with the rules of the jurisdictions where they are physically located, but also with the varying AI requirements of every jurisdiction in which they operate.



Let’s look at two of the more important frameworks in the US that seek to regulate the use of artificial intelligence.

White House Executive Order on AI

In October 2023, the White House issued an Executive Order on AI, marking a significant step toward comprehensive federal oversight of artificial intelligence technologies. This directive aims to establish a unified framework for the development, deployment, and regulation of AI across various sectors in the United States. Key components of the Executive Order include:

  • Federal coordination: The order mandates the creation of an interagency task force to ensure cohesive policy development and implementation across federal agencies. This task force is responsible for aligning AI initiatives with national security, economic growth, and ethical standards.
  • Transparency and accountability: The directive emphasizes the need for transparency in AI systems, requiring federal agencies to adopt measures that ensure AI decision-making processes are understandable and auditable. This includes guidelines for explainable AI (XAI) and public disclosure of AI use in government services.
  • Ethical considerations: The Executive Order underscores the importance of ethical AI development, advocating for the incorporation of fairness, non-discrimination, and privacy protections into AI systems. This aligns with broader efforts to foster public trust and confidence in AI technologies.
  • Research and development: The order allocates funding for AI research and development, focusing on areas such as AI safety, robustness, and human-AI interaction. This investment aims to position the United States as a global leader in AI innovation while ensuring that advancements are aligned with societal values.

AI Bill of Rights

The Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy in October 2022, represents a landmark effort to safeguard individual freedoms and rights in the age of artificial intelligence. The initiative seeks to address the ethical and societal implications of AI, providing a framework for protecting citizens from potential harms associated with AI technologies. Key provisions of the AI Bill of Rights include:

  • Right to transparency: Individuals have the right to know when they are interacting with AI systems and to understand how these systems make decisions that affect them. This includes clear disclosures about the use of AI in various contexts, such as hiring, lending, and law enforcement.
  • Right to accountability: The bill holds AI developers and operators accountable for their systems’ outcomes. This includes mechanisms for redress and remediation in cases where AI systems cause harm or violate individuals’ rights.
  • Right to privacy: The AI Bill of Rights enshrines the protection of personal data, requiring AI systems to adhere to strict data privacy standards. This includes limiting data collection to what is necessary and ensuring that data is securely stored and processed.
  • Right to non-discrimination: The bill prohibits AI systems from perpetuating or exacerbating biases and discrimination. It mandates regular audits and impact assessments to identify and mitigate discriminatory practices in AI algorithms.
  • Right to safety: Individuals are entitled to the assurance that AI systems will not pose undue risks to their safety and well-being. This includes rigorous testing and validation of AI technologies before they are deployed in critical applications.
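
The regular audits the non-discrimination provision calls for often start with simple statistical checks. The sketch below applies the "four-fifths rule," a long-standing heuristic from U.S. employment guidance for flagging potential disparate impact; the rule itself is not spelled out in the AI Bill of Rights, and the group names and counts here are hypothetical.

```python
def selection_rate(selected, total):
    """Fraction of a group that received the favorable outcome."""
    return selected / total

def adverse_impact_ratio(protected_rate, reference_rate):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 are commonly flagged under the
    'four-fifths rule'."""
    return protected_rate / reference_rate

# Hypothetical hiring-model audit: favorable outcomes per group.
outcomes = {
    "group_a": {"selected": 45, "total": 100},  # reference group
    "group_b": {"selected": 30, "total": 100},  # group under review
}

ref = selection_rate(**outcomes["group_a"])   # 0.45
prot = selection_rate(**outcomes["group_b"])  # 0.30
ratio = adverse_impact_ratio(prot, ref)       # about 0.67

if ratio < 0.8:
    print(f"Potential adverse impact: ratio {ratio:.2f} < 0.80")
```

In practice a check like this is only a first-pass screen: a ratio below 0.8 signals the need for deeper statistical and legal analysis, not a definitive finding of discrimination.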

These subsections highlight the proactive steps being taken by the United States to regulate AI technologies and protect individual rights, reflecting a broader commitment to ethical AI governance on a national scale.

Now let’s take a look at a few major laws from other countries.

The European Union’s AI Act

The endorsement of the EU AI Act in February 2024 marked significant progress in regulating artificial intelligence within the European Union. This pioneering regulation aims to:

  • Regulate artificial intelligence systems
  • Ensure that businesses deploying AI respect fundamental rights
  • Promote innovation and investment in AI technology
  • Foster the development and uptake of safe and trustworthy AI systems across the EU’s single market
  • Mitigate the risks posed by certain AI systems
  • Set a global standard for AI regulation
  • Emphasize trust, transparency, and accountability


Under the EU AI Act, a risk-based approach categorizes AI systems according to their potential impact on public welfare and fundamental rights. Systems in the “high-risk” category are subject to intensive scrutiny before they can be deployed, ensuring measured control while preserving flexibility for less critical domains and applications.
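
To make the tiered structure concrete, here is a minimal sketch of how an organization might triage its own AI use cases against the Act's four categories. The tier assignments mirror examples commonly cited in discussions of the Act (social scoring is prohibited, recruitment screening is high-risk, chatbots carry transparency duties, spam filters are minimal-risk), but the mapping is illustrative only, not a legal classification, and the function and use-case names are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative mapping only; real classification requires legal
# analysis of the Act's annexes.
EXAMPLE_TIERS = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def deployment_requirement(use_case: str) -> str:
    """Look up the obligations attached to a catalogued use case."""
    tier = EXAMPLE_TIERS.get(use_case)
    if tier is None:
        return "unknown use case: perform a risk assessment first"
    return f"{tier.name}: {tier.value}"

print(deployment_requirement("cv_screening_for_recruitment"))
# HIGH: strict obligations before deployment
```

An actual classification depends on the Act's annexes and on legal analysis; a lookup table like this is useful only for internal triage before consulting counsel.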

Also central to the EU AI Act are principal values such as:

  • Transparency: Specific high-risk AI systems must ensure clarity, making users aware when their interactions involve AI rather than human beings—a vital provision as artificially intelligent entities become more integrated into everyday life.
  • Accountability: The legislation seeks to hold the creators and suppliers of these technologies responsible for the actions and verdicts rendered by their products.
  • Ethical standards: An imperative focus is placed on advocating for the ethical usage of AI technologies so that they contribute positively while being harnessed responsibly.

These foundational principles endeavor both to address user reservations regarding these emerging tools and foster confidence in them.

China’s strategic AI regulations

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). (Carnegie Endowment for International Peace)

China has adopted an active and calculated strategy concerning AI regulation, crafting a well-rounded framework that encompasses the Chinese Cybersecurity Law as well as the New Generation AI Development Plan. These regulations are intended to govern both the development and utilization of AI technologies while simultaneously catapulting China to the forefront of global AI innovation. 

China has also bolstered its data-security regime through the Personal Information Protection Law (PIPL), the Data Security Law (DSL), and the Cybersecurity Law (CSL). This multifaceted regulatory scheme underscores China’s dedication to establishing a solid legislative backdrop for AI.

Central to China’s vision is the objective of leading global AI innovation by 2030. To realize that ambition, China is building operational frameworks for AI governance that give the state oversight of research directions in the field. These mechanisms allow authorities to ensure that advances in artificial intelligence align with national interests and ethical benchmarks, pairing compliance enforcement with proactive management of the risks these sophisticated systems pose.

Canada’s balanced approach to AI

Canada has advanced its federal oversight of artificial intelligence with the Artificial Intelligence and Data Act (AIDA), a vital part of AI Bill C-27. This legislative move underscores Canada’s proactive stance in thoroughly tackling the complex issues surrounding AI technologies. AIDA sets out to introduce uniform standards for AI systems throughout Canada, govern cross-border trade involving these systems, and ban activities connected to AI that could cause significant harm or produce prejudiced outcomes.

Notably, AIDA establishes the office of an Artificial Intelligence and Data Commissioner to oversee enforcement of the act, underscoring how seriously Canadian authorities take compliance with rules governing these emerging technologies. The Act stipulates severe financial penalties, ranging from CAD$10 million up to five percent of global gross revenues depending on the severity of the breach, sending a clear message about the consequences of regulatory violations.

Key issues in AI legislation

The incorporation of artificial intelligence into numerous spheres of our daily lives – and the tools we use – presents a complex challenge for lawmakers and policymakers. At its core is the imperative to use automated systems responsibly, which calls for robust guidelines to ensure the ethical development and application of AI technologies. This principle underlies the specific concerns tackled within AI legislation, encompassing:

  • Protection of privacy and data (including protection from abusive data practices)
  • Mitigation against bias and discrimination
  • Enhancement of accountability and transparency
  • Assurance of safety and security
  • Safeguarding intellectual property rights
  • Assessments on employment effects
  • Transparency around the use of any generative artificial intelligence system to create content and imagery

Central to advancing this framework is embedding ethical principles at every stage of building AI systems. Such proactive integration serves as a safeguard against potential harm while promoting equity in these technologies. Legislators increasingly recognize that ethics must be foundational rather than supplemental throughout both the design and deployment of artificial intelligence.

Managing AI’s risks without falling behind…

On the flip side, there is a strong desire to encourage advancement and innovation in the field of AI. As the stock market is already showing, AI can supercharge economies and every country wants a piece of that action. Policymakers and industry leaders also recognize that AI holds the potential to revolutionize various sectors, driving economic growth, improving efficiency, and solving complex problems. 

To foster an environment conducive to innovation, regulations must strike a balance between ensuring safety and allowing for creative experimentation. This involves providing incentives for research and development, supporting startups and academic institutions, and creating flexible regulatory frameworks that can adapt to the rapid pace of technological change. By nurturing a culture of innovation, we can harness the transformative power of AI to address global challenges and enhance the quality of life for people around the world.

Compliance and enforcement mechanisms

With the increasing integration of AI technologies into various sectors, regulatory frameworks are evolving to confront new challenges. But how exactly will AI regulations be monitored and enforced? 

In the United States, oversight of AI governance falls to established legal and regulatory enforcement agencies such as:

  • Federal Trade Commission (FTC)
  • Department of Justice (DOJ)
  • Consumer Financial Protection Bureau (CFPB)
  • Equal Employment Opportunity Commission (EEOC)

These bodies utilize their deep-rooted expertise to manage issues pertaining to AI within their specialized areas.

Meanwhile, in preparation for emerging realities, the European Union is developing a standardized method through its AI Act. This legislation suggests establishing an overarching EU-wide AI Board and permits individual member states to appoint one or more market surveillance authorities. The dual-level structure intends to guarantee uniform adherence to regulations across all EU countries while also allowing leeway at state levels. 

China has adopted a collaborative strategy among several organizations in enforcing its own set of rules surrounding artificial intelligence, which includes:

  • Cyberspace Administration of China (CAC)
  • Ministry of Industry and Information Technology (MIIT)
  • Ministry of Public Security (MPS)
  • State Administration for Market Regulation (SAMR)

This reflects China’s holistic approach towards guiding the development and utilization of artificial intelligence within its national borders.

Looking ahead: Future directions in AI regulation

As we contemplate the future trajectory of AI regulation, it’s worth keeping an eye on certain areas. 

California is expected to spearhead legislative movement in 2024, with significant actions on AI that include mandates to disclose information about testing methods and safety protocols for artificial intelligence technology. With its dual role as a major global economy and technological nexus, California’s leadership may well establish patterns that other states follow and that shape federal regulations.

It’s also anticipated that state governments throughout the United States will amplify their examination of how artificial intelligence affects us all while establishing committees or task forces designed to advise on this matter. Such localized experiments with regulatory concepts could ultimately contribute insights that are instrumental to crafting expansive national guidelines.

AI is also likely to become a hot debate topic during the upcoming US election. No doubt the presidential candidates will have differing opinions on how to regulate the tech industry (so far, neither candidate has offered a concrete roadmap for regulating AI and related companies).

Globally speaking, the Organization for Economic Co-operation and Development (OECD) has a huge influence in cultivating cohesive governance over AI. The updated OECD principles on artificial intelligence—revised as recently as May 2024—are poised to steer both national and regional policies regarding this technology by fostering a coordinated stance among disparate legal jurisdictions.

The future of AI regulation will undoubtedly be shaped by ongoing technological advancements, emerging ethical considerations, and the lessons learned from early regulatory frameworks. As we move forward, it’s crucial for policymakers, industry leaders, and the public to engage in ongoing dialogue and collaboration to ensure that AI technologies are developed and deployed in ways that benefit society as a whole. The path ahead may be complex, but by addressing these challenges head-on, we can harness the transformative potential of AI while safeguarding our values and fundamental rights. The journey of AI regulation is just beginning, and its trajectory will play a pivotal role in shaping our technological future.

More FAQs

Does the United States have comprehensive federal AI regulation?

Presently, the United States does not have a broad federal framework specifically governing AI. The regulatory approach is piecemeal, relying on traditional privacy and intellectual property laws, sector-specific agency regulations, and individual state actions.

Nevertheless, recent legislative endeavors suggest a shift toward an overarching national strategy for governing AI technologies.

How does the EU’s approach to AI regulation differ from other regions?

With its risk-focused categorization of AI systems, the EU’s stance on AI regulation stands out by imposing more rigorous standards on applications deemed high-risk. This strategy aims to establish a consistent framework across all member states, ensuring that innovation is harmonized with the protection of fundamental rights and safety.

What are the main challenges in keeping AI systems transparent and accountable?

Ensuring that AI systems remain transparent and accountable poses significant challenges: reconciling transparency with data security, demystifying intricate AI models for people without a technical background, and keeping autonomous decision-making explainable. A further challenge lies in establishing robust supervisory frameworks that do not stifle creativity in the field.

How are countries addressing algorithmic discrimination?

Countries around the world are confronting algorithmic discrimination by pushing for diverse AI development teams, insisting on datasets that accurately represent different populations, adopting strategies to find and reduce bias, and imposing transparency rules. Assessments designed to gauge an AI system’s potential discriminatory impacts are increasingly required before these systems are rolled out.

