The EU AI Act and the Emergence of New Global Standards

ABSTRACT: The European Union recently passed the EU AI Act. It is the first major legal framework to address the use and governance of artificial intelligence.

Someone call Paul Revere, because the Europeans are coming! In April 2021, the European Commission, the executive arm of the European Union (EU), proposed a set of rules governing artificial intelligence (AI). Called the European Union Artificial Intelligence Act (EU AI Act), these rules predated the explosion of large language models (LLMs), such as ChatGPT, placing the EU at the forefront of establishing regulatory frameworks for AI. As with other EU legislation related to technology and data, such as the Digital Services Act (DSA), the Data Governance Act (DGA), and the General Data Protection Regulation (GDPR), the EU serves as a model for other nations seeking to establish similar frameworks. 

Although the new AI rules have not yet been implemented, significant strides have been made. On December 9, 2023, the European Parliament and the European Council agreed on the final wording of the EU AI Act. The Act applies to manufacturers, users, distributors, and providers of AI systems used within the EU. When it goes into effect at the end of 2025, it will set a global precedent by creating a comprehensive framework for the development, deployment, and use of AI systems. In an interconnected world, the Act has implications for global AI governance. It also addresses rising public discomfort about AI: a recent survey by the Pew Research Center found that 52% of Americans are more concerned than excited about AI in daily life, and just 10% are more excited than concerned.

How the EU AI Act defines AI

AI Defined. The Act defines AI as a machine-based system that “generates outputs such as content, predictions, recommendations, or decisions” in response to human-defined objectives. These are learning systems that can interpret complex data, learn from data and feedback, and adapt their behavior. Since AI has the potential to be used in every sector, the EU Commission created a broad and inclusive definition to ensure the Act’s provisions remain relevant as AI continues to evolve.  

Risk-Based Approach to AI Regulation

Risk Levels. To guide regulatory development, the EU AI Act proposes a risk-based classification system. AI systems are classified into four distinct categories based on the risk they pose: “unacceptable”, “high”, “limited”, and “minimal/none” (see Figure 1).

  • Unacceptable – These AI systems are banned outright. These include social scoring that rewards or punishes individuals for exhibiting certain personal or social characteristics; untargeted scraping of facial images from the web, video feeds, or other channels, to create a facial recognition database; and dark-pattern systems or applications intended to deceive people into purchasing or doing things without their explicit consent. 

  • High – AI systems considered high risk are typically deployed in critical sectors like biometrics, healthcare, law enforcement, education, and employment. For example, an AI system used for making decisions about a person’s eligibility for surgery would fall in this category. While these systems are not banned, their use is contingent upon strict compliance with regulatory standards to safeguard individual rights. 

  • Limited – This category includes AI systems like chatbots and systems that generate ‘deepfakes’ or other manipulative content. The Act requires the provider to fully disclose to recipients the nature of the AI delivering the information. 

  • Minimal – AI applications that pose little to no risk are in this category. These AI systems include spam filters or AI-enabled video games. These systems are subject to minimal regulatory requirements, allowing them to operate freely under a code of conduct. 

Figure 1. Risk-Based Framework of EU AI Act

Governance in the EU AI Act

Stringent Controls. The EU AI Act is set within a broader legislative ecosystem that includes the DSA and DGA. While its provisions complement protections in the GDPR, the Act also expands the scope beyond personal data. This intertwining of regulations underscores the EU’s holistic approach to governance in the digital space. It aims to ensure that AI systems operate within a framework that upholds individual rights and market integrity. 

Article 56 of the Act establishes a European AI Board (EAIB) whose objective is to ensure that the Act is enforced consistently across member countries. The board is designed to facilitate a cooperative environment by working closely with member state authorities tasked with implementing rules in their jurisdictions. The Board’s responsibility also involves advising on codes of conduct, risk assessment methodologies, and other standards. 

Non-compliance with the Act results in substantial sanctions, which vary depending on the nature of the violation. For example, a company that uses banned AI applications faces a fine of €35 million or 7% of global annual turnover, whichever is higher. Similarly, if an organization supplies incorrect information, it faces a fine of €7.5 million or 1.5% of global annual turnover. 
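The penalty structure above can be sketched as a simple calculation. This is an illustrative example only, not legal guidance; the tier names and function are hypothetical, and the amounts reflect the figures cited in this article.

```python
def eu_ai_act_fine(turnover_eur: float, violation: str) -> float:
    """Estimate the maximum fine in euros for a given global annual turnover.

    Each tier pairs a fixed amount with a percentage of turnover;
    the applicable fine is whichever is higher.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),    # banned AI applications
        "incorrect_information": (7_500_000, 0.015),  # supplying incorrect information
    }
    fixed_amount, pct_of_turnover = tiers[violation]
    return max(fixed_amount, pct_of_turnover * turnover_eur)

# A company with €1 billion in turnover that deploys a banned application:
# 7% of €1B (€70M) exceeds the €35M floor, so the higher figure applies.
print(eu_ai_act_fine(1_000_000_000, "prohibited_practice"))  # 70000000.0
```

Note that for smaller firms the fixed amount dominates: at €100 million in turnover, 7% is only €7 million, so the €35 million floor applies instead.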

In addition to the EAIB, the Commission will establish a new European AI Office. It will primarily oversee and enforce new rules for general purpose AI models. The Office will be supported by a scientific panel, an advisory forum, and will cooperate with the EAIB. This collaborative approach is designed to ensure regulations remain current with technological advancements and industry practices. 


The EU AI Act is a pioneering step in global AI regulation. This act, the first of its kind, sets standards for how AI is defined, classified, and governed. Its comprehensive risk-based approach aims to ensure AI development respects European values of dignity, privacy, and fundamental rights. By integrating a governance structure that builds on the successful aspects of the GDPR, the EU AI Act is poised to help create a safe and innovative digital future. It will push entities around the world to align their AI systems with this new regulatory framework, reinforcing the EU’s role as a regulatory leader in the technology space. 

Nicholas Letarte

Nicholas Letarte is a junior research analyst at Eckerson Group and is pursuing a master’s degree in computer science from Northeastern University. Nicholas originally studied marine biology and helped work...
