Breaking Down the New EU Regulations for Artificial Intelligence

Last month the European Commission released a set of proposed rules for artificial intelligence (AI). This framework is the first of its kind and implements the recommendations outlined in the Commission’s February 2020 whitepaper on the topic. As with 2016’s General Data Protection Regulation (GDPR), Europe leads the way in regulating data and technology, with major ramifications for international companies. Although not yet enacted, this proposal sets a precedent for future AI regulations around the world.

GDPR forced organizations to get serious about data governance. The Artificial Intelligence Act looks poised to do the same thing for AI auditability. The legislation effectively mandates that if you provide or operate an AI system deemed “high-risk” within the EU, its output must be explainable in human terms. This manifests in requirements such as technical documentation describing how the system functions, logs that show how the system reaches a given outcome, and policies to mitigate risk and reduce bias.
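
To make the record-keeping obligation concrete, here is a minimal sketch of what an audit-logging wrapper might look like. It assumes a scikit-learn-style model with a predict method; the function name, log fields, and format are illustrative assumptions, since the draft mandates logging without prescribing a schema.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# A dedicated audit logger; the draft requires logs but does not
# prescribe a format, so this JSON structure is an assumption.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_predict(model, features, model_version, reviewer=None):
    """Run one prediction and record the traceability data the draft
    contemplates: input, output, model version, timestamp, and the
    human involved in verifying the result."""
    prediction = model.predict([features])[0]  # scikit-learn-style API
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the log itself carries no personal data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": str(prediction),
        "human_reviewer": reviewer,
    }
    audit_log.info(json.dumps(record))
    return prediction
```

A production system would add retention and integrity controls, but the shape of the obligation is visible even in this toy: every outcome becomes traceable to an input, a model version, and a human reviewer.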

Perhaps the most important section of the entire document is Annex III, which lays out what constitutes a “high-risk” AI system. Essentially, if the AI system has the potential to negatively impact the life or livelihood of a human being, it’s high-risk. This covers AI systems whose failure would cause physical harm, such as those used as safety features or to control critical infrastructure, as well as a wide variety of applications with the potential for social harm. Systems that manipulate people or produce social trust scores are banned outright, and the use of real-time remote biometric identification in publicly accessible spaces is prohibited for law enforcement except in narrowly defined circumstances. Any system used to regulate access to services, educational opportunities, or employment falls in the high-risk category, as do AI systems used by law enforcement, border control, and judicial authorities.
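
As a rough mental model (not the legal test itself), the tiering the draft describes can be sketched in a few lines of Python. The category names below are paraphrases of Annex III and Article 5, not terms from the text, and the real lists are longer and amendable.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK = "high-risk (Article 6 / Annex III)"
    LIMITED = "limited or minimal risk"

# Paraphrased categories; the Commission can amend the actual
# Annex III list under Article 7.
PROHIBITED_USES = {
    "subliminal_manipulation",
    "exploiting_vulnerabilities",
    "public_social_scoring",
}
HIGH_RISK_USES = {
    "product_safety_component", "critical_infrastructure",
    "biometric_identification", "access_to_services",
    "education", "employment",
    "law_enforcement", "border_control", "administration_of_justice",
}

def classify(use_case: str) -> RiskTier:
    """Toy classifier mirroring the draft's risk tiers."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.LIMITED

print(classify("employment"))  # RiskTier.HIGH_RISK
```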

The proposal has proved controversial as many companies feel it goes too far and many activists feel it does not go far enough. Businesses claim the restrictions will hamper the development of AI in Europe and allow China to dominate this increasingly important sector of technology. On the other hand, civil rights groups are concerned by wide exceptions for law enforcement that allow police and state security agencies to use AI surveillance systems.

The draft legislation consists of 85 articles and 9 annexes. This reader’s guide provides one-sentence summaries of each. The full 108-page draft and its 17-page annex are also available on the European Commission’s website.

TITLE I GENERAL PROVISIONS

  • Article 1: Subject Matter (p. 38) — Outlines the goals of regulation

  • Article 2: Scope (p. 38) — Applies the regulation (with limited exceptions) to:

    • AI systems placed on the market or put into service in the EU

    • AI systems whose users are located in the EU

    • AI systems whose output is used in the EU

  • Article 3: Definitions (p. 39) — Defines 44 key terms

  • Article 4: Amendments to Annex I (p. 43) — Allows for updates to the definition of AI techniques and approaches

TITLE II PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

  • Article 5 (p. 43) — Prohibits AI that manipulates people to cause harm, exploits group vulnerabilities, creates social trustworthiness scores for public authorities, or applies real-time remote biometric identification in publicly accessible spaces (with law enforcement exceptions)

TITLE III HIGH-RISK AI SYSTEMS (p. 45)

  • CHAPTER 1: CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK

    • Article 6: Classification rules for high-risk AI systems (p. 45) — Defines “high-risk” AI systems as:

      • Those that function as a product safety feature

      • Those listed in Annex III, which revolve around potential social harms

    • Article 7: Amendments to Annex III (p. 45) — Allows the Commission to update the list of high-risk systems in Annex III

  • CHAPTER 2: REQUIREMENTS FOR HIGH-RISK AI SYSTEMS

    • Article 8: Compliance with the requirements (p. 46) — Requires high-risk AI systems to follow the rules set forth in the rest of the chapter

    • Article 9: Risk management system (p. 46) — Outlines the risk management system required for high-risk AI systems

    • Article 10: Data and data governance (p. 48) — Requires that training, validation, and testing data follow governance procedures to ensure accuracy and avoid bias

    • Article 11: Technical documentation (p. 49) — Requires up-to-date technical documentation of high-risk AI systems

    • Article 12: Record-keeping (p. 49) — Requires high-risk AI systems to generate logs recording the duration of each use, the databases referenced, the input data leading to a match (as in a biometric identification system), and the human users involved in verification

    • Article 13: Transparency and provision of information to users (p. 50) — Requires high-risk AI systems to be understandable to humans and to have thorough operating instructions

    • Article 14: Human oversight (p. 51) — Requires high-risk AI systems to permit human oversight

    • Article 15: Accuracy, robustness and cybersecurity (p. 51) — Requires that accuracy and security of high-risk AI systems correspond to their intended purpose

  • CHAPTER 3: OBLIGATIONS OF PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES

    • Article 16: Obligations of providers of high-risk AI systems (p. 52) — Requires high-risk AI system providers to ensure compliance, implement management systems, write documentation, and fix anything that is not in compliance  

    • Article 17: Quality management system (p. 53) — Requires providers to implement a system to ensure regulatory compliance

    • Article 18: Obligation to draw up technical documentation (p. 54) — Requires providers to write technical documentation for their high-risk AI systems

    • Article 19: Conformity assessment (p. 54) — Requires providers to put their high-risk AI systems through a regulatory conformity (compliance) assessment before putting them on the market

    • Article 20: Automatically generated logs (p. 54) — Requires providers to save logs

    • Article 21: Corrective actions (p. 55) — Requires providers to correct, withdraw, or recall AI systems that they believe do not conform to the regulation

    • Article 22: Duty of information (p. 55) — Requires providers to notify the EU member states in which their AI system is available of any risks it presents

    • Article 23: Cooperation with competent authorities (p. 55) — Requires providers to meet requests for information and documentation from a national authority

    • Article 24: Obligations of product manufacturers (p. 55) — Makes product manufacturers responsible for the compliance of AI systems sold under their name

    • Article 25: Authorized representatives (p. 55) — Requires providers from outside the EU to designate a representative to manage compliance documentation

    • Article 26: Obligations of importers (p. 56) — Requires importers of high-risk AI systems to ensure their compliance before putting them on the market and to provide documentation when requested

    • Article 27: Obligations of distributors (p. 57) — Requires distributors of high-risk AI systems to ensure their compliance before putting them on the market and to provide documentation when requested

    • Article 28: Obligations of distributors, importers, users or any other third-party (p. 57) — Makes any organization that modifies a high-risk AI system or offers it under its own name subject to the same obligations as a provider

    • Article 29: Obligations of users of high-risk AI systems (p. 58) — Requires users to follow the instructions for AI systems, save logs, and monitor operations

  • CHAPTER 4: NOTIFYING AUTHORITIES AND NOTIFIED BODIES

    • Article 30: Notifying authorities (p. 58) — Requires EU member states to designate notifying authorities that set up procedures and assessments for conformity assessment bodies

    • Article 31: Application of a conformity assessment body for notification (p. 59) — Requires conformity assessment bodies to submit applications to the national notifying authorities

    • Article 32: Notification procedure (p. 59) — Outlines how notifications can take place between notifying authorities and conformity assessment bodies

    • Article 33: Notified bodies (p. 60) — Establishes that notified bodies will determine the conformity of AI systems in an impartial way

    • Article 34: Subsidiaries of and subcontracting by notified bodies (p. 61) — Makes notified bodies responsible for the tasks performed by subcontractors or subsidiaries 

    • Article 35: Identification numbers and lists of notified bodies designated under this regulation (p. 61) — Requires the Commission to assign identification numbers to and keep lists of all notified bodies

    • Article 36: Changes to notifications (p. 62) — Requires notifying authorities to suspend or withdraw the notification of notified bodies that no longer meet their requirements

    • Article 37: Challenge to the competence of notified bodies (p. 62) — States that the Commission will investigate notified bodies when compliance is in doubt

    • Article 38: Coordination of notified bodies (p. 62) — Requires that notified bodies cooperate with one another in assessing AI systems

    • Article 39: Conformity assessment bodies of third countries (p. 63) — Allows conformity assessment bodies outside of the EU to act as notified bodies

  • CHAPTER 5: STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION

    • Article 40: Harmonised standards (p. 63) — Permits AI systems that conform to harmonized standards published in the Official Journal of the European Union to be presumed in compliance with Chapter 2

    • Article 41: Common specifications (p. 63) — Allows the Commission to adopt common specifications where the harmonized standards do not cover the requirements of Chapter 2

    • Article 42: Presumption of conformity with certain requirements (p. 63) — Defines two kinds of high-risk AI that are assumed to be in compliance

    • Article 43: Conformity assessment (p. 64) — Sets up the requirements for the AI regulatory conformity (compliance) assessments

    • Article 44: Certificates (p. 65) — Lays out the language and duration restrictions for certificates

    • Article 45: Appeal against decisions of notified bodies (p. 66) — Requires EU member states to allow appeals against the decisions of notified bodies

    • Article 46: Information obligations of the notified bodies (p. 66) — Outlines what notified bodies must communicate to notifying authorities and to one another

    • Article 47: Derogation from conformity assessment procedure (p. 66) — Allows the special authorization of high-risk AI systems within specific member states for exceptional reasons

    • Article 48: EU declaration of conformity (p. 67) — Requires providers to write a declaration of conformity for each high-risk AI system

    • Article 49: CE marking of conformity (p. 68) — Requires high-risk AI systems to bear the CE marking of conformity

    • Article 50: Documentation retention (p. 68) — Requires providers to save documentation for at least 10 years after an AI system is put on the market

    • Article 51: Registration (p. 68) — Requires providers or their authorized representatives to register high-risk AI systems in the EU database

TITLE IV TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS

  • Article 52: Transparency obligations for certain AI systems (p. 69) — Requires AI systems to make clear to humans that they are interacting with an AI system

TITLE V MEASURES IN SUPPORT OF INNOVATION

  • Article 53: AI regulatory sandboxes (p. 69) — Provides controlled environments at the member state level for the development, testing, and validation of AI systems prior to their being put on the market

  • Article 54: Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandboxes (p. 70) — Erects safeguards around personal data used in AI regulatory sandboxes to ensure it is kept private and deleted after use

  • Article 55: Measures for small-scale providers and users (p. 71) — Prioritizes access to regulatory sandboxes for small-scale providers and start-ups

TITLE VI GOVERNANCE

  • CHAPTER 1: EUROPEAN ARTIFICIAL INTELLIGENCE BOARD

    • Article 56: Establishment of the European Artificial Intelligence Board (p. 72) — Creates a board to facilitate cooperation between national supervisory organizations and the European Commission on the topic of AI

    • Article 57: Structure of the Board (p. 72) — Outlines the make-up of the Board and the procedures for it to adopt rules

    • Article 58: Tasks of the Board (p. 73) — Empowers the Board to share best practices and issue opinions about the implementation of the regulation

  • CHAPTER 2: NATIONAL COMPETENT AUTHORITIES

    • Article 59: Designation of national competent authorities (p. 73) — Requires member states to set up and fund national authorities to apply and implement the regulation 

TITLE VII EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS

  • Article 60: EU database for stand-alone high-risk AI systems (p. 74) — Establishes a database of high-risk AI systems

TITLE VIII POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

  • CHAPTER 1: POST-MARKET MONITORING

    • Article 61: Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems (p. 74) — Requires providers to establish and document systems for monitoring their products after they have been placed on the market

  • CHAPTER 2: SHARING OF INFORMATION ON INCIDENTS AND MALFUNCTIONING

    • Article 62: Reporting of serious incidents and of malfunctioning (p. 75) — Requires providers to report any malfunctions or serious incidents involving their systems to the authorities of the member state(s) where the incident occurred

  • CHAPTER 3: ENFORCEMENT

    • Article 63: Market surveillance and control of AI systems in the Union market (p. 76) — Requires that the national authorities report regularly to the Commission and further defines the responsibilities of these authorities

    • Article 64: Access to data and documentation (p. 77) — Requires all documentation be made accessible to market surveillance authorities and source code be made available when necessary for compliance testing

    • Article 65: Procedure for dealing with AI systems presenting a risk at a national level (p. 77) — Empowers national authorities to investigate and inform the Commission and notified bodies of AI systems that do not appear to be in compliance and pose a national or multinational risk

    • Article 66: Union safeguard procedure (p. 79) — Creates a procedure for dealing with disputes between the judgments of different member states

    • Article 67: Compliant AI systems which present a risk (p. 79) — Empowers national authorities to require providers to mitigate the risks posed by AI systems that are technically in compliance but still found to present a risk

    • Article 68: Formal non-compliance (p. 80) — Requires national authorities to make providers fix lapses in compliance or else bar the system from the market

TITLE IX CODES OF CONDUCT

  • Article 69: Codes of conduct (p. 80) — Encourages the voluntary application of codes of conduct to AI systems that are not considered high-risk

TITLE X CONFIDENTIALITY AND PENALTIES

  • Article 70: Confidentiality (p. 81) — Requires national authorities and notified bodies to respect the confidentiality of information obtained during monitoring and assessment

  • Article 71: Penalties (p. 82) — Empowers member states to establish penalties for organizations that violate the regulation

  • Article 72: Administrative fines on Union institutions, agencies and bodies (p. 83) — Allows the European Data Protection Supervisor to levy fines on EU bodies that do not comply with the regulation

TITLE XI DELEGATION OF POWER AND COMMITTEE PROCEDURE

  • Article 73: Exercise of the delegation (p. 83) — Delegates power from the European Parliament and Council to the Commission for the purposes of the regulation 

  • Article 74: Committee procedure (p. 84) — Establishes a committee to assist the Commission with the regulation

TITLE XII FINAL PROVISIONS

Articles 75-82 amend prior legislation to reference the new regulation:

  • Article 75: Amendment to Regulation (EC) No 300/2008 (p. 84)

  • Article 76: Amendment to Regulation (EU) No 167/2013 (p. 84)

  • Article 77: Amendment to Regulation (EU) No 168/2013 (p. 85)

  • Article 78: Amendment to Directive 2014/90/EU (p. 85)

  • Article 79: Amendment to Directive (EU) 2016/797 (p. 85)

  • Article 80: Amendment to Regulation (EU) 2018/858 (p. 86)

  • Article 81: Amendment to Regulation (EU) 2018/1139 (p. 86)

  • Article 82: Amendment to Regulation (EU) 2019/2144 (p. 87)

  • Article 83: AI systems already placed on the market or put into service (p. 87) — Grandfathers in AI systems that are already components of large-scale IT systems

  • Article 84: Evaluation and review (p. 87) — Requires the Commission to consider updates to Annex III every year, evaluate and review the regulation every four years, and submit proposals to amend the regulation

  • Article 85: Entry into force and application (p. 88) — Sets timelines for when the regulation will go into effect

ANNEX I ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES (referred to in Article 3(1)) — Defines the techniques and approaches considered to be AI

ANNEX II LIST OF UNION HARMONISATION LEGISLATION — Lists EU directives and regulations considered to be harmonizing legislation

  • Section A: List of Union harmonization legislation based on the New Legislative Framework

  • Section B: List of other Union harmonization legislation

ANNEX III HIGH-RISK AI SYSTEMS (referred to in Article 6(2)) — Lists the use cases that constitute high-risk AI systems:

  • Biometric identification

  • Potential to do physical harm

  • Potential to limit access to service or opportunities

  • Related to law enforcement or administration of justice

ANNEX IV TECHNICAL DOCUMENTATION (referred to in Article 11(1)) — Outlines what must be included in the required technical documentation  

ANNEX V EU DECLARATION OF CONFORMITY — Outlines what must be included in the required declaration of conformity

ANNEX VI CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL — Lays out how to assess conformity

ANNEX VII CONFORMITY ASSESSMENT OF QUALITY MANAGEMENT SYSTEM AND ASSESSMENT OF TECHNICAL DOCUMENTATION — Lays out how to assess the quality management system and technical documentation of a high-risk AI system

ANNEX VIII INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51 — Lists the information needed to register a high-risk AI system 

ANNEX IX UNION LEGISLATION ON LARGE-SCALE IT SYSTEMS IN THE AREA OF FREEDOM, SECURITY AND JUSTICE — Lists the EU legislation that concerns the large-scale IT systems that include grandfathered-in AI systems

Joe Hilleary
