Last updated: 21 August 2024
Regulation (EU) 2024/1689, laying down harmonised rules on artificial intelligence (AI), was published in the Official Journal of the European Union on 12 July 2024. The Regulation entered into force on August 1, 2024, and becomes fully applicable in all EU Member States on August 2, 2026. Still, some provisions apply from 2025[1] while others apply from 2027[2].
Under pressure from various stakeholders calling for better regulation, the European Union adopted the "AI Act" on May 21, 2024, one of the first regulations of its kind worldwide. This also led to the creation of an "AI Office" to strengthen the EU's expertise in AI and an "AI Board", composed of one representative per Member State, to advise the European Commission and ensure consistent application of the Regulation.
Additional regulatory initiatives are expected in the coming period, such as the adoption of harmonised standards for the placing on the market, putting into service and use of AI systems covered by this Regulation, or the development of codes of conduct and codes of good practice.
Classification of AI systems based on risk
According to the definition provided by the Regulation, an AI system is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment. For explicit or implicit objectives, it infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
Thus, the European legislator establishes four categories of AI systems according to the risks and potential harmful effects they could generate, providing a series of obligations for operators (AI deployers[3] and AI providers[4]).
1. Unacceptable-risk AI: prohibited AI systems include those that manipulate decisions, exploit vulnerabilities, evaluate or classify people based on their social behaviour or personal traits, or predict the risk of criminal offences. Also prohibited are systems that scrape facial images from the Internet or from video surveillance footage, infer emotions in the workplace or in educational institutions, or categorise people based on biometric data. However, certain exceptions are provided, for example where such systems are used to search for missing persons or to prevent terrorist attacks.
2. High-risk AI: an AI system is high-risk when it is intended to be used as a safety component of a product that is itself covered by Union legislation, as listed in Annex I of the Regulation. These systems must undergo a third-party conformity assessment before the product is placed on the market or put into service, in accordance with European legislation.
Annex III of the Regulation also details systems used in eight specific areas, including: (i) biometric identification; (ii) essential services (management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating or electricity); (iii) education; (iv) human resources; (v) the administration of justice, etc. These systems are subject to reinforced requirements.
3. Low-risk AI: AI systems subject only to a transparency obligation on the part of the provider, who will have to register such a system in the European database. A deepfake or an online chatbot, for example, must therefore be disclosed as such to the user.
According to recital 53 of the Regulation, AI with limited impact on decisions is characterised by several essential conditions: (1) the system performs narrow tasks, such as structuring data or classifying documents, without increasing the risks associated with uses considered high-risk; (2) the AI improves the result of a previously completed human activity, such as refining the language of a document, without significantly influencing the final decision; (3) the AI detects deviations from previous decision-making patterns without changing them, only flagging inconsistencies; (4) the AI performs preparatory tasks, such as file management or translation, with limited impact on subsequent assessments.
4. Minimal-risk AI: AI systems that pose no threat to the user, such as a spam filter, and are not subject to specific regulation. Prior to placing on the market or putting into service AI systems, whether high-risk or non-high-risk, providers, distributors, deployers, public authorities, agencies or other public bodies, as applicable, must register these systems in the EU database. For certain high-risk AI systems in sensitive areas, a secure section of the database not accessible to the public will be provided.
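To make the four-tier structure concrete, the following minimal Python sketch models the classification logic described above. The flag names, the simplified decision order and the recital 53 exception check are hypothetical illustrations, not terms taken from the Regulation; a real legal analysis is considerably more nuanced.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    UNACCEPTABLE = auto()  # prohibited practices (e.g. social scoring)
    HIGH = auto()          # Annex I safety components, Annex III areas
    LOW = auto()           # transparency obligations only (e.g. chatbots)
    MINIMAL = auto()       # no specific obligations (e.g. spam filters)


@dataclass
class SystemProfile:
    # Hypothetical flags summarising the criteria discussed above.
    prohibited_practice: bool = False    # manipulation, social scoring, etc.
    annex_i_safety_component: bool = False
    annex_iii_area: bool = False         # biometrics, education, HR, justice...
    limited_impact: bool = False         # recital 53 conditions are met
    interacts_with_users: bool = False   # chatbots, deepfakes, etc.


def classify(profile: SystemProfile) -> RiskTier:
    """Simplified decision order: prohibitions first, then high-risk,
    then transparency-only, with minimal risk as the default."""
    if profile.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if profile.annex_i_safety_component:
        return RiskTier.HIGH
    # An Annex III system may escape the high-risk tier when it only
    # has a limited impact on decisions (recital 53 conditions).
    if profile.annex_iii_area and not profile.limited_impact:
        return RiskTier.HIGH
    if profile.interacts_with_users:
        return RiskTier.LOW
    return RiskTier.MINIMAL


# Example: a recruitment-screening tool (Annex III, human resources).
print(classify(SystemProfile(annex_iii_area=True)))  # RiskTier.HIGH
```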
Specific regime for general-purpose AI (GPAI)
The Regulation also provides a special regime for what it designates as general-purpose AI models[5], indicating certain obligations to be respected.
A general-purpose AI model is classified as presenting systemic risk if it has high-impact capabilities according to appropriate assessment tools, or if the Commission deems it equivalent based on specific criteria. Increased transparency obligations[6] apply to providers of AI systems intended for the general public, or general-purpose AI (GPAI), generating synthetic audio, image, video or text content (Art. 50).
In addition, providers of such models must make publicly available a sufficiently detailed summary of the content used to train the AI.
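Article 50's requirement that synthetic output be machine-readable and detectable as artificial (see note 6 below) does not prescribe a technical format. The sketch below illustrates one possible approach, attaching a JSON provenance record to generated content; the field names and structure are hypothetical, not mandated by the Regulation, which leaves concrete marking schemes (metadata, watermarking, etc.) to technical standards.

```python
import json
from datetime import datetime, timezone


def mark_as_synthetic(content: str, generator: str) -> dict:
    """Wrap generated content with a machine-readable provenance record.

    Hypothetical format: the Regulation requires detectability but
    does not specify how the marking must be implemented.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


record = mark_as_synthetic("Example of model output.", "example-gpai-model")
print(json.dumps(record, indent=2))  # machine-readable disclosure
```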
Obligations of operators (providers and deployers)
Several obligations are provided for providers and deployers of AI systems, including:
- Establish a quality management system proportionate to the size of the provider's organisation. This means implementing compliance strategies, design and development procedures, system validation processes, data and risk management, monitoring measures, incident reporting, and record-keeping and communication procedures.
- Establish a risk management system: this obligation mainly concerns high-risk AI systems, for which it is necessary to be able to identify and analyse potential risks to health, safety or fundamental rights, estimate and evaluate those risks, and adopt corrective measures.
- Provide technical documentation of the high-risk system: documentation must be kept up to date and demonstrate that the AI system meets the legal requirements. Small businesses, including start-ups, may provide this information in a simplified manner, and the EU will create a simplified form for this purpose. If a high-risk AI system is linked to a product covered by other European legislation (in the financial field, for example), a single set of documents must contain all the necessary information. Providers of high-risk AI systems must retain certain documents for 10 years after the system is made available. In the event of bankruptcy or cessation of activity before that period expires, each EU Member State will decide on the arrangements for making these documents available.
- Keep records: high-risk AI systems must automatically record events and trace system actions, especially in situations where the AI may present a risk or undergo substantial changes. Providers must keep these logs for at least six months, or longer where required by European or national legislation, in particular legislation on the protection of personal data (see the logging sketch after this list).
- Ensure transparency: high-risk AI systems must be designed transparently, so that those who use them can understand and operate them correctly. The instructions must also explain how to interpret the system's results, any predetermined changes to the system, and how to maintain it. The accuracy of these AI systems must be stated in their instructions.
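As a purely illustrative reading of the record-keeping obligation above, the following sketch shows automatic, timestamped event logging for a high-risk system. The event names and the retention constant are hypothetical simplifications, not requirements drawn from the Regulation.

```python
import logging
from datetime import timedelta

# Hypothetical retention floor reflecting the six-month minimum above.
MIN_RETENTION = timedelta(days=183)

# Timestamped, append-only log of system events.
logging.basicConfig(
    filename="ai_system_events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("high_risk_ai")


def record_event(event: str, detail: str) -> None:
    """Automatically record a traceable system action."""
    logger.info("%s: %s", event, detail)


# Examples of events worth tracing under the obligation sketched above.
record_event("inference", "decision issued for input batch 42")
record_event("substantial_change", "model weights updated to v2.1")
record_event("risk_signal", "confidence below threshold, human review flagged")
```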
Obligations of importers and distributors
- Importers must verify that the system has passed the necessary assessments, that it is accompanied by adequate documentation, and that it bears the CE mark. They must indicate their contact details on the system or its packaging, ensure its safe storage and transport, and keep a copy of the certification and instructions for 10 years.
- Distributors must ensure that the system bears the CE mark and is accompanied by a copy of the EU declaration of conformity. If the system does not meet these standards, they cannot sell it. Any non-conformities identified after sale must be corrected, or the product must be withdrawn or recalled.
The AI system certificate
Certificates for AI systems must be written in a language understandable by the relevant national authorities. They are valid for four or five years, depending on the type of AI system, as listed in Annexes III and I of the Regulation respectively, and are renewable after reassessment. A certificate may be suspended or withdrawn for non-compliance, and these decisions are subject to appeal.
Sanctions
Sanctions for violation of the legal provisions may be administrative or criminal, monetary or non-monetary. Member States must adopt enforcement measures and provide for effective, proportionate and dissuasive sanctions. Each Member State must establish a sanctions regime taking into account the nature, gravity and duration of the infringement, its consequences, and the size of the provider, with particular consideration for SMEs and start-ups.
The Regulation provides for maximum penalties of €35 million or 7% of global annual turnover for prohibited AI practices, and €15 million or 3% of turnover for other violations; in both cases, whichever amount is higher applies. Supplying inaccurate information to notified bodies or national authorities can result in an administrative fine of up to €7.5 million or 1% of global annual turnover, whichever is higher.
Finally, the Commission can impose fines on providers of general-purpose AI models of up to €15 million or 3% of their global annual turnover, whichever is higher.
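Since each ceiling is the higher of a fixed amount and a percentage of turnover, the applicable maximum is a simple max() computation. The sketch below works through the three tiers described above for a hypothetical company; the turnover figure is invented for illustration.

```python
# Fine ceilings: (fixed amount in EUR, share of global annual turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_violations": (15_000_000, 0.03),
    "inaccurate_information": (7_500_000, 0.01),
}


def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the applicable ceiling: the higher of the two amounts."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * annual_turnover_eur)


# Hypothetical company with a €2 billion global annual turnover.
turnover = 2_000_000_000
for tier in FINE_TIERS:
    print(f"{tier}: up to €{max_fine(tier, turnover):,.0f}")
# Output (the percentage exceeds the fixed floor in every tier here):
# prohibited_practices: up to €140,000,000
# other_violations: up to €60,000,000
# inaccurate_information: up to €20,000,000
```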
Notes
1. The general provisions and the prohibitions on certain AI practices will apply from February 2, 2025. The provisions on notifying authorities and notified bodies, on general-purpose AI models, on governance, on sanctions and on confidentiality will apply from August 2, 2025, except those relating to fines for providers of general-purpose AI models.
2. The provisions on high-risk AI systems and the corresponding obligations will only apply from August 2, 2027.
3. Deployer: a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
4. Provider: a natural or legal person, public authority, agency or other body that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
5. A general-purpose AI model can perform many distinct tasks, owing to its training on large amounts of data, and can be integrated into a variety of downstream systems or applications. The definition excludes models used for research, development or prototyping before they are placed on the market.
6. For example, people must be informed when they are interacting with an AI system. Outputs of GPAI systems must be marked in a machine-readable format and be detectable as artificially generated. Those deploying AI systems that generate or manipulate image, audio or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated.