AI Act
The AI Act regulates the placing on the market, putting into service and use of artificial intelligence systems according to the level of risk they present.
Getting started
The European Regulation on Artificial Intelligence (RIA or AI Act) is the first comprehensive legislation dedicated to artificial intelligence. It aims to regulate the development, marketing and use of artificial intelligence (AI) systems that may present risks to health, safety or fundamental rights.
Objectives of the European AI Regulation:
- Ensure that AI systems are safe and respect laws on fundamental rights, EU values, the rule of law and the environment;
- Encourage reliable, human-centric AI;
- Create a uniform legal framework to facilitate investment and innovation;
- Strengthen governance and enforcement of existing laws on AI system safety and fundamental rights;
- Improve the internal market for legal and safe AI applications, and avoid market fragmentation.
In practice
The regulation sets rules, prohibits certain practices and imposes specific requirements on high-risk AI systems.
The regulation takes a risk-based approach, classifying AI systems into four levels:
- Unacceptable risk:
Definition: Systems that threaten the safety, fundamental rights and values of the EU.
Decision: Total ban.
Examples: Subliminal manipulation, social scoring by governments, real-time biometric surveillance in public spaces, etc.
- High risk:
Definition: AI systems with a significant impact on health, safety or fundamental rights.
Decision: Enhanced requirements for compliance with rigorous risk management, data governance and human oversight standards.
Examples: AI used in critical infrastructure (health, transport, energy), education and training systems, automated recruitment tools, AI for essential public services (health, justice), etc.
- Limited risk:
Definition: AI systems presenting a clear risk of manipulation, such as those that interact directly with people or generate content.
Decision: Transparency obligations: users must be informed that they are interacting with an AI.
Examples: Chatbots, content generation, deepfakes, emotion recognition systems, etc.
- Minimal risk:
Definition: All other AI systems, which are not subject to any specific obligations under the AI Regulation.
Decision: No specific regulatory constraints.
Examples: Spam filters, AI-based video games, etc.
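The four levels above can be summarized as a simple mapping from risk level to the corresponding regulatory consequence. The sketch below is purely illustrative (the enum, the example use cases and their assignments are assumptions for demonstration, not a legal classification, which requires analysis of the regulation's annexes):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative assignments echoing the examples in the list above;
# real classification depends on a legal reading of Annexes I and III.
EXAMPLES = {
    "social scoring by a government": RiskLevel.UNACCEPTABLE,
    "automated recruitment tool": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations(level: RiskLevel) -> str:
    """Return the regulatory consequence attached to a risk level."""
    return {
        RiskLevel.UNACCEPTABLE: "Total ban",
        RiskLevel.HIGH: "Risk management, data governance, human oversight",
        RiskLevel.LIMITED: "Inform users they are interacting with an AI",
        RiskLevel.MINIMAL: "No specific regulatory constraints",
    }[level]
```

For instance, `obligations(EXAMPLES["chatbot"])` returns the transparency obligation, mirroring the "Limited risk" entry above.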
The RIA also regulates a new category of general-purpose AI models, such as those underlying generative AI. These models, capable of performing numerous tasks, are difficult to fit into the existing categories. For them, the RIA imposes tiered obligations, ranging from minimum transparency (Article 53) to in-depth assessment and measures to mitigate systemic risks.
When?
The RIA, published in the Official Journal of the European Union on July 12, 2024, came into force on August 1, 2024.
Its application is being phased in:
- February 2, 2025: Ban on AI systems with unacceptable risks
- August 2, 2025: Application of rules for general-purpose AI models
- August 2, 2026: Most remaining provisions of the AI Regulation become applicable, including rules for high-risk AI systems in Annex III (biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration and justice)
- August 2, 2027: Application of rules for high-risk AI systems in Annex I (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.)
Frequently asked questions
Who is affected by the AI Act?
The European Regulation on Artificial Intelligence (AI Act) concerns any organization that supplies, imports, distributes or deploys artificial intelligence systems governed by the regulation. This includes companies, associations and public authorities.
Does the RIA replace the GDPR?
No, the RIA does not replace the GDPR; it complements it. The GDPR continues to apply to all processing of personal data, and complying with the requirements of the RIA helps to meet those of the GDPR.
To find out more, consult the CNIL's Questions & Answers.
How can I find out which regulations apply to my project?
- The RIA applies alone: if my solution uses a high-risk AI system without requiring personal data;
- The GDPR applies alone: if I process personal data without using an AI system subject to the RIA;
- Both apply: if my high-risk AI system requires personal data for its development or deployment;
- Neither applies: if my minimal-risk AI system does not process personal data.
For more information, consult the CNIL's Questions & Answers.
Who are the supervisory authorities and governance bodies under the AI Regulation?
The AI Regulation provides for several levels of oversight and governance:
- European AI Office: set up within the European Commission, it oversees the application of the Regulation in the Member States and monitors general-purpose AI models
- National Market Surveillance Authorities: each member state must designate authorities responsible for overseeing and enforcing rules on AI systems, including prohibitions and requirements for high-risk systems
In France, oversight and governance authorities for the AI Regulation include:
- The CNIL (Commission Nationale de l'Informatique et des Libertés): it plays a central role in overseeing the application of the AI regulation, particularly with regard to the protection of personal data
- The DGE (Direction Générale des Entreprises): it is involved in implementing the rules for businesses and AI innovation
What are the penalties for non-compliance?
The penalties under the AI Act range from 1% to 7% of the company's worldwide annual turnover, or from 7.5 to 35 million euros in fines, whichever amount is higher (for SMEs and start-ups, whichever is lower). The amount depends on the nature of the non-compliance (prohibited uses, requirements for high-risk applications, or transparency requirements for limited risks) and the category of the company.