The European regulation on artificial intelligence came into force on August 1, 2024, and is already partially applicable. It aims to regulate the use of artificial intelligence (AI) to protect people's rights and freedoms.
This new regulation affects organizations that develop AI systems and bring them to market (known as providers), and those that use them for business purposes (known as deployers).
The regulation adopts a risk-based approach: every provider or deployer of an AI system must therefore determine the level of risk the system is likely to pose.
The regulation defines four levels of risk:
| Risk level | Criteria | Obligations |
| --- | --- | --- |
| Unacceptable risk | Systems considered a clear threat to people's safety, livelihoods, and rights. | Prohibited. |
| High risk | Systems likely to present serious risks to health, safety, or fundamental rights. | Conformity assessment, technical documentation, risk-management system, etc. |
| Limited risk | Systems subject to specific transparency measures. | Users must be informed that they are interacting with a machine, so they can make an informed decision. |
| Minimal risk | Systems that do not fall into any of the above categories. | No specific obligations. |
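To make this triage concrete, here is a minimal Python sketch of how a provider or deployer might record the classification internally. The `RiskLevel` enum, the `classify` helper, and its boolean inputs are illustrative assumptions, not terms defined by the regulation:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, documentation, risk management
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no specific obligations

def classify(clear_threat_to_rights: bool,
             serious_health_or_safety_risk: bool,
             interacts_with_people: bool) -> RiskLevel:
    """Illustrative triage mirroring the table above (inputs are assumed)."""
    if clear_threat_to_rights:
        return RiskLevel.UNACCEPTABLE
    if serious_health_or_safety_risk:
        return RiskLevel.HIGH
    if interacts_with_people:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Example: a customer-facing chatbot with no safety impact -> limited risk.
print(classify(False, False, True))  # RiskLevel.LIMITED
```

In practice, the assessment behind each of these inputs is a legal analysis against the regulation's annexes, not a simple boolean; the sketch only shows how the four tiers relate.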
This regulation applies in several stages:
| Date | Measures |
| --- | --- |
| February 2, 2025 | Prohibition of AI systems presenting unacceptable risks; AI literacy obligations for providers and deployers |
| August 2, 2025 | Application of rules on general-purpose AI models; governance framework and penalties |
| August 2, 2026 | General application of the regulation, including rules for Annex III high-risk AI systems |
| August 2, 2027 | Application of rules relating to Annex I high-risk AI systems (toys, radio equipment, in vitro diagnostic medical devices, etc.) |
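A compliance team tracking these milestones could encode them as a simple date lookup. The sketch below is hypothetical; the dates come from the table above:

```python
from datetime import date

# Milestones from the table above.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk systems; AI literacy"),
    (date(2025, 8, 2), "rules on general-purpose AI models; governance, penalties"),
    (date(2026, 8, 2), "general application, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "rules for Annex I high-risk AI systems"),
]

def applicable_measures(on: date) -> list[str]:
    """Return every measure already applicable on the given date."""
    return [measure for deadline, measure in MILESTONES if on >= deadline]

print(applicable_measures(date(2026, 1, 1)))  # the first two milestones apply
```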
The European Commission has published supporting guidance, including guidelines on the definition of an AI system and on prohibited AI practices.
Failure to comply with the regulation exposes the offender to an administrative fine whose amount depends on the nature of the infringement and may reach EUR 35,000,000 or 7% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher.
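Concretely, the ceiling is the higher of the two amounts. A short sketch of the arithmetic (the `max_fine_eur` function is purely illustrative):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine: EUR 35 million or 7% of worldwide
    annual turnover for the preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For EUR 1 billion in turnover, 7% = EUR 70 million, which exceeds EUR 35 million.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```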
In addition, regardless of their level of risk, AI systems can undermine other rights and freedoms. Providers and deployers, as well as users of AI systems, therefore need to remain extremely vigilant.