July 16, 2024

EU AI Act Published: Which Provisions Apply When?


On July 12, 2024, the EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal of the European Union. The text of the law is final, and it will enter into force on August 1, 2024. Its provisions will apply according to the staggered timetable below.

STAGGERED TIMELINE OF APPLICATION

  • February 2, 2025 (6 months after entry into force): provisions on banned artificial intelligence (AI) systems will start applying.
  • August 2, 2025 (1 year after entry into force): provisions on general-purpose AI (GPAI) models will start applying.
  • August 2, 2026 (2 years after entry into force): the bulk of the obligations under the EU AI Act will start applying, including key obligations for providers of high-risk AI systems, such as those used in employment and worker management.
  • August 2, 2027 (3 years after entry into force): certain obligations applying to high-risk AI systems that are safety components of products regulated under EU product safety legislation (e.g., civil aviation and medical devices) will start applying.

EXTRATERRITORIAL APPLICABILITY

US and global companies using AI anywhere in the world could be subject to the EU AI Act if the output of the AI system is used in the European Union.

For example, if a high-risk AI system is developed in the United States and integrated into a product (e.g., a connected car) that is then sold in the European Union, the output of the AI system is used in the European Union, and the EU AI Act is likely to apply.

DIFFERENT SETS OF OBLIGATIONS APPLY DEPENDING ON THE SCENARIO

For each AI system developed or used, companies will need to assess several factors to determine their obligations, if any, under the EU AI Act:

  • Their role with regard to the AI system (provider, deployer, importer, or distributor);
  • The type of AI, in particular, whether the system is a general-purpose AI;
  • For general-purpose AI, whether it presents systemic risk; or
  • For other AI systems, the level of risk (unacceptable, high, limited, low, or none).

For example, providers of high-risk AI systems will need to conduct conformity assessments and meet extensive compliance obligations (e.g., in relation to cybersecurity, privacy, data governance, risk and quality management, and technical documentation).

In contrast, deployers of limited-risk systems, such as chatbots, must comply only with transparency obligations set out in the EU AI Act.

For more information and tips on preparing for compliance, please refer to our other Legal Updates: EU AI Act Adopted; European Parliament Reaches Agreement on Its Version of the Proposed EU Artificial Intelligence Act; and European Union Proposes New Legal Framework for Artificial Intelligence. Or listen to our podcast series: The Legal Eye on AI.

***

For help complying with the EU AI Act, contact one of the authors or your regular Mayer Brown contact. We have been advising clients, using our approach for assessing their highest AI risks (one that takes their specific business models into account), and helping them establish robust AI governance.
