When will the Act come into effect?
The provisional agreement states that the Act should apply two years after its entry into force, with some provisions taking effect at a later date. Work is still needed to finalise the details of the new regulation, so the Act is likely to come into effect in 2026.
Who will the Act apply to?
The Act will apply to both providers and deployers of in-scope AI systems that are used in, or produce an effect in, the EU, irrespective of their place of establishment. This means that providers and deployers of AI systems in third countries, such as the United States, will have to comply with the EU AI Act if the output of the system is used in the EU.
Which AI systems will the Act cover?
The Act uses the definition of AI systems proposed by the OECD: "An AI system is a machine-based system that [...] infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments."
The Act will not apply to AI systems:
- used exclusively for military or defence purposes;
- used solely for the purpose of research and innovation; and
- used by people for non-professional reasons.
Certain applications will be banned outright under the EU AI Act, including AI systems used for emotion recognition in the workplace and for the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Using AI systems to conduct remote biometric identification in publicly accessible spaces will be allowed only where strictly necessary for law enforcement purposes. Even then, safeguards will need to be put in place, such as limiting the use of these systems to searches for people suspected of the most serious crimes.
What are the requirements of the Act?
The requirements of the EU AI Act differ depending on the level of risk posed by the AI system. For example, AI systems presenting a limited risk would be subject to lighter-touch transparency obligations, such as informing users that the content they are engaging with is AI-generated.
High-risk AI systems would be authorised but subject to tougher requirements and obligations, such as a mandatory fundamental rights impact assessment. Citizens will have a right to receive explanations about decisions based on high-risk AI systems that affect their rights. At the other end of the scale, AI uses posing an unacceptable level of risk would be prohibited.
Some examples, summarised in the illustrative sketch after this list, include:
- Limited risk: chatbots or deepfakes;
- High risk: AI used in sensitive areas such as welfare, employment, education and transport; and
- Unacceptable risk: social scoring based on social behaviour or personal characteristics, emotion recognition in the workplace and biometric categorisation to infer sensitive data, such as sexual orientation.
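To show how this tiered approach fits together, here is a minimal sketch in Python. The tier names, their consequences and the example mapping are assumptions drawn only from the examples in this article; the Act's own classification criteria are far more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers as described in this article (assumption:
    simplified labels, not the Act's own legal definitions)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict obligations"
    LIMITED = "permitted, subject to transparency obligations"

# Hypothetical mapping of the examples listed above to their tiers.
EXAMPLE_USES = {
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "welfare eligibility scoring": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name} ({tier.value})")
```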
What are the penalties for non-compliance?
As with fines under the EU General Data Protection Regulation (GDPR), fines for violating the Act will be calculated as a percentage of the liable party's global annual turnover in the previous financial year or a fixed sum, whichever is higher (a worked sketch of this calculation follows at the end of this section):
- €35 million or 7% for violations which involve the use of banned AI applications;
- €15 million or 3% for violations of the Act's obligations; and
- €7.5 million or 1.5% for the supply of incorrect information.
However, proportionate caps will apply to administrative fines issued against small and medium-sized enterprises and start-ups. Citizens will also be able to lodge complaints about the use of AI systems that affect them.
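To make the "whichever is higher" rule concrete, here is a minimal sketch of the calculation. The tier figures come from the list above; the function name, tier labels and worked example are hypothetical, for illustration only.

```python
# Fine tiers from the provisional agreement: (fixed sum in EUR, share of
# global annual turnover in the previous financial year).
FINE_TIERS = {
    "banned_application": (35_000_000, 0.07),    # EUR 35m or 7%
    "obligation_violation": (15_000_000, 0.03),  # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.015), # EUR 7.5m or 1.5%
}

def maximum_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the maximum fine in euros: the fixed sum or the turnover
    percentage, whichever is higher."""
    fixed_sum, share = FINE_TIERS[violation]
    return max(fixed_sum, share * global_annual_turnover)

# Hypothetical example: a company with EUR 2bn global annual turnover using
# a banned application faces up to 7% of turnover (EUR 140m), since that
# exceeds the EUR 35m fixed sum.
print(f"EUR {maximum_fine('banned_application', 2_000_000_000):,.0f}")
```

Note that this sketch ignores the proportionate caps for small and medium-sized enterprises and start-ups mentioned above, whose details are not set out in this article.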
Next steps
The agreement will undergo technical refinement over the coming weeks before it is submitted to representatives of the EU member states for approval. The final text of the EU AI Act will then be published.