January 31, 2025

EU AI Act: Ban on certain AI practices and requirements for AI literacy come into effect


The first requirements under the EU Artificial Intelligence (AI) Act come into effect on February 2, 2025, banning the use of AI systems that involve prohibited AI practices and requiring providers and deployers of AI systems to take steps to ensure that their personnel have a sufficient level of AI literacy to operate them.

For background, the EU AI Act entered into force on August 1, 2024, and its requirements are coming into effect under a staggered timeline, with the majority of its provisions being implemented by August 2, 2026.

THE REQUIREMENTS

AI Literacy Requirement (Article 4 EU AI Act)

Article 4 applies to both providers and deployers of AI systems and requires that these organisations take suitable measures to ensure that their staff and other persons engaged in the operation of their AI systems have a sufficient level of AI literacy. The EU AI Act defines AI literacy as the skills, knowledge and understanding required to facilitate the informed deployment of AI systems and to gain awareness about the opportunities and risks of AI and possible harm it can cause.

Organisations must take into account the technical knowledge, experience, education, and training of relevant staff—in addition to the context in which the AI systems are to be used—when determining the appropriate level of AI literacy required.

Recital 20 of the EU AI Act and the definition of AI literacy in Article 3(56) of the EU AI Act seem to broaden the concept of AI literacy and the corresponding obligation to promote it. Under these provisions, AI literacy is intended to allow not only providers and deployers but also affected persons to make an informed deployment of AI systems, and to provide all relevant actors in the AI value chain with the insights required to ensure appropriate compliance and correct enforcement of the EU AI Act. Accordingly, the obligation to promote AI literacy is potentially not limited to staff, subject to expected guidance from AI regulators.

To implement an AI literacy program in line with the EU AI Act requirements, an organization will benefit from considering its specific practices in the development and use of AI, the staff and third parties involved in those activities, and its existing training programs (such as in privacy and cybersecurity), which, given the overlap between these areas, can serve as a solid foundation to build on. Externally, AI literacy requirements might also affect engagement with vendors, clients, persons affected by the use of AI systems, and other players in the AI value chain.

The AI literacy requirement under the EU AI Act applies to all AI systems, regardless of the level of risk those systems present. Although there is no specific penalty or fine associated with breaches of Article 4, such breaches are likely to be taken into account by a regulator when considering what penalties to apply for other breaches of an organisation's obligations under the EU AI Act.

Ban on prohibited AI practices (Article 5 EU AI Act)

Article 5 applies to both providers and deployers of AI systems and prohibits the placing on the market, the putting into service, and the use of:

  1. AI systems that deploy subliminal, manipulative, or deceptive techniques that materially distort the behaviour of a person or group of persons by appreciably impairing their ability to make an informed decision;
  2. AI systems that exploit vulnerability characteristics of a person or group of persons (including their age, disability, or socio-economic status) with the aim of materially distorting their behaviour;
  3. AI systems that use social scoring techniques to evaluate or classify a person or group of persons over a period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
  4. AI systems that use profiling techniques or assessment of personality traits and characteristics to predict the risk of criminal behaviour of individuals;
  5. AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
  6. AI systems that are used to infer emotions of persons in the workplace or education;
  7. Biometric categorisation systems that are used to categorise individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, or sex life or sexual orientation; and
  8. Biometric real-time identification systems that are used in publicly accessible spaces for law enforcement purposes.

The provisions of the EU AI Act that address penalties for breaches of Article 5 will only come into force on August 2, 2025. Fines for a breach of Article 5 may be up to the higher of €35 million or 7% of total worldwide annual turnover.

The prohibited practices under Article 5 are considered to be practices that are harmful and abusive and are prohibited as they contradict Union values, the rule of law, and fundamental rights. There are limited exceptions available for certain prohibited practices (for example, for AI systems that are used to infer emotions of persons, the prohibition does not apply to AI systems placed on the market for medical or safety reasons).

The European AI Office launched a stakeholder consultation in November 2024 on the prohibited practices, the responses to which will inform the preparation of the European Commission guidelines on the definition of AI systems and on prohibited practices. These guidelines, due to be published in early 2025, are intended to offer further clarity on prohibited practices.

TO PREPARE FOR THE REQUIREMENTS

We have been working with clients to review AI literacy programs in line with the requirements of the EU AI Act, as well as to identify potentially prohibited AI practices, taking into account the specific industry and activity of the organization. In certain cases, there may be changes to the use of the AI system or guardrails that can be implemented to mitigate the risk of the AI system falling within the ban. We often work in close cooperation with our global team to address regulatory requirements from other jurisdictions as well.
