November 14, 2024

The Impact of the EU AI Act on AI Reseller Deals


The European Union’s Artificial Intelligence Act (the “AI Act”) is the world’s first comprehensive AI regulation. The AI Act’s extraterritorial reach means that many non-EU companies developing and/or using AI will also be required to comply with its obligations.1

A company’s obligations under the AI Act depend on its role in relation to an AI tool and on the level of risk the AI system poses to individuals’ health, safety and fundamental rights. Providers (developers of AI tools) and high-risk uses are generally subject to stricter obligations than deployers (users of AI tools) and low-risk uses.

A company’s role and an AI tool’s use cases may be complex and may change over time. For example, a company that modifies another company’s low-risk AI tool to offer a high-risk AI system may itself become a provider of the modified AI tool and be subject to a provider’s obligations under the AI Act.

This article describes transactions in which there is an initial provider of an AI tool that will be used by a new provider to create a new AI system. In those transactions, this article recommends that the parties anticipate future uses and modifications of AI tools so that all parties along the value chain can comply with their statutory obligations under the AI Act. For example, the parties will want to clearly outline in the contract their cooperation obligations, including what information, technical access and other assistance the initial provider of the AI tool will provide to help its customers comply with the AI Act. In addition, this article recommends that the parties reduce risks by contractually limiting modifications to and uses of the AI tools, and allocate the remaining risks using warranties, indemnities and limitations of liability.

Obligations of AI Providers Under the AI Act

The AI Act entered into force on August 1, 2024. Although the AI Act will apply generally two years after its entry into force,2 some specific provisions will come into effect earlier or later. The applicability period ranges from six months after entry into force (such as the provisions on banned AI systems) to 36 months after entry into force (such as the provisions on AI systems that are safety components in products subject to the European Union’s product safety legislation).3 Violations of the AI Act can result in fines of up to the greater of EUR 35 million or 7 percent of total worldwide annual turnover for the preceding fiscal year.4

Extraterritorial application

Although the AI Act is EU legislation, its influence extends beyond the EU’s borders.5 US and global companies established outside the European Union that use an AI system anywhere could be subject to the AI Act if the output of that AI system is used in the European Union. For example, the AI Act may apply to a US company using an AI tool to recruit for a job in the European Union because the AI tool’s output is used in the European Union. Similarly, the AI Act may apply to a company that develops an AI system in the United States that is then made available in the European Union.

Different obligations depending on the scenario

A company’s obligations, if any, under the AI Act depend on:

  • Its role – as a provider, deployer, product manufacturer, distributor or importer – with providers of AI systems generally subject to more obligations than deployers,
  • The type of AI – with providers of general purpose AI models (such as foundation models) subject to additional obligations and regulatory oversight compared with other types of AI, and
  • The level of risk applicable to the AI system – unacceptable (prohibited under the AI Act), high-risk (subject to stringent requirements), limited risk (subject to certain transparency obligations) or minimal risk (not subject to additional requirements).

For example, providers of high-risk AI systems (such as AI used as safety components in products subject to product safety legislation, including civil aviation and medical devices, or AI systems used for biometrics or in employment) will have to conduct conformity assessments and comply with extensive compliance obligations (such as relating to cybersecurity, privacy, data governance, risk and quality management, and technical documentation).

In contrast, deployers of limited-risk systems (such as chatbots not falling into the high-risk AI system category) must only comply with transparency obligations under the AI Act.

Navigating AI Act Compliance in AI Reseller Deals

Certain uses and modifications of an AI system may turn a deployer into a new provider of that AI system. Companies should evaluate their respective roles under the AI Act at the point of negotiating the contract for the AI tool, and consider in advance other potential uses and modifications of the AI tool. If the potential uses and modifications may lead to another party also becoming a new provider of an AI tool, the parties should account for this in contractual negotiations.

The AI Act envisages three situations6 where a company may become a new provider of a high-risk AI system:

  1. If the company puts its name or trademark on a high-risk AI system (without prejudice to the contract with the initial provider of the high-risk AI system allocating the responsibilities differently),
  2. If the company makes substantial modifications to a high-risk AI system in a way that it remains a high-risk AI system (such as modifying an AI tool used for recruitment into a tool used also for employee management), or
  3. If the company modifies the intended purpose of the AI system that has not been classified as high-risk in a way that the AI system becomes a high-risk AI system (such as modifying a customer service chatbot into a chatbot that can analyze and evaluate resumes of job applicants in the company’s database).

In any of these situations, the new provider would be subject to the comprehensive obligations of a high-risk AI system provider under the AI Act, and the initial provider will no longer be considered a provider of that new high-risk AI system. In our chatbot example above, the company that created the original chatbot would be the initial provider responsible for the unmodified AI tool’s compliance with the AI Act. However, the company that modified the chatbot to analyze and evaluate job applicants’ resumes would be the new provider responsible for the modified chatbot’s compliance. The initial provider will, however, continue to be a provider of its AI tool with respect to other companies to which it has licensed that tool.

To help new providers achieve compliance with the AI Act, the AI Act also places cooperation obligations on the initial provider of the AI tool.7 Those cooperation obligations include:

  • Making available to the new provider the information necessary to fulfil the obligations of providers of high-risk AI systems under the AI Act. In the case of an initial provider of a high-risk AI system, this may, for example, include the documentation and automated logs required under Articles 18 and 19 of the AI Act, and
  • Providing the new provider with reasonably expected technical access and other assistance required to fulfil the obligations of a provider under the AI Act.

The initial provider can avoid these cooperation obligations by clearly prohibiting any change of its AI tool into a high-risk AI system.8 The initial provider may also want an indemnity from the new provider for any losses resulting from the new provider’s uses of or modifications to the AI tool. If the initial provider cooperates with the new provider, it may want to limit its liability for deficiencies in the cooperation services that it provides. The new provider, in turn, may want a representation and warranty that the initial provider will provide at least the legally required cooperation, along with an indemnity for deficiencies in that cooperation.

While the initial provider may prefer to fulfil its cooperation obligations with minimal effort, the new provider may benefit from more detailed and up-to-date information. These differing interests can be addressed in the contract by clearly defining the scope of the information, technical access and assistance to be provided and the related charges (if any).

As provision of information and technical access may involve disclosing trade secrets and/or confidential information to the new provider, the initial provider may seek confidentiality terms to protect its position. On the other hand, the new provider will seek to ensure that those confidentiality terms are not overly restrictive or impede compliance with the AI Act, including obligations to cooperate with regulators.9

Finally, since the AI Act does not define the meaning of “necessary information,” “reasonably expected technical access” or “other assistance,” the initial provider and the new provider may also want to specify in the contract that they consider the information, access and assistance outlined in it to be reasonable and to meet the requirements of the AI Act.

Key Takeaways

To comply with the AI Act for AI tools and systems being sold or used in the European Union, companies will need to evaluate their roles, the types of AI involved, and the use cases. In addition, the parties may want to contract specifically for cooperation between the initial provider and the new provider of the AI system. As a result, AI providers may want contractual terms to address the AI Act, including terms prohibiting another company from putting its name or trademark on the initial provider’s AI system, making substantial modifications to it, or modifying its intended purpose.

1 For more information about the AI Act, please see Mayer Brown’s Legal Update EU AI Act Adopted or listen to the podcast The EU AI Act and the UK Approach.

2 Article 113 AI Act.

3 Id.

4 Article 99(3) AI Act.

5 Article 2 AI Act.

6 Article 25 AI Act.

7 Article 25(2) AI Act.

8 Id.

9 Article 21 AI Act.
