May 30, 2024

AI-Specific Representations in Tech M&A


Navigating the acquisition of any company that makes substantial use of artificial intelligence (AI) requires a nuanced understanding of both its technological intricacies and its legal complexities. As the AI landscape continues to evolve rapidly, we expect to encounter myriad representations and warranties aimed at specific issues in intellectual property rights, data rights, and regulatory compliance relating to AI. In this Legal Update, we delve into the core representations in IP and data privacy, the concerns they cover, and certain AI-specific representations and warranties we have seen in transactions—which, to date, appear largely duplicative of previously existing, standard representations, but with AI-specific language.

Intellectual Property

There are a few key representations regarding intellectual property:

  • Ownership: A representation that the target company owns what it claims to own.
  • Non-Infringement: A representation that the conduct of the business of the target company does not infringe on any third-party intellectual property rights.
  • Sufficiency: A representation that the target company has available to it (through ownership or licensing) all of the intellectual property it needs to operate its business.

In considering AI companies, the core intellectual property representations will cover—with a few notable exceptions—many of the relevant issues. For example, if the target company lacked the rights to use the data that it used to train its model, then the non-infringement representation would be breached, absent an appropriate disclosure. Nonetheless, we see representations specific to AI (defined broadly to mean machine learning, reinforcement learning, deep learning, and other artificial intelligence technologies), such as:

  • Concerning data used to train, refine, or improve the AI technology, a representation that the target company has both (i) obtained all required licenses or consents to collect and use all such inputs in the course of its business; and (ii) complied with all use restrictions governing the collection and use of such inputs. These representations appear to be informed by the recent wave of litigation in the United States against major providers of AI tools over their use of training data, rather than by any gap in the standard non-infringement representation.
  • A representation that the target company has implemented procedures designed to (i) ensure that any of its AI technology is reasonably reproducible; and (ii) enable the company to substitute or replace such AI technology with something reasonably similar. These representations address concerns raised by the White House’s Blueprint for an AI Bill of Rights. In our view, they reflect a concern that AI will be subject to regulatory disruption, and they are additive to the standard representations regarding compliance with law.
  • Very broadly, a representation that the target company has not used any generative AI in a manner that adversely affects the ownership, validity, enforceability, or protection of any of its owned intellectual property. While this representation speaks to the intellectual property rights themselves, it could be drafted in a manner that is overreaching and, in our view, not easily made in light of the ever-changing AI landscape.

The most recent NVCA Model Documents (updated in April 2024) include a few representations specific to “Generative AI Tools,” which they define as “generative artificial intelligence technology or similar tools capable of automatically producing various types of content (such as source code, text, images, audio, and synthetic data) based on user-supplied prompts.” In relation to intellectual property rights, investors are asking the target company to represent that:

  • It has not “used Generative AI Tools to develop any material Company-Controlled Intellectual Property that the Company intended to maintain as proprietary in a manner that it believes would materially affect the Company’s ownership or rights therein.” This representation reflects a concern that the target company owns what it claims to own, and it would similarly be covered by the existing core representations. It also reflects a concern that AI-produced IP may not be protectable for lack of a human author or inventor. This representation is also overly broad, considering the definition of Generative AI Tools. For example, if a company uses an email service that (1) uses AI to draft messages based on prompts; (2) uses a hybrid language generation model to show wording suggestions; (3) includes a feature that generates suggested responses to incoming emails; or (4) uses machine learning to categorize emails into different “tabs,” it has arguably used Generative AI Tools in creating all of its emails. It is unclear whether all such emails would be material, though they would certainly be Company-Controlled Intellectual Property, which is also defined very broadly to include “all intellectual property rights [owned by the company], whether registered or unregistered, that are recognized in any jurisdiction of the world.”
  • It has not included any sensitive personal information, trade secrets, or other confidential or proprietary information in any prompts or inputs provided to tools that use those prompts or inputs to improve the AI tool. This representation addresses the protection of such company information, which is typically covered by the standard representation that the target company has taken commercially reasonable steps to protect and maintain the confidentiality of its trade secrets.
  • It has used AI tools in material compliance with license terms, consents, agreements, and laws. This representation goes to non-infringement and to contractual and legal compliance, though it also raises concerns about whether it can be made accurately, given the evolving AI regulatory landscape.

Data Privacy

As with intellectual property, there are a few key representations that will cover many AI-related issues, such as:

  • Compliance with Laws: A representation that the target company maintains policies regarding data privacy, protection, and security that comply with applicable laws, and that its business has been conducted in compliance with applicable laws.
  • Data Treatment: A representation that the target company has created, collected, used, stored, maintained, processed, recorded, distributed, transferred, received, imported, exported, accessed, manipulated, or otherwise taken actions with respect to data in compliance not only with applicable law, but also with applicable contracts (and/or pursuant to rights in its favor) and with its own privacy policies.

Using the same example, if the target company lacked rights to use the personal data that it used to train its model, then the data treatment representation would be breached, absent an appropriate disclosure, and the target company would also be in breach of the compliance with laws representation.

We are not seeing any data privacy representations specific to AI, other than the representations regarding data described above. While the general compliance with laws representation would likely address AI-specific privacy issues—since existing data privacy and other subject-area regulators (e.g., the Equal Employment Opportunity Commission and the Federal Trade Commission) oversee AI in the United States—it could make sense to use an AI-specific representation in the future as well. The issues such a representation might address pertain to the governance of AI use and/or development, such as a representation that the target company has adopted and implemented written policies, procedures, technical documentation, and logs, based on applicable AI governance laws and commercially reasonable frameworks and guidelines, that address risk management, impact assessments, data governance, risk mitigation measures, common AI ethics principles (transparency and explainability, nondiscrimination, fairness, accuracy, human-centered and human-involved design, robustness, privacy enhancement, safety, security, and contestability), continuous monitoring, and accountability.
