June 04, 2024

Colorado Governor Signs Comprehensive AI Bill

On May 17, 2024, Colorado became the first state to enact comprehensive AI legislation. Governor Jared Polis signed Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (“the Colorado AI Act”), which introduces novel requirements for developers and deployers of high-risk artificial intelligence (AI) systems.

Notably, on the same day he signed the Colorado AI Act into law, Polis wrote to the Colorado General Assembly that he had “reservations” about the law, urging the legislature to “fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals” as well as “amend [the] bill” if the federal government does not preempt it “with a needed cohesive federal approach.” 

The Colorado AI Act goes into effect on February 1, 2026. If unamended, the law will have significant effects on those who develop and deploy high-risk AI products.

WHAT IS COVERED?

The Colorado AI Act applies to developers (i.e., tech companies creating the AI) and deployers (i.e., the AI system users) that do business in Colorado. Similar to the EU AI Act, the Colorado AI Act defines an AI system as a “machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” This definition could include all types of AI, not just generative AI. 

However, nearly all of the law’s obligations apply to “high-risk” AI systems that, when deployed, make, or are a substantial factor in making, a consequential decision. Consequential decisions include those pertaining to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services. As currently written, the definition of a high-risk AI system also expressly excludes a long list of technologies, such as calculators, cybersecurity, databases, data storage, and spreadsheets.

PROHIBITION AGAINST ALGORITHMIC DISCRIMINATION

Under the law, both developers and deployers must exercise “reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination” in their high-risk AI systems. In addition, the law provides a rebuttable presumption that a developer or deployer exercised reasonable care if it complies with the law’s requirements.

OBLIGATIONS FOR HIGH-RISK AI SYSTEMS

The Colorado AI Act imposes different obligations on developers and deployers of high-risk AI systems. 

Developers

Developers are required to share certain information with deployers, such as harmful or inappropriate uses of the high-risk AI system, summaries of the types of data used to train the system, reasonably foreseeable limitations of the system, risk mitigation measures taken by the developer, and other information necessary for the deployer to satisfy its obligations under the law.

On their website or in a public use case inventory, developers must also publish information such as the types of high-risk AI systems they have developed and how they manage risks of algorithmic discrimination. The law also requires developers to disclose to the Colorado Attorney General and to known deployers or other developers of the system any risks of algorithmic discrimination that could result from the system’s intended use.

Deployers

The Colorado AI Act requires deployers to adopt a risk management policy and program governing their deployment of high-risk AI systems. The policy and program must be reasonable when measured against recognized AI risk management frameworks, such as the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework or the International Organization for Standardization’s ISO/IEC 42001 standard. Deployers must also assess the reasonableness of their policy and program against a number of additional factors, such as the size and complexity of the deployer.

Additionally, the law requires deployers of a high-risk AI system to complete an impact assessment for the system, an annual impact assessment for any system that remains deployed, and an impact assessment within 90 days of an “intentional and substantial modification” to the system. Deployers must also review the deployment of each of their high-risk AI systems for algorithmic discrimination. If a deployer deploys a high-risk AI system “to make, or be a substantial factor in making, a consequential decision concerning a consumer,” the law imposes several requirements on the deployer, such as notifying the consumer of that deployment and providing additional details about the system. As with developers, the law also requires deployers to publish information about deployed systems on their website and to disclose instances of algorithmic discrimination to the Colorado Attorney General.

TRANSPARENCY FOR ALL AI SYSTEMS

The law has a basic transparency requirement, even for AI systems that are not high-risk. Similar to the Utah Artificial Intelligence Policy Act and chatbot laws in California and New Jersey, the Colorado AI Act requires deployers and developers that make available a consumer-facing AI system to disclose to consumers that they are interacting with an AI system unless the interaction with the system would be obvious to a reasonable person. 

NOTEWORTHY EXEMPTIONS

The Colorado AI Act contains a list of exemptions covering, among others, separate legal obligations, activities subject to legal protections, and industries already subject to AI requirements. For example, the law exempts deployers from most of its requirements if they employ fewer than 50 employees and meet certain conditions.

Certain Insurers

The Colorado AI Act also excludes insurers, as defined in Colo. Rev. Stat. Ann. § 10-1-102(13), who are subject to both:

  • Colo. Rev. Stat. Ann. § 10-3-1104.9, which governs insurers’ use of external consumer data and AI, and 
  • Any rules adopted by the Colorado Commissioner of Insurance under Colo. Rev. Stat. Ann. § 10-3-1104.9.

This means that “every person engaged as principal, indemnitor, surety, or contractor in the business of making contracts of insurance” and subject to the Colorado insurance law governing AI is exempt from the Colorado AI Act. 

Insurers Not Part of This Exemption

Insurers that are not subject to Colo. Rev. Stat. Ann. § 10-3-1104.9, and therefore not part of this exemption, include: 

  • Insurers offering title insurance
  • Qualified surety companies issuing bonds
  • Insurers issuing commercial insurance policies (except for insurers that issue business owners’ policies or commercial general liability policies with annual premiums of $10,000 or less)

Other Exemptions

The law further excludes certain banks and credit unions that are regulated by state or federal entities. These banks and credit unions must be subject to guidance or regulations that are at least as stringent as the requirements of the Colorado AI Act and that require the financial institution to regularly audit its use of high-risk AI systems for discrimination.

Similarly, the law excludes covered entities under the Health Insurance Portability and Accountability Act that provide healthcare recommendations that (1) are generated by an AI system, (2) require a healthcare provider to take action to implement them, and (3) are not considered high risk.

Separately, the law does not restrict a developer’s or deployer’s ability to comply with other legal obligations, such as federal or state laws, regulatory action, or the management of legal claims. The law also exempts obligations that would otherwise adversely affect constitutional rights, such as those protected by the First Amendment.

ENFORCEMENT

A violation of the Colorado AI Act constitutes an unfair trade practice, which is enforced by the Colorado Attorney General. There is no private right of action under this law. 

LOOKING AHEAD

Given Governor Polis’ statement regarding his reservations about the law, the Colorado legislature may modify the law prior to its effective date in February 2026. However, even with changes, the law will impose significant new requirements for developers and deployers of high-risk AI systems. 

Also, the Colorado AI Act implicates key governance steps—such as risk-ranking AI; controlling AI testing data; and continuous testing, monitoring, and auditing—that are present in other laws and regulatory frameworks in approximately 100 countries across six continents. 

Given the significance of and complexities related to this new law, Mayer Brown will continue to closely track the law’s progress and has pioneered a six-step governance approach for clients to navigate these trends as well as to avoid AI mishaps. Feel free to contact any of the authors to discuss. 
