October 28, 2024

New York State Department of Financial Services Issues Industry Letter on Cybersecurity Risks Arising from Artificial Intelligence


Background

On October 16, 2024, the New York State Department of Financial Services (DFS) issued an industry letter, Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks, providing guidance on the cybersecurity risks associated with the use of artificial intelligence (AI) and strategies for entities regulated by DFS (“Covered Entities”)1 to mitigate these risks.

The guidance reviews the AI-cybersecurity risk landscape and provides a broad overview of controls to mitigate that risk. Although DFS states that the guidance does not impose new requirements, companies would be wise to pay attention. Rather than announcing new rules, the letter addresses how Covered Entities should use the framework in 23 NYCRR Part 500 (“Cybersecurity Regulation”) to assess and mitigate AI-related cybersecurity risks. As a practical matter, however, the guidance will certainly shape how DFS evaluates companies’ cybersecurity programs. And as with other early DFS cybersecurity initiatives, it will likely influence how other regulators approach AI-related cybersecurity risk.

We provide a summary of the guidance below. 

Cybersecurity Risks Arising from AI

The DFS guidance identifies four primary cybersecurity risks arising from AI: two caused by threat actors’ use of AI, and two caused by a Covered Entity’s use of AI.

AI-Enabled Social Engineering: Covered Entities are confronting a surge in social-engineering attacks that use new AI tools to create realistic and interactive audio, video, and text. These attacks often attempt to convince employees to share their credentials, divulge sensitive information about themselves or their employers, or take unauthorized actions, such as wiring company funds to fraudulent accounts.

AI-Enhanced Cybersecurity Attacks: Threat actors can leverage AI tools to scan for and exploit vulnerabilities, conduct reconnaissance, accelerate malware and ransomware deployment, and evade detection. AI coding tools can help accelerate the speed and scale of cyberattacks, and criminals also offer AI tools with advanced capabilities to relatively unsophisticated threat actors, allowing them to initiate their own cyberattacks.

Exposure or Theft of Vast Amounts of Nonpublic Information: Covered Entities may benefit in several ways from deploying AI. But this creates new risks, as many AI tools require the collection and processing of large amounts of sensitive business and personal information.

Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies: Covered Entities often rely heavily on third-party vendors to deploy AI. These vendors introduce additional risk for Covered Entities, including vulnerabilities in the AI tools themselves and the possibility that a Covered Entity will be harmed by a cyber incident affecting its AI vendor.

Measures to Mitigate AI-related Threats

The DFS guidance discusses several controls and measures to mitigate AI-related threats based on the requirements of the Cybersecurity Regulation:

Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans: The Cybersecurity Regulation requires Covered Entities to maintain cybersecurity programs, policies, and procedures that are based on cybersecurity risk assessments. Covered Entities should update their risk assessments to address their organizations’ use of AI and the new risks arising from threat actors’ use of AI. Covered Entities should also adapt incident response and business continuity plans to address disruptions relating to AI. And senior leadership of Covered Entities should exercise oversight of AI-related risks to their companies.

Third-Party Service Provider and Vendor Management: The Cybersecurity Regulation requires Covered Entities to maintain Third-Party Service Provider (TPSP) policies and procedures. Covered Entities should consider the AI-related threats facing their TPSPs and how those TPSPs protect themselves against such threats.

Access Controls: The Cybersecurity Regulation requires Covered Entities to implement multi-factor authentication (MFA), along with other access controls in case MFA fails. Covered Entities should consider using authentication factors that can withstand AI-enhanced social-engineering attacks, such as digital-based certificates and physical security keys. Covered Entities should also limit users’ access to sensitive information to the minimum necessary for users to do their jobs.

Cybersecurity Training: The Cybersecurity Regulation requires Covered Entities to provide cybersecurity awareness training for all personnel, including specific training on social engineering. This training should be adapted to account for AI-fueled social-engineering attacks, such as deepfakes.

Monitoring: The Cybersecurity Regulation requires Covered Entities to implement processes to identify new security vulnerabilities and to monitor user activity and web traffic to protect against malicious activity. Covered Entities should expand such monitoring to cover unusual activity involving their AI applications.

Data Minimization: The Cybersecurity Regulation requires Covered Entities to implement a data retention policy. The guidance goes a step further, explicitly calling for Covered Entities to maintain and update data inventories. Such practices will help entities track their storage and use of sensitive information and minimize the effects of data breaches.

1 Defined in 23 NYCRR § 500.1(e) as “any person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law or the Financial Services Law, regardless of whether the covered entity is also regulated by other government agencies.”
