Hong Kong's Policy Statement on Responsible Application of Artificial Intelligence in the Financial Market
- Gabriela Kennedy
- Joanna Wong, Legal Assistant
Introduction
On 28 October 2024, the Financial Services and the Treasury Bureau ("FSTB") of Hong Kong issued a policy statement on the responsible application of artificial intelligence ("AI") in the financial market ("Policy Statement").1 The Policy Statement seeks to balance the promotion of AI development with the mitigation of the associated risks to cybersecurity, intellectual property rights and data privacy, under a "dual-track approach".
AI in Hong Kong
The FSTB has focused its analysis on three key attributes of the application of AI in the financial services industry, namely that it is data-driven, double-edged and dynamic.
- Data-driven - Given that the financial services sector is data-driven in nature, the use of AI will improve efficiency and competitiveness.
- Double-edged - Although the FSTB is aware of the opportunities AI can bring, it cautions financial institutions to thoroughly mitigate potential risks, and specifically notes that AI should be used as a complementary tool that strengthens human abilities; it should not replace human analysis and judgment.
- Dynamic - AI will foster innovations and new types of businesses which can serve to advance the financial services industry.
The FSTB recommends that the financial services sector in Hong Kong adopt a dual-track approach when deploying or using AI, so as to ensure its sustainable and responsible use.
Dual-track approach – Capturing Opportunities
The Policy Statement details the different benefits AI applications can bring to the financial services industry, including:
- Data analysis and research - AI can automate research and data analysis to help users with investment decision-making
- Investment and wealth management - Users can capitalise on AI-powered algorithms to improve portfolio management and diversification
- Risk assessment - AI enables large volume data analysis for risk assessment
- Detecting and preventing fraud and other financial crimes - AI's ability to identify patterns or anomalies in financial transactions allows for better detection and prevention of fraud and other financial crimes
- Enhanced customer service - AI virtual assistants and chatbots can be leveraged to provide round-the-clock customer service with prompt and personalised assistance
- Workflow automation - AI can perform repetitive and routine tasks, allowing users to allocate more time for value-added work
Dual-track approach – Preventing Risks
The FSTB emphasises in the Policy Statement that responsible use of AI requires financial institutions to focus on the protection of data privacy and intellectual property rights, information security, accountability, operational resilience and job security. Financial institutions should develop an AI governance strategy which adopts a risk-based approach throughout the AI lifecycle, including the procurement, use and management of AI, with human oversight in place to mitigate potential risks.
The Policy Statement outlines the key risks associated with the use of AI and sets out recommended mitigation measures.
For data privacy, cybersecurity and intellectual property rights protection, the Policy Statement prompts AI users to ensure robust cybersecurity safeguards are in place to protect the AI model and any confidential and personal information used. As personal data and copyright materials may be used as training data for AI models, AI users must ensure their practices comply with the relevant personal data privacy laws and respect intellectual property rights.
As for other risks such as fraud, social engineering attacks and cybercrime, a robust AI detection system is needed to identify and thwart fraudulent activities. The FSTB also calls for industry cooperation in sharing best practices and formulating measures to prevent such risks.
Other categories of risk discussed relate to bias, hallucination, and data and model governance. The FSTB notes that AI users need to ensure the diversity, quality and representativeness of training data to minimise bias in AI-generated output. To guard against hallucination, human oversight is necessary so that inaccurate AI-generated output can be identified and corrected.
Financial institutions using AI need to disclose and be transparent about such use in order to protect consumers and investors, particularly when AI is used to make business decisions. Transparency in the use of AI allows investors and customers to make informed decisions regarding the use of their personal information and other preferences.
Way Forward
The Government aims to collaborate with financial regulators in developing a clear and comprehensive supervisory framework. Given the rapid development and evolution of AI, the Government will continue to adapt its supervisory approach to market developments and draw on international standards. Financial regulators will be responsible for monitoring the deployment of AI in the financial services industry, whilst ensuring the regulatory framework remains adequate in view of the latest developments in AI. Recent initiatives include the Generative AI ("Gen AI") Sandbox launched by the Hong Kong Monetary Authority and Cyberport in August 2024, which encourages banks to embark on novel Gen AI use cases under a risk-managed framework accompanied by supervisory feedback and technical assistance. In November 2024, the Securities and Futures Commission ("SFC") also issued a circular on the use of Gen AI by licensed corporations.2 The circular echoes the risk-based approach of the Policy Statement and focuses on four core principles, namely AI model risk management, senior management oversight, cybersecurity and data risk management, and third-party provider risk management.
Conclusion
The Policy Statement makes it clear that through collaboration with financial regulators and industry players, the Hong Kong Government is seeking to foster a sustainable financial market environment which enables financial institutions to leverage AI effectively and responsibly. As more AI-related laws and regulations emerge, businesses are advised to stay informed of the latest legal and regulatory developments and start putting in place robust AI governance now.
The authors would like to thank Charmian Chan, Trademark Assistant at Mayer Brown, for her assistance with this article.
1 The full text of the Policy Statement is available at: https://gia.info.gov.hk/general/202410/28/P2024102800154_475819_1_1730083937115.pdf
2 The full text of the SFC circular is available at: https://apps.sfc.hk/edistributionWeb/gateway/EN/circular/intermediaries/supervision/doc?refNo=24EC55