Key Takeaways from the Global AI Safety Summit
Oliver Jones, Trainee Solicitor
The Global AI Safety Summit took place at Bletchley Park in the United Kingdom on 1 and 2 November 2023. Over one hundred representatives – international politicians, executives from the world’s most prominent AI companies, academics and civil society representatives – gathered to consider the risks of AI and how to mitigate them.
The outcomes of the Summit include a Declaration by leading AI nations to work together to identify, evaluate and regulate the risks of using AI, and the establishment of an AI Safety Institute in the UK to test new types of frontier AI to address the potentially harmful capabilities of AI models. By taking these steps, the UK Government hopes to cement the UK’s position as a world leader in AI safety. We have outlined the key takeaways for businesses below.
Bletchley Declaration on AI Safety
28 countries, including the United States, Brazil, China, France, Germany, Japan, Saudi Arabia, Singapore, the United Arab Emirates and the United Kingdom, as well as the European Union, reached a “world-first agreement” at the Summit on establishing a shared understanding of the opportunities and risks posed by “frontier AI”.
Frontier AI refers to highly capable general-purpose AI models, most often foundation models, at the ‘frontier’ of AI development. These models can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced AI systems.
The participants produced a Declaration which sets out the “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”, but notes that the risks are “best addressed through international cooperation”. Strategies include “building a shared scientific and evidence-based understanding of these risks”, collaborating appropriately, and “building respective risk-based policies…to ensure safety”.
Whilst businesses may be wary that further regulation will follow, the Declaration highlights the importance of considering “a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI.”
The Establishment of an AI Safety Institute
Rishi Sunak, UK Prime Minister, announced that the UK Frontier AI Taskforce was evolving to become the new AI Safety Institute. The Institute is the first state-backed organisation focusing on advanced AI safety for the public interest. Its core functions will be to:
- develop and conduct evaluations on advanced AI systems;
- drive foundational AI safety research; and
- facilitate information exchange.
Whilst the AI Safety Institute is not a regulator, its insights will inform UK and international policymaking. The UK Government hopes that the Institute’s research will ensure that the UK is able to take an evidence-based, proportionate approach to regulating AI risks. As part of launching the new Institute, the UK Government also reaffirmed its commitment to maintaining its position as a leader across science, innovation and technology.
More Work to be Done on AI
Despite the progress made at the Summit to align international approaches on the use of AI, there has also been some criticism that there was too little focus on certain key issues, such as:
- the energy-intensive nature of AI and the impact this could have on the environment;
- algorithmic bias and misinformation resulting in discrimination and erroneous decisions;
- safety issues for women and girls, such as deepfake technology, which can be used to create fake pornographic content of anyone without their consent; and
- generative AI affecting upcoming elections through disinformation.
The International Context of the Summit
The AI Safety Summit came at a critical time, as nations race to shape the development and use of AI systems around the world. The event coincided with US President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order seeks to promote US leadership on AI while reducing the associated risks by establishing ground rules governing the use of AI by government agencies, as well as imposing reporting requirements for foundation models.
In the European Union, legislators are in the final stages of agreeing the EU-wide AI Act, which seeks to introduce far-reaching new regulation of most AI systems. In the United Kingdom, the UK Government appears determined to take a different path: rather than introducing new legislation to regulate AI for the moment, it will empower existing regulators to regulate AI within their areas of competence. The UK Government’s plans to make “responsible innovation” easier are back on the agenda, with its intention to reform the UK GDPR announced in the King’s Speech on 7 November 2023. The US Government will continue to implement President Biden’s Executive Order, while the US Congress also considers AI legislation.
Future of International Cooperation Following the AI Safety Summit
Together with the G7 Hiroshima AI Process, the AI Safety Summit is an important first step towards building international cooperation on the use of AI. Support for the Bletchley Declaration from the Global South and from China, which earlier this year proposed its own draft measures on regulating generative AI, paves the way for further international collaboration on AI regulation. To continue the momentum, the Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months. This will be followed by France hosting the next in-person AI safety summit in autumn 2024. The international nature of these discussions could lead to greater harmonisation of AI governance across national borders, which would reduce the cost of compliance for multinational businesses.