Artificial Intelligence

World leaders, top AI companies set out plan as Summit concludes

3rd November 2023
Paige West

Countries and companies developing frontier AI have agreed a ground-breaking plan on AI safety testing, as Prime Minister Rishi Sunak brought the world’s first AI Safety Summit to a close.

In a statement on testing, governments and AI companies have recognised that both parties have a crucial role to play in testing the next generation of AI models, to ensure AI safety – both before and after models are deployed.

This includes collaborating on testing the next generation of AI models against a range of potentially harmful capabilities, including risks to critical national security, safety, and society.

They have agreed that governments have a role in seeing that external safety testing of frontier AI models occurs, marking a move away from responsibility for determining the safety of frontier AI models sitting solely with the companies.

Governments also reached a shared ambition to invest in public sector capacity for testing and other safety research; to share outcomes of evaluations with other countries, where relevant; and to work towards developing, in due course, shared standards in this area – laying the groundwork for future international progress on AI safety in years to come.

The statement builds on the Bletchley Declaration agreed by all countries attending on the first day of the AI Safety Summit. It is one of several significant steps forward on building a global approach to ensuring safe, responsible AI achieved at the Summit, alongside the UK’s trailblazing launch of a new AI Safety Institute.

The countries represented at Bletchley have also agreed to support Professor Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, to lead the first-ever frontier AI ‘State of the Science’ report. This will provide a scientific assessment of existing research on the risks and capabilities of frontier AI and set out the priority areas for further research to inform future work on AI safety.

The findings of the report will support future AI Safety Summits, plans for which have already been set in motion. The Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months, and France will then host the next in-person Summit a year from now.

Prime Minister Rishi Sunak said: “Until now the only people testing the safety of new AI models have been the very companies developing them. We shouldn’t rely on them to mark their own homework, as many of them agree.

“Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released.

“The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world.”

Secretary of State for Science, Innovation and Technology Michelle Donelan said: “The steps we have agreed to take over the last two days will help humanity seize the opportunities for improved healthcare, better productivity at work, and the creation of entire new industries that safe and responsible AI is set to unlock.

“Ensuring AI works for the good of us all is a global endeavour, but I am proud of the singular role the UK has played in bringing governments, businesses and thinkers together to agree on concrete steps forward, for a safer future.”

Yoshua Bengio said: “The safe and responsible development of AI is an issue which concerns every one of us. We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all.

“I am pleased to support the much-needed international coordination of managing AI safety, by working with colleagues from around the world to present the very latest evidence on this vitally important issue.”

The UK has already taken a lead in these efforts by launching the AI Safety Institute, building public sector capability to conduct both safety testing and AI safety research.

The ‘State of the Science’ report, to be led by Turing Award-winning Professor Yoshua Bengio alongside a group of leading academics from around the world, will help AI policymakers in the UK and internationally keep abreast of the rapid pace of change in AI.

As one of the most-cited computer scientists in the world, the founder of the internationally renowned Mila – Quebec AI Institute, and an advisor to both the UK Government and the UN, Professor Bengio is uniquely placed to lead this work.

The foundations laid at Bletchley Park over the past two days will be critical in ensuring AI’s enormous potential can be harnessed, safely and responsibly, to unlock a gear-change in what’s possible in terms of economic productivity, healthcare, education and more.

Demis Hassabis, Co-founder & CEO of Google DeepMind said: “AI can help solve some of the most critical challenges of our time, from curing disease to addressing the climate crisis. But it will also present new challenges for the world, and we must ensure the technology is built and deployed safely. Getting this right will take a collective effort from governments, industry, and civil society to inform and develop robust safety tests and evaluations. I’m excited to see the UK launch the AI Safety Institute to accelerate progress on this vital work.” 
