
Leading AI companies publish safety policies

27th October 2023
Paige West

AI firms have published their safety policies following a request made last month by the Technology Secretary, in a move intended to boost transparency and encourage the sharing of best practice across the AI community.

The UK Government has also introduced a set of emerging safety processes for these companies, setting out ways they can keep their models safe. These measures are expected to inform discussions at the AI Safety Summit at Bletchley Park next week.

The government document details processes for AI companies, including the introduction of responsible capability scaling – a new approach to managing the risks of frontier AI. This would see AI organisations set out in advance which risks they will monitor, who should be notified if those risks are found, and the thresholds at which developers would slow down or pause development until better safety mechanisms are in place.
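To illustrate the shape such a policy might take, the sketch below expresses a responsible capability scaling policy as a simple data structure. It is a minimal, hypothetical example: the risk names, notification parties, and pause thresholds are invented for illustration and are not drawn from the government paper.

```python
from dataclasses import dataclass

@dataclass
class MonitoredRisk:
    """One risk tracked under a responsible capability scaling policy."""
    name: str             # the risk being monitored (hypothetical examples below)
    notify: str           # who must be notified if the risk is observed
    pause_threshold: str  # the point at which development slows or halts

# Illustrative policy only; all entries are invented for this sketch.
policy = [
    MonitoredRisk("autonomous replication",
                  "internal safety board and national regulator",
                  "any self-replication observed in evaluation"),
    MonitoredRisk("cyber-offence uplift",
                  "internal safety board",
                  "capability exceeds publicly available tooling"),
]

def risks_requiring_pause(observed: set[str]) -> list[MonitoredRisk]:
    """Return the monitored risks whose names appear in the observed set."""
    return [r for r in policy if r.name in observed]
```

The point of writing the policy down in advance, in whatever form, is that the monitored risks, responsible parties, and stop conditions are fixed before a capability is discovered rather than decided after the fact.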

Further recommendations include AI developers contracting third parties to test their systems for vulnerabilities and to identify potentially harmful impacts, and making clear whether content has been AI-generated or modified. Innovation sits at the heart of these emerging safety processes, with the UK Government stressing the importance of understanding the risks at the frontier of AI development.
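As a minimal illustration of the labelling recommendation, a generator could attach provenance metadata to its output. The sketch below uses a simple home-grown tag invented for this example, rather than any specific provenance standard; real deployments would use an established, tamper-resistant scheme.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model: str) -> str:
    """Wrap generated text in a record marking it as AI-generated.

    A hypothetical, minimal scheme for illustration only.
    """
    record = {
        "ai_generated": True,
        "model": model,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"provenance": record, "content": text})

print(label_ai_output("Example summary...", model="example-frontier-model"))
```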

The Prime Minister confirmed that the UK will establish the world's first AI Safety Institute. Its mission is to advance global understanding of AI safety and to carefully examine and test new types of AI so that the capabilities of each new model are understood. Collaborating on AI safety research with international partners, policymakers, private companies, academia, and civil society is also a priority.

Recent polling shows international support for a government-backed AI safety institute that would evaluate powerful AI for safety, with 62% of the British public backing the proposal. An international survey of public opinion on AI safety across nine countries, including Canada, France, Japan, the UK, and the USA, found substantial support in most countries for powerful AI to be tested by independent experts.

The paper released today (27th October) details processes and associated practices that some leading AI companies have already adopted, alongside others under discussion in academia and wider civil society. While some of these processes and practices may be relevant to a range of AI organisations, others, such as responsible capability scaling, are designed specifically for frontier AI.

Technology Secretary Michelle Donelan remarked: “This is the start of the conversation and as the technology develops, these processes and practices will continue to evolve, because in order to seize AI’s huge opportunities we need to grip the risks.

“We know openness is key to increasing public trust in these AI models which in turn will drive uptake across society meaning more will benefit, so I welcome AI developers publishing their safety policies today.”

The paper also highlights the ongoing technical challenges of building safe AI systems, including how to evaluate their safety and how to understand the way they reach decisions. By publishing these emerging processes, the UK Government aims to inform the crucial discussion of safe frontier AI at the upcoming summit.

A recent government discussion paper pointed to the rapid pace of progress in frontier AI, a trend expected to continue. Models could advance at unprecedented speed, potentially outstripping human ability to understand, and even control, them.

The UK acknowledges the vast potential AI offers to the economy and society. However, without suitable safeguards, such technologies might present considerable challenges. The AI Safety Summit aims to deliberate on optimal strategies to mitigate risks from frontier AI.

Frontier AI Taskforce Chair Ian Hogarth expressed: “We have focused on Frontier AI at next week’s summit very deliberately as these are the models which are most capable.

“While Frontier AI brings opportunities, more capable systems can also bring increased risk. AI companies providing increased transparency of their safety policies is a first step towards providing assurance that these systems are being developed and deployed responsibly.

“Over the last few months, the UK Government’s Frontier AI Taskforce has been recruiting leading names from all areas of the AI ecosystem, from security to computer science, to advise on the risks and opportunities from AI with the Prime Minister yesterday hailing it a huge success.”

Today's publication on emerging safety practices is intended to help frontier AI companies put effective AI safety policies in place.

Adam Leon Smith, of BCS, The Chartered Institute for IT, and Chair of its Fellows Technical Advisory Group (F-TAG) stated: “This set of emerging, adaptable processes and practices moves the industry forwards significantly, and sets a new bar for research and development.

“It is challenging to talk about how to manage safety when we are dealing in some cases with systems that are too advanced for us to have yet built – but it’s important to have the vision and courage to anticipate the risks.

“The processes here also provide inspiration and best practices that may be useful for managing the risks posed by many AI systems already on the market.”

The UK will host the AI Safety Summit next week, as the government weighs the tough decisions needed to deliver a better future for the next generation, powered by AI.
