
The importance of the UK and US AI safety agreement

18th April 2024
Harry Fowle

At the start of April 2024, the UK and US signed a first-of-its-kind AI safety agreement on the testing of future AI models.

The partnership, signed by Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo, will enable the UK and the US to synchronise their scientific methods and collaborate closely. This cooperation aims to expedite the development of comprehensive evaluations for AI models, systems, and agents.

But why does such an agreement matter? To learn more, Electronic Specifier spoke with two experts from the Institute of Electrical and Electronics Engineers (IEEE): Eleanor Watson, IEEE member, AI ethics engineer and AI Faculty at Singularity University, and Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre.

The necessity of an AI safety agreement

AI has become a cornerstone technology impacting numerous crucial aspects of society, from healthcare to finance, and even legal systems. As AI systems grow more complex and integral to critical infrastructure, the importance of implementing stringent AI safety policies and ethical evaluation cannot be overstated.

As Iqbal explains: “AI has significantly evolved in recent years, with applications in almost every business sector. In fact, it is expected to see a 37.3% annual growth rate from 2023 to 2030. However, there are some barriers preventing organisations and individuals from adopting AI, such as a lack of skilled individuals, complexity of AI systems, lack of governance and fear of job replacement. AI is growing faster than ever before – and is already being tested and employed in sectors including education, healthcare, transportation and data security. As such, it’s time that the Government, tech leaders and academia work together to establish standards for the safe, responsible development of AI-based systems. This way, AI can be used to its full potential for the collective benefit of humanity.”

Risk mitigation, accountability, trustworthiness, public confidence, ethical design and development, continuous monitoring, collaboration, and impact assessment are the principal motivations behind the agreement, and together they underline why it is needed.

Watson comments: “As ethical considerations surrounding AI become more prominent, it is important to take stock of where the recent developments have taken us, and to meaningfully choose where we want to go from here. The responsible future of AI requires vision, foresight and courageous leadership that upholds ethical integrity in the face of more expedient options.

“Explainable AI, which focuses on making machine learning models interpretable to non-experts, is certain to become increasingly important as these technologies impact more sectors of society. That’s because both regulators and the public will demand the ability to contest algorithmic decision-making. While these subfields offer exciting avenues for technical innovation, they also address growing societal and ethical concerns surrounding machine learning.”
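To make the idea of explainable AI concrete, below is a minimal sketch of one common approach: training an inherently interpretable model and reporting which inputs pushed its decisions one way or the other. The dataset and feature names are synthetic placeholders, not taken from any real system.

```python
# Minimal sketch of interpretable-by-design modelling: a linear classifier
# whose weights can be read directly as an explanation of its decisions.
# The data and feature names are synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "account_age", "missed_payments"]  # hypothetical

model = LogisticRegression().fit(X, y)

# Coefficients give a direct, contestable account of what drove the outcome,
# sorted here by how strongly each feature influences the decision.
for name, weight in sorted(zip(feature_names, model.coef_[0]),
                           key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:>16}: {weight:+.3f}")
```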

AI ethics and testing

The importance of AI ethics and testing has only grown as the technology develops, and rigorous evaluation will soon be unavoidable. The earlier those involved in AI development get their hands on the steering wheel, the better.

“There are a lot of misapprehensions about what AI is, its capabilities, limitations and the ways in which things can go very wrong. AI ethics and safety, which are separate yet interlinked domains, are almost never discussed in the same breath. Understanding both ethics (responsible use of AI) and safety (AI behaving itself) is essential in the age of powerful, agentic AI models which are capable of sophisticated reasoning, planning, and independent action,” says Watson.

“Each of us will soon have daily interactions with systems that are capable of independent action, access to tools, and solving complex challenges. Whilst agentic models are far more capable, they present a number of challenges. A system which can act independently needs to understand the preferences and boundaries of others in order not to cause havoc, and aligning values or goals is extremely difficult. Even more challenging is the issue that a benign mission might be fulfilled in an undesirable manner. For example, an AI system tasked with curing a disease might reason that it needs resources and influence to do so, therefore turning to cybercrime. Such dangerous instrumental goals are very challenging to prevent.

“Agentic systems are quite new, and most of the issues that they have raised have only been observed in lab and thought experiments. We have little time to respond to these challenges, at a time when AI is advancing at a tremendous rate. A lot of AI safety efforts will involve rapid response to emerging situations and problems, as well as a deep investigation of incidents so that they can be mitigated in future.”
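One way to picture the kind of oversight Watson describes is a simple gate between an agentic system and the actions it wants to take: permitted tools are allowed through, everything else is blocked and logged for later incident investigation. The policy and action names below are hypothetical, purely to illustrate the pattern.

```python
# Illustrative sketch: proposed agent actions are checked against explicit
# boundaries before execution, and refusals are logged so incidents can be
# investigated and mitigated later. Names and policy are hypothetical.
from dataclasses import dataclass, field

ALLOWED_TOOLS = {"search_literature", "run_simulation", "draft_report"}

@dataclass
class SafetyGate:
    incident_log: list = field(default_factory=list)

    def review(self, action: str, rationale: str) -> bool:
        """Return True if the action may run; otherwise record an incident."""
        if action in ALLOWED_TOOLS:
            return True
        self.incident_log.append({"action": action, "rationale": rationale})
        return False

gate = SafetyGate()
gate.review("run_simulation", "test candidate compound")        # permitted
gate.review("acquire_more_compute", "mission needs resources")  # blocked
print(gate.incident_log)  # blocked attempts kept for post-hoc review
```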

The importance of UK and US partnership

The US and UK often align their interests on the international stage, and that tradition holds for AI, with both nations positioning themselves at the forefront of governance for this revolutionary technology.

On the matter, Watson had this to say: “This partnership is so important. The US and UK are currently the leading jurisdictions in AI Safety, thanks to pioneering efforts at universities and nonprofits in both nations. Joint recommendations for AI safety can be written and agreed, with a common reference of tests and benchmarks by which to compare the capabilities and risks of various agentic AI systems. Such tests could include the ability for systems to generalise to new situations and make reasonable inferences, as well as their level of flexibility towards the completion of tasks. These techniques will be very important at a time when the capabilities of AI exceed our ability to reliably steer them.

“I expect that this developing axis of AI Safety can lead to a proliferation of actionable best practices all across the world. Many people have concerns about agentic AI systems and our ability to continue to influence them, but the future seems a little brighter with this alliance. In terms of next steps, let's involve the rest of humanity in championing this struggle which affects us all.”
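The shared tests and benchmarks Watson mentions could, in their simplest form, look something like the harness sketched below: a fixed set of held-out tasks scored identically for every model under evaluation. The tasks, scoring rule and placeholder model are invented for illustration and do not represent any official UK or US benchmark.

```python
# Minimal sketch of a common evaluation harness: the same held-out tasks and
# the same scoring rule applied to every system under test. Everything here
# (tasks, scoring, the toy model) is a hypothetical placeholder.
from typing import Callable, Dict, List

TASKS: List[Dict[str, str]] = [
    {"prompt": "If all bloops are razzies and all razzies are lazzies, "
               "are all bloops lazzies? Answer yes or no.", "expected": "yes"},
    {"prompt": "A train leaves at 09:00 and arrives at 10:30. "
               "How many minutes is the journey?", "expected": "90"},
]

def evaluate(model: Callable[[str], str]) -> float:
    """Fraction of tasks whose expected answer appears in the model's output."""
    hits = sum(task["expected"] in model(task["prompt"]).lower() for task in TASKS)
    return hits / len(TASKS)

def toy_model(prompt: str) -> str:
    # Stand-in for the system under test; a real harness would call the model here.
    return "yes, 90 minutes"

print(f"score: {evaluate(toy_model):.2f}")
```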
