How do you standardise AI ethics? A conversation with IEEE

Abiding by the overarching aim of keeping people safe and informed, the IEEE’s standardisation of AI ethics represents how the organisation is thinking about the potential impacts of the technology on the global population.

Patrick Murray, Senior Manager, Conformity Assessment at the IEEE Standards Association, spoke to Electronic Specifier about how the organisation approached standardising a fast-paced and continually evolving technology.

According to Murray, as a member-driven organisation, the IEEE began looking at standardising AI ethics in 2018, when its experts flagged it as an area of interest. This forward-thinking approach exemplifies the IEEE’s strategy.

“We’re all looking for … what is … the next goal,” explained Murray. “How can we benefit society?

“When you have a group of experts that come to you, like when you have a committee, in this case, come to you and say, ‘hey, look … this is a problem that we’re seeing out there, this is something that needs to be addressed’, you have to take that seriously.”

As a result of exploring standardising AI ethics, the organisation now has its IEEE 7000 suite, comprising 14 different standards. The purpose of these standards is to clearly lay out a framework for companies to follow.

IEEE 7000 is made up of four pillars that were the guiding criteria for the organisation to understand what matters most regarding AI: transparency, accountability, bias, and privacy. Accountability, for instance, recognises that all AI technologies are systems designed by people, who must therefore be responsible for all outcomes.

This is particularly pertinent in a world where AI chatbots can act erratically. In recent news, Musk’s Grok AI chatbot began pushing out pro-Hitler rhetoric on his platform X. These posts had to be deleted, and Musk wrote that the issue was being addressed.

Alongside its suite of standards, the IEEE also offers its CertifAIEd certification programme, which draws on these four pillars to evaluate companies that apply. Those who are successful receive a mark verifying that they are upholding these AI ethical principles.

“We had our group of experts identify the growing need for this programme, which doesn’t promote just AI ethics, but it gives individuals, it gives companies, it gives the world the tools for practical implementation of AI ethics,” outlined Murray.

Having a worldwide, not regional, view

The programme is designed for worldwide implementation: “We take regionality out of it, and we look at the picture and the problem as a whole,” said Murray. “So what is going to be an issue that we see in China, that we’ll see in North America, that we’ll see in [the] EU and how can we develop a programme and a product surrounding the best way to support and supplement the industry as a whole?”

This is a slightly different approach from that taken in cybersecurity legislation, for instance, where different regulations for different regions have led to some confusion about what applies and what to follow: the EU’s Cyber Resilience Act applies to products with a digital element sold into the EU, while the US’ Cyber Trust Mark is a voluntary labelling scheme.

There are circumstances where regionality is important for technology standards, said Murray.

“With electrical standards, there might be regional variances based on the … technology of the region,” he explained. “So most of the world is 50Hz. The US is 60Hz, that in and of itself … forms a problem with a globalisation standard.”

However, with something as universally applicable as AI technologies, a global mindset is important.

A global mindset also helps developers wanting to get their AI products over the line and accelerate time-to-market: having a product audited and approved is a key part of the process, and confusion over standards only slows things down.

“What we aim to do is make our programmes globally fit with the rest of the world’s frameworks or identify ways that we can benefit their framework,” added Murray.

How do you standardise AI ethics?

Governments are currently in a “wait-and-see” period regarding AI adoption, Murray said, epitomising a reactive approach where a proactive one is needed. This is where the value of engaging with a programme like the IEEE’s becomes clear, because it eliminates regionality.

“Being able to tie something together, where we can take the best parts of the regional differences of our standards, utilising global knowledge and global opportunities to develop something where we can sit down and say, ‘Alright, everybody’s on board with this is what we need to do’,” he said.

The IEEE’s work with governments is not a lobbying role, but one of engagement and education. “Our work with governments in particular is really more along the lines of … so what is it you’re doing, and how is it that we can best be reflective of your long-term goals and try to coordinate that into a manner that will help not only … build the industry … but really, how can we make it so that … the barrier for entry is not as high.

“The individuals and organisations that are proactive tend to be the global drivers in how we do this right,” continued Murray. “The people that are reactive are left … in the dust.”

Take-home messages

Murray’s take-home message was not to be fearful of adopting AI ethics, but to recognise that these frameworks exist to look out for society’s interests and general wellbeing.

Adoption of AI is not going to slow down, but only speed up, so putting safeguards in place is incredibly important. One example of where AI is being applied and needs to abide by a framework is CV screening.

“Imagine if there is ethical bias in that. Imagine if an individual [is] getting filtered out based on a name or some variable that … like race or gender … that poses problems,” stressed Murray. “We need to be ahead of these problems so that … in the long run, we will keep society moving as it needs to.”
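Murray’s CV-screening example can be made concrete. One widely used bias audit (not an IEEE-specified method, and the data and group names below are hypothetical) is the “four-fifths rule”: compare the selection rate of each applicant group against the highest-performing group, and flag a potential problem if any group’s rate falls below 80% of it. A minimal sketch:

```python
# Illustrative bias audit for a CV-screening filter, using the common
# "four-fifths rule". Hypothetical data; not an IEEE-prescribed method.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of booleans (selected?)."""
    return {group: sum(picks) / len(picks) for group, picks in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True per group if its selection rate is at least `threshold`
    times the best group's rate; False indicates possible adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Toy screening outcomes for two hypothetical applicant groups.
outcomes = {
    "group_a": [True, True, True, False],    # 75% selected
    "group_b": [True, False, False, False],  # 25% selected
}
print(four_fifths_check(outcomes))  # group_b fails: 0.25/0.75 < 0.8
```

A check like this is only a first-pass screen; a full audit would also consider sample sizes and the variables driving the filter, which is the kind of practical implementation the IEEE programme aims to support.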

These are the big-ticket questions IEEE poses itself.

I pointed out that phishing and deepfakes are examples where AI-generated content has become so sophisticated it can fool people: for instance, convincing someone that a phone call from a friend asking them to transfer money is genuine, and tricking them into parting with their money.

“15, 20 years ago … you would know something was photoshopped … but now … you need to sit there and question yourself, is this real?” said Murray. “That’s going to be a problem [we’re] going to have to deal with in the future. Is something real or is something fake?

“In the grand scheme of things, if someone has to sit there and question themselves, is this real? Is this fake? How does this line up with my personal ethos? That’s the grey area that we’re hoping that the 7000 [suite] and CertifAIEd programme addresses.”
