What fears do people have around AI?

16th July 2020
Alex Lynn

A new survey of 2,000 UK adults has discovered what fears people have around artificial intelligence (AI). It found that people want increased regulation and more accountability in the field.

The AI firm commissioned an independent survey among 2,000 UK adults to uncover their attitudes towards the current state of AI development. It found that the majority (64%) want to see more regulation introduced so that the technology is safer to use and does not pose threats to society.

Those aged over 55 appear more sceptical of AI, with almost three quarters (73%) keen to see additional guidelines introduced to improve safety standards. This is in comparison to just over half (53%) of those aged between 18 and 34 who held this view.

The majority of people also hold high expectations when it comes to AI’s performance and functionality. 61% of respondents said that AI should not make any mistakes when undertaking an analysis or making decisions. Even more people (69%) think that a human being should always be monitoring and checking decisions made by AI.

When questioned about the chances of AI making a miscalculation, 45% of UK adults believe it is harder to forgive mistakes made by machines than those made by humans. This figure is similar across all demographics.

Elsewhere, Britons also want to see companies take more accountability. 72% of people believe that companies that develop AI should be held responsible for any mistakes that the technology makes. At 81%, those aged over 55 were the most likely to hold this view, while at 60%, millennials were the least likely to agree.

Nikolas Kairinos, the firm’s Founder, said: “We are increasingly relying on AI solutions to power decision making, whether that is improving the speed and accuracy of medical diagnoses, or improving road safety through autonomous vehicles. As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.

“It is reasonable for people to harbour concerns about systems that can operate entirely outside human control. AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards. That said, the approach to regulation should be a delicate balancing act.

“AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection. While lawmakers may need to refine responsibility for AI’s actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better.”

© Copyright 2021 Electronic Specifier