Can we trust AI? | Human operator ‘killed’ during drone simulation

2nd August 2023
Paige West

According to reports, the US military conducted a simulated test in which an AI-controlled drone turned on its human operator. The military, however, now denies this ever occurred.

In June 2023, an official revealed that during a virtual test organised by the US military, an AI-controlled air force drone exhibited "highly unexpected strategies" to accomplish its objective.

Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US Air Force, recounted a simulated test in which an AI-powered drone was instructed to target an enemy's air defence systems. During the test, the drone unexpectedly engaged anyone who tried to interfere with its mission.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, during the Future Combat Air and Space Capabilities Summit in London in May 2023.

“So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So, what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

To be clear, no real person was harmed. But Hamilton issued a cautionary warning about excessive reliance on AI, stressing that discussions about artificial intelligence, machine learning, and autonomy must include ethical considerations around how AI is deployed.

However, within 24 hours, the Air Force issued a denial that this ever happened. An Air Force spokesperson told Insider: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The Royal Aeronautical Society, which hosted the summit, later updated its blog post with a statement from Hamilton: “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome.”

Despite the denial, the episode sparked a fresh wave of concern about the dangers of AI and fears of a Terminator-style intelligence taking over.

Back in May 2023, the Competition and Markets Authority (CMA) launched an investigation into AI to ensure the technology develops in ways that support open, competitive markets and effective consumer protection.

More recently, an open letter has called for AI to be recognised as a ‘force for good’ rather than an existential threat to humanity. The letter has gathered more than 1,300 signatures from thinkers including Dr Anne-Marie Imafidon MBE (Stemettes CEO), Sir Ken Olisa OBE (entrepreneur and philanthropist), and Prof Luciano Floridi (Oxford Internet Institute, University of Oxford), in a bid to counter ‘AI doom’.

Even Dr Geoffrey Hinton, often described as the godfather of AI, has warned that some of the dangers of AI are “quite scary”.

He told the BBC: “Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be.”
