
The ethical dilemmas of autonomous driving

7th December 2020
Lanna Deamer

The trolley problem is a classical philosophical dilemma that is used to illustrate the moral conundrum surrounding how to program autonomous vehicles to react to different situations. However, this particular thought experiment may be just the tip of the iceberg. Here Kostas Poulios, Principal Design and Development Engineer at Pailton Engineering, takes a closer look at the ethical dilemmas of fully autonomous vehicles.

In July 2020, Tesla CEO Elon Musk boldly announced that his company would have fully autonomous vehicles ready by the end of the year. In the UK, the government has pledged to have fully driverless cars on British roads by 2021. To the untrained eye, it may seem like a driverless world is just around the corner.

Let’s be clear about what we are talking about here. The SAE defines six levels of driving automation, from level zero (no automation) to level five (full automation). Level one covers driver assistance features such as cruise control. The consultation the UK government announced this summer, on Automated Lane Keeping Systems (ALKS), concerns level three, where the driver must remain ready to regain control in an emergency. Level five, where no human intervention is involved at all, is a fundamentally different ball game.

Aside from any technological or regulatory hurdles on the path to full autonomy, level five raises profound ethical dilemmas. The trolley problem is one thought experiment that is inevitably raised in this context. A classic philosophical dilemma, the trolley problem asks respondents how they would act if a runaway vehicle were on course to harm a group of pedestrians.

In the first scenario, you do not intervene and allow events to run their course, harming two or more individuals. In the second, you intervene to steer the vehicle onto a different path, where only one person is harmed. Different variations abound, but the basic structure of the ethical dilemma is roughly the same.

One problematic assumption underlying much of the discussion is the notion of a universal moral code. If we could just find the right answer, or at least agree on one, then we could program the AV to respond accordingly.

Unfortunately, as a major study published in the scientific journal Nature recently demonstrated, there is no universal moral code. Researchers surveyed people in forty different countries and, using variations of the trolley dilemma, showed that our moral judgements and intuitions are culturally contingent, not universal. Put simply, people in different parts of the world reach different moral conclusions.

Furthermore, these simplistic thought experiments might not be the most appropriate analogies when it comes to AVs. Unlike in the trolley problem, AVs will be programmed to react in situations defined by high levels of uncertainty. The programmers cannot determine in advance what the right thing to do will be, because the answer will be context-specific.

AVs will use deep learning algorithms, and rather than automatically having the right answer to a given moral dilemma, they will learn responses through exposure to thousands of situations and scenarios. So, in an emergency where an AV has to decide whether to swerve to avoid something in the road, for example, it is not making a single isolated decision but a series of sequential decisions.
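To make that idea concrete, the short Python sketch below is purely illustrative; the observation fields, thresholds and action names are hypothetical and are not taken from any real autonomous driving stack. It simply contrasts a sequence of decisions, each made from fresh, uncertain estimates, with the idea of a single pre-programmed answer.

import random
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_ahead_prob: float  # estimated probability that something is blocking the lane
    clear_left_prob: float      # estimated probability that the adjacent lane is clear

def choose_action(obs: Observation) -> str:
    # Stand-in for a learned policy: a real system would use a trained model,
    # not hand-written thresholds like these.
    if obs.obstacle_ahead_prob < 0.3:
        return "continue"
    if obs.clear_left_prob > 0.8:
        return "swerve"
    return "brake"

def drive(timesteps: int = 5) -> None:
    # The outcome emerges from a sequence of decisions, each made from
    # fresh, uncertain estimates, not from one isolated choice.
    for t in range(timesteps):
        obs = Observation(obstacle_ahead_prob=random.random(),
                          clear_left_prob=random.random())
        print(f"t={t}: {obs} -> {choose_action(obs)}")

if __name__ == "__main__":
    drive()

Even in this toy version, what the vehicle ends up doing depends on the whole run of noisy observations and repeated choices, which is why fixing the "right" answer to a moral dilemma in advance is not how such systems behave.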

As an engineer, I spend my time designing and building bespoke steering parts for specialist vehicles. From electric buses to remote controlled military vehicles, I’ve dealt with enquiries for all types of vehicle. I’m ready for enquiries about autonomous vehicles, but I’m not expecting many any time soon. The world of fully autonomous vehicles is an exciting prospect, but profound ethical dilemmas remain. These dilemmas are important enough that they should not remain the exclusive domain of engineers and programmers.
