The Ethics of Driverless Technology

The news of the first fatality involving Tesla’s Autopilot technology, followed by reports of two further crashes allegedly involving the self-driving system, has reanimated the debate surrounding driverless vehicles – where does the responsibility lie?  In each case, the level of driver versus technological fault is still under investigation.  However, commentators have already begun falling on either side of the debate: either the driver was recklessly inattentive or the technology was dangerously flawed.

There is no doubt that Tesla’s mass release of its Autopilot technology was a riskier move than smaller-scale, controlled testing like Google’s Self-Driving Car Project.  However, Tesla’s chief executive Elon Musk attached a caveat to the technology’s use on its release.  He stated that “driver[s] cannot abdicate responsibility” when using the self-driving system.  Like many other driver-assist technologies currently available, the Autopilot system is there to assist, not replace, the driver.  So it seems simple: the driver is still responsible, so any incident is their fault.

But when presented with an actual fatality, the allocation of blame is far more complicated.  Tesla have openly acknowledged that their system – currently in beta mode – is by no means flawless.  Does that mean the company has to take some responsibility for using their customers as test drivers on public roads?  In response to the most recent incident, Musk stated that Tesla’s “taking the heat for customer safety” was the “right thing to do”.

A Driverless Future?

But what does this mean for the future of driverless technology?  And when cars do become fully autonomous, as Tesla is aiming for, how do you assign culpability?  To the driver, for not remaining focused on the road? Or the manufacturer, for creating the software controlling the car?

Driverless Accident Scenario

The hypothetical scenario often used to demonstrate this moral quandary is as follows.  A driverless car is travelling towards a crossing.  Along one side of the road is a busy pedestrian path; on the other, a wall.  A mother and child suddenly step onto the crossing without looking, and there isn’t enough time to stop without hitting, and possibly killing, them.  If the car swerves to avoid them, it will either go into the crowd on the pavement – possibly killing pedestrians there – or into the wall, killing the driver.  In this scenario, how should the car be programmed to react?  There are two main solutions (illustrated in the sketch after this list):

  • Prioritise the driver. The car should respond in the safest way for the driver.  So in this case, hitting the mother and child.
  • Prioritise least loss of life. The car should assess the potential number of casualties, and pick the lowest number.  So here, swerve and hit the wall.
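To make the difference concrete, here is a minimal, purely illustrative Python sketch of the two policies.  The manoeuvre names and casualty estimates are hypothetical assumptions made for this example only; they are not taken from any real driving system.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and a (hypothetical) casualty estimate for it."""
    name: str
    driver_casualties: int
    other_casualties: int

    @property
    def total_casualties(self) -> int:
        return self.driver_casualties + self.other_casualties

def choose_manoeuvre(outcomes, policy):
    """Pick an outcome under one of the two policies described above."""
    if policy == "prioritise_driver":
        # Safest option for the driver, regardless of casualties outside the car.
        return min(outcomes, key=lambda o: o.driver_casualties)
    if policy == "least_loss_of_life":
        # Lowest total estimated casualties, even if that includes the driver.
        return min(outcomes, key=lambda o: o.total_casualties)
    raise ValueError(f"unknown policy: {policy}")

# The crossing scenario from the text (all numbers are illustrative guesses).
scenario = [
    Outcome("brake and hit the mother and child", driver_casualties=0, other_casualties=2),
    Outcome("swerve into the crowded pavement", driver_casualties=0, other_casualties=3),
    Outcome("swerve into the wall", driver_casualties=1, other_casualties=0),
]

print(choose_manoeuvre(scenario, "prioritise_driver").name)   # brake and hit the mother and child
print(choose_manoeuvre(scenario, "least_loss_of_life").name)  # swerve into the wall

Either policy is trivial to write down; the hard question is who gets to choose the decision rule, and who is liable for its consequences.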

From a business perspective, the first option makes the most sense.  Who would want to buy a car that is programmed (in certain situations) to injure or kill you?  But choosing this option could be interpreted as condoning greater injury or loss of life.  And who should decide which option is programmed in?  The company?  The driver?  The government?  Because whoever makes that decision may ultimately be held liable for the results.

Legally Grey

A less extreme legal and ethical quandary surrounding driverless technology is that of smaller, but far more common, incidents.  For example, who would be held responsible for a car not stopping in time at a junction?  In this case, at least, some parallels could be made with current driver assist technology like Suzuki’s Radar Brake Support.  While this system is designed to assist – and ultimately override – the driver, it does not remove their responsibility.

The reality of driverless vehicles is that they are statistically safer.  Unlike a human, a computer can’t be distracted, tired or drunk, so its attentiveness is always at 100%.  Since its launch, Google’s self-driving fleet of 23 vehicles has test driven a total of 1,725,911 miles.  In that time it has been involved in only 25 incidents.  That equates to one incident every 69,036 miles.  Of these, the majority were minor and just one was the fault of the self-driving car.  However, the issue for companies like Tesla and Google, who are trying to push the self-driving revolution, is that, despite all the statistical evidence, the average driver is going to take a lot more convincing that a machine can interpret the road better than they can.
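The incident-rate figure above is simply the total test mileage divided by the number of incidents, as a quick check using only the numbers quoted in this paragraph shows:

# Incident rate from the figures quoted above.
total_miles = 1_725_911
incidents = 25
print(f"one incident every {total_miles / incidents:,.0f} miles")  # one incident every 69,036 miles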
