We recently came across a very interesting article in Fortune, and it gets at a certain “Minority Report” element of self-driving cars.
As we all know, self-driving, or autonomous, vehicles have been in development for many years, and their promise is clear: fewer accidents, fewer injuries for vehicle occupants and, generally speaking, safer roads. Much of the focus is placed on removing the human element from driving. But what is rarely discussed is the human element in these vehicles’ programming.
Consider the following hypothetical situation: a self-driving car is traveling 40 miles per hour down a highway, blocked on all sides by other vehicles. The car in front of it is carrying cargo, and by a stroke of bad luck, that cargo falls into the self-driving vehicle’s path. So what happens? Does the car slam on the brakes and get rear-ended by the vehicle behind it? Does it swerve into one of the vehicles beside it? Or does it simply crash into the cargo?
And more importantly: how did the programmers tell the vehicle to act in such a situation?
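To make that question concrete, here is a deliberately simplified, purely hypothetical sketch in Python of what “telling the vehicle how to act” can amount to. Every name and number below is invented for illustration; real autonomous-vehicle software is vastly more complex. The point is only that the “choice” the car appears to make at 40 miles per hour is really a weighted comparison someone wrote long before the trip began.

```python
# Hypothetical sketch only: a toy rule for choosing a crash response.
# All option names and "harm" numbers are made up for illustration.

OPTIONS = {
    "brake_hard":   {"occupant_risk": 0.6, "other_risk": 0.5},  # risk being rear-ended
    "swerve_left":  {"occupant_risk": 0.3, "other_risk": 0.7},  # sideswipe the car on the left
    "swerve_right": {"occupant_risk": 0.3, "other_risk": 0.7},  # sideswipe the car on the right
    "hit_cargo":    {"occupant_risk": 0.8, "other_risk": 0.1},  # strike the fallen cargo
}

def choose_response(options, occupant_weight=0.5, other_weight=0.5):
    """Pick the option with the lowest weighted harm score.

    The weights are the ethically loaded part: whoever sets them decides,
    in advance, whose risk counts for how much.
    """
    def score(risks):
        return (occupant_weight * risks["occupant_risk"]
                + other_weight * risks["other_risk"])

    return min(options, key=lambda name: score(options[name]))

if __name__ == "__main__":
    # The "decision" is printed here, but it was really made at programming time.
    print(choose_response(OPTIONS))
```

However the weights are chosen, the response is settled in code by a human ahead of time, not improvised by a driver in the moment.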
Many legal experts are pondering this very difficult question. If the self-driving car gets into an accident in this type of situation, did the programmers effectively “premeditate” the crash by deciding in advance how the vehicle would respond? Self-driving cars could help in many ways, but complicated legal questions like this one need to be answered first.
Source: Fortune, “Can You Crash An Autonomous Car Ethically?,” Andrew Nusca, Nov. 16, 2016