Late last year, we wrote a post about self-driving cars and some of the inherent problems on the programming side of this new technology. If a programmer designs a car to react in a certain way when an accident is impending or even predicted, and the car then gets into a different accident, would it be fair to say the vehicle was actually programmed to get into an accident? It’s a tricky ethical dilemma for self-driving cars and, really, for the future of technology in general. When do programming decisions bleed into the law?
But today, let’s talk about a more straightforward issue with self-driving cars right now: they are out on the road in limited numbers, and they are getting involved in accidents. Though another driver is almost always at fault, the fact that these self-driving cars are being involved in wrecks is naturally raising their profile and drawing attention to the potential problems they create.
One such accident in Arizona — where Uber is using a self-driving fleet after California blocked the company from using autonomous vehicles — was caused by another driver who failed to yield and struck the self-driving Uber car. No serious injuries were reported.
An investigation is underway, and it will be interesting to see what it turns up. How do we assign liability in these cases? And how will we deal with this in the future, when self-driving cars will, presumably, be everywhere?
Source: Washington Post, “Uber puts self-driving cars back on the road following crash,” Steven Overly, March 27, 2017