Tuesday, December 30, 2014

The Ethics of Autonomous Cars

The Wired.com article we read in class discussed several scenarios regarding the ethics of programming autonomous cars to crash into specific targets in the event of an unavoidable accident. The goal of such programming is to minimize the total harm a crash causes. The first scenario involved an autonomous car programmed to choose between crashing into a Volvo SUV or a Mini Cooper. The article leaned towards programming the vehicle to crash into the Volvo SUV, since it is the heavier vehicle, will better absorb the impact of the crash, and is known for its passenger safety. This scenario raises the question of whether it is "fair" or ethically correct to program autonomous cars to target specific vehicles in an accident, even when the intent is to minimize harm.

I believe that autonomous cars should not be programmed to crash into particular vehicles in the course of an accident; doing so is ethically wrong and unfair to everyone involved. Although targeting the larger vehicle may seem like the most reasonable option, there is no way to determine how many people are in each target vehicle, or how old they are. The Mini Cooper is smaller than the Volvo SUV, but the SUV might be carrying a family of six or seven, including young children or even infants, while the Mini Cooper carries a single driver. In that case harm is not minimized at all; more people are put in danger of being severely injured or killed. Since SUVs are designed to transport larger families, it is quite likely that the Volvo SUV will hold more people than the Mini Cooper, so programming the autonomous car to target the larger vehicle will often endanger more lives, not fewer. Crashing into the larger vehicle also puts the occupants of the autonomous car itself in greater danger than crashing into the smaller one would. I believe that every vehicle on the road should have an equal opportunity at safety, and an equal chance of not being hit in an accident.

The second scenario again deals with an autonomous car facing an imminent accident, this time choosing between two targets: a motorcyclist wearing a helmet and a motorcyclist without one. As in the previous situation, the purpose of programming the car to make this decision is to minimize harm, which here would mean crashing into the motorcyclist with the helmet. To begin with, I do not believe autonomous cars should be programmed to crash into specific targets at all. However, if a decision had to be made, I would say the car should target the motorcyclist without the helmet. The rider who made the responsible decision to protect themselves with a helmet should not be the one targeted in an accident; the rider who is most likely breaking the law should be the one penalized. Although more injury would be inflicted on the unhelmeted rider, targeting them would encourage helmet use in the future, which would in turn minimize harm in the long run. Targeting the helmeted motorcyclist would encourage riders to do just the opposite: not wear a helmet.

The article also discusses programming autonomous cars to make these decisions with a random-number generator. I believe this is a much better approach than programming the vehicles to crash into specific targets. As the article explains, human driving and accidents already involve constant randomness; we are surrounded by luck, both good and bad. A car that makes a random choice in the course of an accident would behave much like a human driver does today, and it would avoid the selective targeting that could lead to several controversial issues. A minimal sketch of such a random tie-breaker appears below.
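To make the idea concrete, here is a short Python sketch of what a random tie-breaker might look like. It is purely illustrative: the function name, the trajectory labels, and the interface are all hypothetical, not part of any real autonomous-driving system.

import random

# Hypothetical sketch, not a real autonomous-driving API: when a crash
# is unavoidable and several trajectories remain, pick one uniformly at
# random instead of ranking targets by vehicle type.
def choose_crash_trajectory(trajectories):
    """Pick an unavoidable-crash trajectory uniformly at random.

    A uniform choice gives every potential target the same odds,
    mirroring the "equal opportunity at safety" idea above, rather
    than encoding a built-in preference for SUVs or helmeted riders.
    """
    if not trajectories:
        raise ValueError("no trajectories to choose from")
    return random.choice(trajectories)

# Example: three possible impact targets, each equally likely.
options = ["volvo_suv", "mini_cooper", "roadside_barrier"]
print(choose_crash_trajectory(options))

The design point is simply that random.choice gives every remaining option the same probability, so no class of vehicle or rider is singled out in advance. One of the controversial issues that selective targeting raises concerns responsibility, as Alexander Karasulas asks in the article: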

"If the driver is not making control decisions, should the driver be responsible for any outcomes at all?" ~ Alexander Karasulas

I strongly believe that if the driver is not in control of the vehicle, they should not be held responsible for the outcome in any scenario. If the driver is in control of the car and affects the outcome of an accident, then they should certainly be held responsible for the damage they inflict. However, if a machine is in control and the autonomous car has been programmed to make certain decisions, the "driver" should not be held responsible; in fact, they should not even be called the "driver," since they were never in charge of the vehicle or the decisions it made in the accident. The autonomous car was being run by programmed software, not by the judgment of a human being.
