Imagine for a moment that you are driving a car and suddenly see debris in the road. If you veer to the side into a crowd of pedestrians, you can avoid injuring yourself, but only by endangering them.
Harm yourself or someone else? As the debris approaches, the decision is yours to make.
Where are we going? To self-driving cars that make decisions for us.
AV Ethical Contradictions
In a survey on the most moral way to program AVs (autonomous vehicles), the results were somewhat inconsistent. Asked whether, in an emergency, the car should kill its occupant or 10 pedestrians, most respondents said it should sacrifice the occupant.
Similarly, many consistently favored minimizing casualties whenever the choice was between protecting the passengers and sparing people outside the car.
That changed, though, when survey respondents were personally affected. Then they said they would not buy a car programmed to sacrifice their own safety or their family’s well-being.
As a result, autonomous vehicle designers face philosophical decisions with economic consequences. How they reconcile our aversion to self-harm with the need for “the greater good” will affect sales.
Our Bottom Line: Externalities
For AVs, ethical programming is far from abstract. By creating positive and negative externalities, the decisions that AV programmers make will affect pedestrians and passengers, regulators, insurance companies…the list is endless.
For now, though, when you see one of Google’s self-driving cars, you can wonder whether it is ethical.