Germany has given us a list of ethical programming guidelines for driverless cars. Since the issues are complex, I thought we could start with the trolley problem.
The Trolley Problem
One variation of the trolley problem has an out-of-control streetcar hurtling toward five people. Standing at a switch, you can do nothing and the five will die. Or you can pull the switch and divert the streetcar onto a track where it will kill one person. Act, and you cause one death; do nothing, and there are five fatalities.
What to do?
The German Guidelines
The report from Germany’s Ethics Commission on Automated Driving says that all human lives are equally valuable. Cars may not distinguish between the old and the young, the disabled and the healthy. In programming a car’s decisions, the main consideration should be doing the least harm.
By that logic, the German answer to the trolley problem seems simple: minimize harm and accept the one fatality. The report’s 20 guidelines, though, show that it is never that easy.
“Less harm” is more complicated than it first appears. Faced with hitting either a child or an adult, an autonomous vehicle (AV) might have an equal chance of causing a fatality. Yet it is possible that a collision would harm a child less than an older person, or a healthy individual less than someone who is ill. Whom, then, should the car strike?
Now think of vehicle design, and assume we create a moral car. It knows to drive off a mountain road and kill its passenger when the alternative is a collision that results in five deaths. But many of us won’t buy the moral car because we give top priority to our own survival. If moral cars don’t sell, we have fewer AVs on the road, and therefore more human-driven accidents, more pollution, and more congestion.
So that takes us to soaring sales for amoral cars. If a college savings plan owns shares in an amoral-car maker, more kids can go to school. And an amoral car could save a surgeon who, having survived an “amoral accident,” goes on to invent a heart valve that saves millions of lives. So are amoral cars okay?
The Moral Machine
You might want to see how you would program your “ethical” car by taking MIT’s Moral Machine quiz (linked at the end of this post). Moving from one dilemma to the next, you make a series of choices. Once you see the analysis of your answers, you become very much aware of your “moral” biases.
Similar to the moral machine quiz, this video presents several dilemmas:
Our Bottom Line: Externalities
For self-driving cars, ethical programming has become a reality. The countless choices it requires will create positive and negative externalities that affect pedestrians, passengers, urban planners, insurance companies…and you and me.
For now though, the trolley problem can be the beginning.
My sources and more: A 99% Invisible podcast reminded me that it was time to return to self-driving cars. As for the German report, Business Insider had a good summary, while two more recent articles on ethical programming are here and here.