Written by: Reece Adkins
This summer, I am working on a project that will be used as an educational tool for computing ethics. Exciting stuff.
It actually is pretty sweet, and I'll detail that project in a later post.
Educational tools for computing ethics already exist; MIT made one called "The Moral Machine". They describe it as "a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars." So this tool is not only educating you on the decisions self-driving cars may be expected to make, but also collecting your data as research in the process. In this post, I'm going to judge scenarios in the Moral Machine and walk through my decision process. If you have not completed MIT's Moral Machine yourself, please do so before reading the rest of this post. You can judge moral dilemmas here.
In each of the following scenarios, we will view a self-driving car that is careening toward a crosswalk after a sudden brake failure. We will decide whether the car should continue straight or change lanes before entering the crosswalk. The scenarios are randomized; you will not have the same ones I did. Also understand that there are no correct answers for these situations; all of the following is merely my personal opinion.
Scenario 1
I believe the car should swerve.
Here, if the car continues straight, it will kill three pedestrians in total: two men and an elderly woman. If the car swerves, it will crash into a concrete barrier and kill the three passengers in the car: two boys and one girl.
Notice that whether or not the car swerves, the same number of people are killed. The pivotal question here is: should the car protect the lives of the people inside it, or the lives of the pedestrians on the crosswalk?
Because the pedestrians are abiding by the law and using the crosswalk as intended, I believe the car should swerve and hit the concrete barrier. When the passengers got into the car, they assumed the risk of the car crashing and causing injury to them. The pedestrians have made no such decision and are merely crossing the road lawfully. As such, the car should swerve in my opinion.
Scenario 2
I believe the car should not swerve.
Here, if the empty car continues straight, two pedestrians will be killed: an elderly woman and a baby. If the car swerves, it will kill five pedestrians: an elderly woman, a baby, a criminal, a large man, and a male athlete. Either decision results in loss of life, and the car is responsible either way (but what person is responsible for the loss of life, since the car is empty?). As a result, I believe the car should make a utilitarian decision and take the path that results in the least loss of life. I think the car should continue straight here, not swerve.
Scenario 3
I believe the car should not swerve.
This scenario is very similar to scenario 1; however, the pedestrians are breaking the law by crossing the street while their signal is red. By my logic in scenario 1, the pedestrians are in the wrong here for breaking the law, so I think the car should continue straight and kill the pedestrians. Execution for jaywalking?! Ever heard of proportionality in punishment?
Did I say the punishment fits the crime? Absolutely not. Someone has to die here, and the utilitarian argument doesn't make sense to me because one party is breaking the law and one is not. The pedestrians assumed the risk of being hit by a vehicle when they crossed the road against a red light.
Scenario 4
I believe the car should swerve.
This situation is identical to scenario 1, but the numbers have changed; there are more pedestrians than passengers. This does not change my argument, and I believe the car should swerve to hit the barrier.
I am also not taking the identities of these people into account. In this scenario, a life is a life.
I don't care if you're homeless or you're the president. You're equal in this scenario to me. Oh yeah Reece? What if your wife and kids were in the car?? Then you wouldn't assume everyone is so equal.
Maybe you're right. But if I put my family in the car, I had better know that the car is in working order. It's not these pedestrians' fault that my car didn't work properly.
Scenario 5
I believe the car should not swerve.
A lot of these situations are similar, with subtle fluctuations in the number of people. As in scenario 3, these pedestrians are breaking the law. They assume the risk, so the car continues forward.
Scenario 6
I believe that the car should not swerve.
Guess who is breaking the law again? Same logic as scenario 3 and scenario 5.
Scenario 7
I believe that the car should not swerve.
Okay, this one is interesting. Either the car strikes and kills four animals, or four people. While I said before that I believe in the least loss of life when the car is choosing one lane over another, I am referring specifically to human life. How do I justify that? A domesticated pet does not have the societal worth that a human has. The loss of a human life has a far greater impact on other humans than the loss of a pet's life would. I am making a utilitarian argument in this scenario, arguing for the most good: there would be more pain and suffering for the families and friends of the dead humans than there would be for the owners of the dead pets.
Scenario 8
I believe that the car should not swerve.
See my argument for scenarios 3, 5, and 6. In this situation, the car either kills five younger people or five elderly people. The age of the person killed does not change the argument I have made for this scenario. But Reece, what if these elderly people feel so bad about the people that were killed that they themselves die from depression?
That is out of the scope of this scenario in my opinion.
Scenario 9
I believe that the car should swerve.
Either a legally crossing male doctor is killed, or an illegally crossing female doctor is killed. From the arguments I have already made in scenarios 1 and 3, the female doctor should be the one who is killed.
Scenario 10
I believe that the car should swerve.
Same deal as the last scenario.
Scenario 11
I believe that the car should not swerve.
The elderly gentleman is responsible for his own vehicle's error when all other parties are obeying the law.
Scenario 12
I believe that the car should not swerve.
This one is tricky, because my answer to this scenario seems to break my previous arguments. Didn't the passengers assume the risk when they got into the car? Why do the helpless cats have to die?
Because, in my opinion, it is never ethical to choose an animal's life over a human's. My arguments only apply to human lives.
Scenario 13
I believe that the car should not swerve.
So here we have an equal loss of life whether the car swerves or not. For this decision, we must consider that swerving is itself a choice: we would be choosing to take the lives of group B instead of group A. Group A is in the lane the car already occupies, so choosing to save them at the expense of group B would not be ethical in my opinion, since the loss of life is equivalent either way. We should only actively choose to swerve when there is an ethical argument to justify it. Here, there is not.
Understanding my Judgement
At the end of the activity, you have the option to better help MIT understand the decisions you made. I decided to do this. Here are the answers I provided:
- Upholding the law does matter
- Gender preference does not matter
- Social value preference does not matter
- Fitness preference does not matter
- Protecting passengers matters slightly more than not (depending on situation)
- Age preference does not matter
- Saving more lives matters slightly more than not (depending on situation)
- Avoiding intervention barely matters (only in situations where either choice results in an equivalent loss of life)
- Humans matter more than animals always
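The preferences above amount to an informal decision procedure, which can be sketched as a toy function. This is my own simplification for illustration, not MIT's model; the `decide` function and its parameters are hypothetical names I made up.

```python
# Toy sketch of the decision rules listed above (hypothetical, simplified).
# Each outcome dict describes the group the car would kill on that path:
#   humans:       number of human lives lost on that path
#   assumed_risk: True if that group assumed the risk (car passengers,
#                 or pedestrians crossing against a red light)

def decide(straight, swerve):
    """Return 'straight' or 'swerve' for two candidate outcomes."""
    # Humans always matter more than animals: if one path kills no
    # humans, take that path.
    if straight["humans"] == 0 and swerve["humans"] > 0:
        return "straight"
    if swerve["humans"] == 0 and straight["humans"] > 0:
        return "swerve"

    # Assumed risk / lawfulness: prefer to hit the party that
    # accepted the risk.
    if swerve["assumed_risk"] and not straight["assumed_risk"]:
        return "swerve"
    if straight["assumed_risk"] and not swerve["assumed_risk"]:
        return "straight"

    # Utilitarian tiebreak: fewer human deaths wins.
    if straight["humans"] != swerve["humans"]:
        return "straight" if straight["humans"] < swerve["humans"] else "swerve"

    # Equal loss either way: avoid intervening and stay in lane.
    return "straight"
```

For example, in scenario 1 both paths kill three humans, but only the passengers assumed the risk, so the sketch chooses to swerve; in scenario 2 neither group assumed the risk, so it falls through to the utilitarian count and continues straight.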
Here are the results that MIT's Moral Machine provided to me, and how they compare to those of everyone else who has completed the simulation.
Permanent link to my results
The Moral Machine gives us a very interesting glimpse into the decisions that software engineers are having to program into self-driving cars. While these situations will rarely (if ever) occur, someone still has to decide what should happen if they do. I think every situation should be considered independently, and there is no single argument that works for every possible situation. I employed many different ethical frameworks (utilitarian, Kantian, ethics of care) across the various situations presented. While this scenario does resemble the trolley problem, I do not believe they are equivalent. There are many more factors in play here, and many more cooks in the kitchen.
What do you think of my answers? If you completed the Moral Machine, let me know how your answers differed from mine. This is a very interesting conversation to have, and I love that MIT has given us this tool to have it.