Imagine this: a heavy train is speeding down a railway track. 200m ahead, 5 people are tied to the rails; if the train continues, they die. You can’t stop the train, but by pulling a lever you can divert it onto a different track, on which only 1 person is tied. If you divert the train, this person will die, and the other 5 will be saved.
You’ve probably heard of this problem before. The question is: would you divert the train? The natural answer (for me at least) is to divert the train. You’ll kill one person, but you’ll save five in the process; in other words, you choose the option that causes the least damage. Right?
Now imagine a situation exactly like the last, but this time you have no option to divert the train. Instead, you are standing on a bridge above the tracks, next to a very large person. You can watch the train pass under the bridge and let it kill the 5 people, or you can push the large person off the bridge and onto the tracks, stopping the train with his large mass.
In this problem, the outcomes are exactly the same as in the last: 1 person can be killed to save 5, or vice versa. But the chances are you’re more hesitant to act in the second problem.
Some people see the difference between the two problems as the difference between murder and letting someone die. In the first problem, you are letting someone die: by pulling a lever you do nothing physically to the person, and so you kill them only indirectly. In the second, you are actively putting the large man in the way of the train, thereby killing him.
The idea that passively taking one person’s life in the first problem is acceptable, but actively taking one person’s life in the second problem is not, is sometimes referred to as the principle of double effect. This states that it’s acceptable to indirectly cause harm (as a side effect) if the action promotes an even greater good; however, it’s not acceptable to directly cause harm, even in the pursuit of a greater good.
The problem becomes much deeper if you consider the histories and identities of the people involved. If the 5 people on one track had previously been in prison for various crimes (murder, robbery, etc.) and the person on the diverted track were a doctor or a teacher, would your decision be the same? Personally, taking this into account makes the question a thousand times more difficult, because it makes me think about whether all people are equal, despite anything they might have done in the past.
Another way (yes, another way!) of looking at a problem with the same outcomes is to imagine a doctor taking care of 6 patients. One patient has a broken leg; the other 5 require transplants (2 need a lung each, one needs a heart, one needs a kidney and one needs a liver), but no matching donors will be available for the next 6 months, by which point the 5 patients will have died. Do you kill the otherwise healthy patient and use his organs to save the other 5? (My head hurts now… too many decisions…)
Of course, the other option in the first two situations is to do nothing. You can choose not to participate in the situation, in which case you take no responsibility for the events that take place. But would you be ok with turning away and doing nothing?
Would you step in front of the train and kill yourself to save the other 5?
I think I’ll stop there with the questions, because otherwise you’ll be here all day. This is certainly a deep dilemma, one that questions whether the ethics we have built society on are reliable, and even whether they are always right. The interesting thing is that not everyone answers these problems in the same way, and even when they do, their justifications vary from person to person. I don’t think I would be able to live with either decision: both involve taking a life, and therefore, personally, neither of the two options is acceptable to me. Let me know what you would do in the comments section below…
The rise of drones and self-driving cars in today’s society makes the dilemma more relevant than ever before. Should a self-driving car protect the lives of its passengers, even at the expense of a greater number of pedestrians? We want other people’s cars to save as many lives as possible, yet think our own car should protect us at all costs. As 21st-century technology becomes increasingly capable of making moral decisions, understanding our own moral decisions is becoming all the more important.