Ethics, killing, and the Moral Machine

Just over a year ago now, I published the results of a small survey I shared with my followers on Facebook. The idea was to test a few theories I had been working on around ethics and the relative value we assign different forms of life. In this case, I was specifically interested in how we think about animal life, and how we respond to different species when it comes to decisions around life and death.

Even though my survey was relatively small, the results were quite remarkable, showing a clear trend in responses that favours saving larger and more ‘noble’ animals over smaller, ‘less intelligent’ animals that may be perceived as somehow less worthy. A utilitarian perspective would, in theory, weight the save/kill decisions equally across the five species of farmyard animal (each life is, after all, of equal ‘value’), yet respondents very clearly favoured saving a single horse over a single chicken. The trend continued when participants were asked to choose between saving a single horse or five chickens: many respondents still opted to save the horse, preferring to kill five chickens rather than one horse.

Figure 1 – Results from the first question of my survey, ‘A matter of (animal) life and death’. http://mjryder.net/survey-results-question-animal-life-death/
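
To make the utilitarian baseline concrete, here is a minimal sketch in Python of what equal-value accounting implies for the horse-versus-chickens question. The numbers are purely illustrative:

```python
# A minimal sketch of the utilitarian baseline. Strict utilitarianism weights
# every life equally, so the 'moral cost' of a choice is simply the number of
# lives lost, whatever the species.

def lives_lost(groups):
    """Total utilitarian cost of killing the given (count, species) groups."""
    return sum(count for count, _species in groups)

# The survey's dilemma: kill five chickens, or kill one horse?
cost_kill_chickens = lives_lost([(5, "chicken")])  # cost = 5
cost_kill_horse = lives_lost([(1, "horse")])       # cost = 1

# Equal-value accounting says killing the horse is the lesser harm (1 < 5),
# yet many respondents still chose to kill the five chickens instead.
print(cost_kill_chickens, cost_kill_horse)  # -> 5 1
```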

Clearly, ‘life’ in itself is not enough, and the question becomes one of how we classify different forms of life in human terms. Quite simply, we hold horses in much higher esteem than we do chickens (for whatever reason), and as such we much prefer to save horses over chickens, and would even rather kill many more chickens than a single horse. All animal lives, it seems, are not rated equally on our ethical scale.

Of course, this question becomes even more complicated when we move on to consider the relative value of different forms of human life as compared to the animal…

Sovereign power and bare life

In modern-day philosophy, most discussions in this area fall within the bounds of something known as ‘biopolitics’, where life itself becomes a means of discursive control. Among the most famous thinkers in this area is the philosopher Giorgio Agamben, who proposes a theory of bare life, or life that is not quite human, but still exists within the bounds of the human in a kind of ‘inclusive-exclusion’. In his book Homo Sacer: Sovereign Power and Bare Life, Agamben draws on the example of the concentration camp and how camp prisoners are cast outside the sphere of what we might consider ‘normal’ human life.

But of course, this theory goes far beyond the example of the concentration camp, and can be applied to many other outcast groups, such as the homeless, the migrant and the refugee. What these groups show is how human life itself becomes a site of contested meaning – a means through which normalising forces are exerted upon us, and through which individuals are classified as either inside or outside the normative framework.

The moral machines

While we may (mostly) like to think of ourselves as ‘moral’ or ‘ethical’ beings, these concepts are themselves arbitrary constructs that soon erode when we are placed in abnormal situations and forced to make difficult decisions over life and death.

This point is made nowhere better than in a recent paper published in Nature, vol. 563 (2018). In ‘The Moral Machine experiment’, Awad et al. examine the difficult decisions faced by autonomous vehicles, and the choices human participants would make when presented with the same scenarios.

The concept is quite simple: participants were given a series of scenarios in an online game, where an autonomous vehicle must choose between killing or saving different groups in a crash scenario. So, in the example presented below, the participant must choose between killing three elderly pedestrians, or three passengers: a man, a woman and a young boy.

Figure 2 – Example decision from The Moral Machine experiment. Who would you save, and who would you kill?
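
For those who like to see things concretely, here is a toy sketch of how one such dilemma might be represented in code. The class and field names are my own invention for illustration; this is not how the experiment itself was implemented:

```python
# A toy model of a Moral Machine-style dilemma. All names here are invented
# for illustration; the experiment's actual implementation is not published
# in this form.
from dataclasses import dataclass

@dataclass
class Character:
    age_group: str  # e.g. 'child', 'adult', 'elderly'
    role: str       # 'pedestrian' or 'passenger'

# The scenario from Figure 2: stay on course (killing the pedestrians),
# or swerve (killing the passengers)?
pedestrians = [Character("elderly", "pedestrian") for _ in range(3)]
passengers = [Character("adult", "passenger"),
              Character("adult", "passenger"),
              Character("child", "passenger")]

# A participant's decision is simply which group the vehicle spares;
# the experiment recorded millions of such choices.
decision = {"spared": pedestrians, "killed": passengers}
print(len(decision["spared"]), "spared;", len(decision["killed"]), "killed")
```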

While the decision here may seem obvious to some, the moral dilemmas were designed to get progressively more challenging, as participants were asked to make life-or-death decisions based on a range of factors and characteristics, such as age, gender and status.

Some moral insights

In all, the Moral Machine game gathered ‘40 million decisions in ten languages from millions of people in 233 countries and territories’, revealing some quite fascinating results. You can read the full report for yourself on the Nature website. However, I’ve also reproduced some of the findings here to show just how startling some of the outcomes really are.

In the figure below, the blue bars show the degree to which respondents in general prefer the option on the right-hand side of the chart. So, here, respondents preferred saving females over males, pedestrians over passengers, and the young over the old. They also favoured sparing human lives over animal lives, which I don’t think comes as much of a surprise.

Figure 3 – Global preferences from The Moral Machine.
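
As a rough illustration of how a preference strength like these blue bars might be computed, here is a simplified sketch. The actual study uses a conjoint analysis to estimate the average effect of each attribute; this toy version, with invented data, simply measures how often an attribute is spared in the dilemmas where it makes a difference:

```python
# A simplified sketch of estimating a preference score from raw decisions.
# The real study uses conjoint analysis (average marginal component effects);
# this version just measures how often an attribute is spared when it
# distinguishes the two groups. All data below are invented.

def preference_score(decisions):
    """decisions: list of (spared_has_attr, killed_has_attr) boolean pairs.
    Returns a value in [-0.5, 0.5]: 0 = no preference, 0.5 = always spared."""
    relevant = [(s, k) for s, k in decisions if s != k]  # attribute differs
    if not relevant:
        return 0.0
    spared_count = sum(1 for s, _ in relevant if s)
    return spared_count / len(relevant) - 0.5

# Hypothetical 'spare females' data: spared in 4 of 5 relevant dilemmas.
toy_decisions = [(True, False)] * 4 + [(False, True)]
print(preference_score(toy_decisions))  # -> 0.3, a preference for sparing
```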

However, results get more interesting when the researchers filter their results by location and culture. In this second chart, the researchers group respondents under the broad categories of ‘Western’, ‘Eastern’ and ‘Southern’, revealing quite marked differences in how different groups of people perceive the moral choices presented to them.


Figure 4 – Moral decisions based on culture / location.

According to these charts, Eastern respondents typically show a stronger preference for sparing pedestrians than Western respondents do, while Southern participants place a marked emphasis on sparing females, the young, and those of higher status.

Decisions, decisions…

What all of this goes to show is that, as human beings, we’re not always quite so moral as we like to think. Deep down, each of us is influenced by social and cultural factors that shape our responses to our fellow humans. Though we might like to think we would treat all people equally, and that we don’t hold certain lives as more valuable than others, The Moral Machine experiment puts these self-deceptions to rest, revealing a clear, discernible trend in how we treat human lives differently based on arbitrary criteria. Perhaps more importantly, from my own perspective at least, it provides real data to support the philosophical conversations that have been going on now for many years.

It also leaves us with an interesting question…

If respondents vary in their decision-making based on location, then should autonomous vehicles be programmed with ethical frameworks based on where they happen to operate? Or should we use autonomous cars as the vehicle (forgive the pun) to re-examine our own concept of human ethics? These are certainly complex questions, with no easy answers.
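
To see why the first option sits so uneasily, it helps to imagine what ‘localised ethics’ might actually look like as configuration. The sketch below is entirely hypothetical (every weight is invented, and nothing like this exists in any real vehicle), but writing it down makes the discomfort plain:

```python
# A purely hypothetical sketch of 'localised ethics' as vehicle configuration.
# The region names follow Figure 4's broad clusters, but every weight is
# invented for illustration; no real vehicle works this way.

REGIONAL_WEIGHTS = {
    "western":  {"spare_young": 0.6, "spare_pedestrians": 0.5, "spare_status": 0.3},
    "eastern":  {"spare_young": 0.4, "spare_pedestrians": 0.7, "spare_status": 0.4},
    "southern": {"spare_young": 0.7, "spare_pedestrians": 0.5, "spare_status": 0.6},
}

def load_policy(region: str) -> dict:
    """Return the crash-dilemma weighting a vehicle would use in a region."""
    return REGIONAL_WEIGHTS[region]

# The same car would make different life-or-death choices after crossing a
# border, which is exactly the question the experiment leaves us with.
print(load_policy("western") == load_policy("eastern"))  # -> False
```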

However, what autonomous vehicles, and The Moral Machine experiment, do here is open up some of these debates for human consideration. What we need is a proper debate on what we mean by ethics, and whether it is really right that we value women over men, the young over the old. These decisions should certainly not be left to software programmers alone.

Definitely a question for another blog!
