With the introduction of self-driving vehicles, the developers behind their decision-making AI have had to rethink some age-old ethical dilemmas, specifically, whom the car should choose to save in the event of an unavoidable crash.
An article published by Nature: International Journal of Science details the results of The Moral Machine experiment, which confronted over two million participants with a variety of hypothetical moral dilemmas as faced by an autonomous vehicle, its passengers and nearby pedestrians.
For instance, participants were presented with a graphic depicting such a scenario and asked which of the two choices would be preferable in the event of brake failure: the death of three elderly pedestrians illegally crossing the road, or the death of the young family in the car.
Through the recording of almost 40 million decisions via this experiment, the researchers focused on nine distinct factors:
- sparing humans versus pets
- staying on course versus swerving
- sparing passengers versus pedestrians
- sparing more lives versus fewer
- sparing men versus women
- sparing the young versus the elderly
- sparing legal pedestrians versus jaywalkers
- sparing the fit versus the less fit
- sparing those with higher social status rather than lower
Across all of the responses, no matter which country or demographic they came from, the strongest preferences were sparing humans over pets, sparing more lives over fewer, and sparing the young over the elderly (in that order).
While this may seem obvious, the decision to implement these preferences in autonomous driving software isn’t as straightforward. Detecting an animal rather than a human and weighting the decision accordingly may be relatively simple, but when it comes to comparing the value of human lives based on attributes such as age, gender, or social status, the line becomes rather blurry.
For instance, if we are to prioritise children over adults, and adults over the elderly, we will need to draw definitive boundaries around these age brackets, and that decision isn’t an easy one to make on a global scale.
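To make the difficulty concrete, here is a purely illustrative sketch, not any real vehicle’s logic, of how the experiment’s three strongest preferences might be encoded as a ranked comparison. The `Group` class, the age-bracket boundaries, and the tie-breaking order are all hypothetical assumptions; the point is that the bracket cut-offs (18 and 65 below) are arbitrary choices someone would have to defend.

```python
from dataclasses import dataclass

@dataclass
class Group:
    """A hypothetical group of potential crash victims."""
    humans: int      # number of human lives in the group
    pets: int        # number of animal lives in the group
    mean_age: float  # average age of the humans, if it could even be known

def age_bracket(age: float) -> int:
    """Arbitrary, hypothetical brackets: choosing these boundaries
    is itself the contested ethical decision the article describes."""
    if age < 18:
        return 0  # child: highest priority
    if age < 65:
        return 1  # adult
    return 2      # elderly: lowest priority

def choose_group_to_spare(a: Group, b: Group) -> Group:
    """Apply the three strongest preferences in order of strength:
    humans over pets, more lives over fewer, younger over older."""
    # 1. Spare humans over pets
    if (a.humans > 0) != (b.humans > 0):
        return a if a.humans > 0 else b
    # 2. Spare more lives over fewer
    if a.humans != b.humans:
        return a if a.humans > b.humans else b
    # 3. Spare the younger age bracket
    if age_bracket(a.mean_age) != age_bracket(b.mean_age):
        return a if age_bracket(a.mean_age) < age_bracket(b.mean_age) else b
    return a  # tie: the ranked preferences express no further view
```

Even this toy version exposes the problem: moving the elderly cut-off from 65 to 70 silently changes who is spared, and nothing in the survey data says where that line belongs.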
Real-world impact
The Moral Machine experiment has been running since 2016, providing us with the most comprehensive poll of what people around the world think should happen in certain clear-cut situations, but the reality isn’t as clean.
In the experiment, the certainty of a character’s death is known, as is their relative age, social status and more, but much of this would either be impossible or unethical to determine in reality.
The article cites the 2017 rules put in place by the German Ethics Commission on Automated and Connected Driving as the only example of an official guideline on the issue, but the rules are at odds with the Moral Machine’s findings.
For instance, the German Ethics Commission’s rules on human versus animal life are clear, prioritising humans in all circumstances, but the rules are unclear on when to sacrifice few to spare many, and they explicitly prohibit any distinction based on personal features such as age, gender or social status.
With the release of these findings, we can hope that the ethicists, developers and manufacturers responsible for self-driving cars will have a better perspective on whom to prioritise in these situations, but the moral dilemmas are far from solved.