With the introduction of self-driving vehicles, the developers behind their decision-making AI have had to rethink some age-old ethical dilemmas, specifically: who should the car choose to save in the event of a crash?

An article published by Nature: International Journal of Science details the results of the Moral Machine experiment, which confronted over two million participants with a variety of hypothetical moral dilemmas as faced by an autonomous vehicle, its passengers and nearby pedestrians.

For instance, participants were presented with the graphic shown below and asked which of the two choices would be preferable in the event of brake failure: the death of three elderly pedestrians illegally crossing the road, or the death of the young family in the car.

Through the recording of almost 40 million decisions in this experiment, the researchers focused on nine distinct factors:

- sparing humans versus pets
- staying on course versus swerving
- sparing passengers versus pedestrians
- sparing more lives versus fewer
- sparing men versus women
- sparing the young versus the elderly
- sparing legal pedestrians versus jaywalkers
- sparing the fit versus the less fit
- sparing those with higher social status versus lower

Across all of the responses, no matter which country or demographic they came from, the strongest preferences were to spare human lives rather than pets, to spare more lives rather than fewer, and to spare younger lives rather than the elderly (in that order).

While this may seem obvious, the decision to implement these preferences into autonomous driving software isn't as straightforward.
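To see why, consider what "implementing a preference" would even look like. The sketch below is a toy model only, not anything any manufacturer uses: it scores two hypothetical crash outcomes using the three strongest global preferences reported above (humans over pets, more lives over fewer, younger over older). Every class name, function and weight here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Character:
    """A person or animal affected by the hypothetical crash."""
    is_human: bool
    age: int  # years

def outcome_score(spared: list[Character]) -> float:
    """Higher score = the outcome spares more of what respondents preferred.
    Weights are arbitrary placeholders, chosen only to encode the ordering
    humans > pets, more lives > fewer, younger > older."""
    score = 0.0
    for c in spared:
        score += 10.0 if c.is_human else 1.0    # humans weighted over pets
        score += 1.0                            # each life spared counts
        score += max(0.0, (100 - c.age) / 100)  # younger lives score higher
    return score

# The article's example: spare the young family, or the elderly pedestrians?
family = [Character(True, 35), Character(True, 33), Character(True, 5)]
pedestrians = [Character(True, 75), Character(True, 78), Character(True, 80)]
print(outcome_score(family) > outcome_score(pedestrians))  # → True
```

Even this trivial sketch exposes the problem the article raises: someone has to pick the weights and the age curve, and those choices encode contested moral judgements.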
The ability to detect an animal rather than a human and judge the value of life accordingly can be relatively simple, but when it comes to comparing the value of human lives based on attributes such as age, gender or social status, the line becomes rather blurry. For instance, if we're to give preference to children over adults, and adults over the elderly, we'll need to draw some definitive boundaries around those age brackets, and that decision isn't an easy one to make on a global scale.

Real world impact

The Moral Machine experiment has been running since 2016, providing us with the most comprehensive poll of what people around the world think should happen in certain clear-cut situations, but reality isn't as clean. In the experiment, the certainty of a character's death is known, as are their relative age, social status and more, but much of this would be either impossible or unethical to determine in reality.

The article cites the 2017 rules put in place by the German Ethics Commission on Automated and Connected Driving as the only example of an official guideline on the issue, but those rules are at odds with the Moral Machine's findings. For instance, the Commission's rule on human versus animal life is clear, prioritising humans in all circumstances, but its rules are unclear on when to sacrifice few to spare many, and they explicitly prohibit distinctions based on any personal feature such as age, gender or social status.

With the release of these findings, we can hope that the ethicists, developers and manufacturers responsible for self-driving cars will have a better perspective on who to preference in these situations, but the moral dilemmas are far from solved.