Should a self-driving car ever run people over on purpose?

In a world of self-driving cars, collisions will be rare, but occasionally unavoidable. How do we program them to ensure that they are making the most ethical decisions?

[Photo: This image provided by Google shows a very early version of Google's prototype self-driving car. | Google/AP/File]

Imagine that you’re driving through a residential area when your brakes fail. Directly in your path is a group of five jaywalkers. The only place to swerve is onto the sidewalk, where a pedestrian is waiting for the signal to change.

Who do you run over, the five jaywalkers or the one law-abiding citizen?

Such stark choices are rare, if they occur at all, and in a world of human drivers they would be made in milliseconds. But in a future where cars drive themselves, the choices will be coded into the operating systems of millions of cars, highlighting a paradox of a technology that is expected to save countless lives: The cars may also have to be programmed to run people over.

“There is a common misconception that because it’s an automatic system it’s automatically infallible, and will simply brake in time when a critical situation develops,” says Leon Sütfeld, the lead author of a paper on the ethics of autonomous vehicles published Wednesday in Frontiers in Behavioral Neuroscience. “This unfortunately just isn’t realistic. A self-driving car is subject to the same laws of physics as a manually driven car.”

Writing code for autonomous vehicles will require us to take our moral intuitions – those nebulous and often contradictory feelings that color our perceptions of human behavior – and package them into precise instructions for millions of cars we set loose on our roads. That raises what philosophers call Big Questions: Can you quantify morality? Whose set of morals do we use?

Globally there are an estimated 1.25 million traffic fatalities each year, with 40,000 in the United States. And in the US, 94 percent of traffic deaths are attributable to human error. Elimination of human error on our roads would be a boon to public safety.

But before the public is comfortable having software take the wheel, consumers and regulators will need assurances that the cars are programmed with the moral responsibility that comes with a driver's license. This risk-management programming is not just for the one-in-a-million Trolley Problem event where a crash is unavoidable, but for the routine operation of the vehicle.

“I just don’t see a lot of these forced-choice scenarios occurring in actual traffic,” says Noah Goodall, a researcher at Virginia’s Department of Transportation who specializes in the ethics of autonomous vehicles. “The idea with this kind of work is to figure out how people assign values to different objects.”

In an effort to measure those values, Mr. Sütfeld, a doctoral candidate at the Institute of Cognitive Science at the University of Osnabrück, Germany, and his colleagues asked 105 participants to don head-mounted virtual-reality displays that placed them in the driver’s seat of a virtual car traveling down a two-lane road. A variety of obstacles, including adults, children, dogs, goats, trash cans, and hay bales, were placed in the lanes, and drivers had to pick which obstacle to strike and which one to spare.

The participants were given either one second or four seconds to decide. The one-second trials showed little consistency, suggesting that participants didn't have enough time to deliberately choose what to strike. But when the time constraints were eased, a pattern emerged. In the four-second trials, drivers were more likely to spare the lives of humans over animals, children over adults, pedestrians over motorists, and dogs over livestock and wild animals.

These consistent choices, say the researchers, could be used to develop a one-dimensional “value-of-life” scale that could be used to determine whose safety autonomous vehicles should prioritize. Such a scale has an advantage over more sophisticated models, such as those that rely on neural networks, in that it is straightforward and transparent to the public, potentially leading to a quicker acceptance of driverless vehicles.
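To see what such a scale might look like in practice, here is a minimal, purely illustrative sketch in Python of a one-dimensional value-of-life lookup used to pick the "lesser harm" option. The categories and scores are hypothetical assumptions for demonstration, not figures reported by the researchers.

```python
# Illustrative sketch only: a hypothetical one-dimensional "value-of-life" scale.
# The categories and scores below are assumptions, not values from the study.
# Higher score = higher priority to spare, echoing the orderings seen in the
# four-second trials (humans over animals, children over adults, and so on).
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.4,
    "goat": 0.3,
    "trash can": 0.0,
    "hay bale": 0.0,
}

def choose_lane_to_strike(lane_a, lane_b):
    """Given the obstacles in two lanes, return the lane whose total
    value-of-life score is lower, i.e. the 'lesser harm' option under
    this simplistic model."""
    harm_a = sum(VALUE_OF_LIFE.get(obstacle, 0.0) for obstacle in lane_a)
    harm_b = sum(VALUE_OF_LIFE.get(obstacle, 0.0) for obstacle in lane_b)
    return "A" if harm_a <= harm_b else "B"

# Example: an adult in lane A versus a dog in lane B -> strike lane B.
print(choose_lane_to_strike(["adult"], ["dog"]))  # prints "B"
```

Part of the appeal of a table like this, as the researchers note, is that it can be published and inspected, unlike the opaque weights of a neural network.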

But a strict hierarchy may not be enough to capture the moral complexity of balancing risks while driving.

“If human well-being is always a priority, does that mean a self-driving car may not avoid a dog that runs into the street, if there is an ever so little chance of mild injury to a human in the process?” asks Sütfeld. “We would argue that there needs to be a system that is able to make reasonable decisions even in complex situations, and categorical rules often fail this requirement.”
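One way to read that objection: a categorical rule compares categories, while a risk-aware system weighs probabilities against severities. The short sketch below, with made-up probability and severity numbers, illustrates the difference for the dog-in-the-street case Sütfeld describes; none of the numbers or function names come from the paper.

```python
# Illustrative contrast, hypothetical numbers throughout.

def categorical_rule(human_at_risk):
    """'Human well-being always wins': never swerve if any human could be harmed."""
    return "do not swerve" if human_at_risk else "swerve"

def risk_weighted(p_injury_human, severity_human, p_injury_dog, severity_dog):
    """Swerve only if the expected harm to the human from swerving is lower
    than the expected harm to the dog from staying the course."""
    expected_harm_swerve = p_injury_human * severity_human
    expected_harm_stay = p_injury_dog * severity_dog
    return "swerve" if expected_harm_swerve < expected_harm_stay else "do not swerve"

# A tiny chance of a mild human injury (0.01 * 0.1) versus a near-certain
# serious hit to the dog (0.95 * 0.4): the categorical rule stays the course,
# while the risk-weighted rule swerves.
print(categorical_rule(human_at_risk=True))   # "do not swerve"
print(risk_weighted(0.01, 0.1, 0.95, 0.4))    # "swerve"
```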

Iyad Rahwan, a professor at the Massachusetts Institute of Technology who researches the ethics of self-driving cars, cautions that no formula will be truly satisfying for everyone.

“There is too much focus on identifying the correct answer to the rare ethical dilemmas that a car might face,” says Professor Rahwan. “I think there is no right answer in an ethical dilemma, almost by definition. Instead, we need to come up with a balance of risks that is acceptable. We need a social contract that constitutes an acceptable solution to an ethical dilemma that is unsolvable in any objective sense.”
