This is the future: How a driverless car will decide who dies in accidents

Some things take a little longer to reach those inhabiting the southern tip of this continent. For many in South Africa, a future where driverless cars dominate the transport sector hasn’t progressed beyond science fiction. The conventional thinking goes that even if they did work elsewhere, SA’s lawless road-users would quickly kill them off. So why bother with key questions like the destruction of motor insurance as we know it; the end of a now flourishing panel-beating sector; or the impact on hospitals when far fewer accident victims clog up their wards? If you’re in the not-in-my-lifetime camp, this article is sure to change your mind. Driverless cars are coming. In developed economies, a mass takeover by these highly sophisticated computers on wheels is accepted as a question of when, not if. Motor manufacturers are taking the debate to the next level – what happens when the self-drive car is forced into making ethical decisions? This is the future. One that raises all kinds of new questions. – Alec Hogg

A Toyota Prius which has been converted by Google into a driverless car. Note the LIDAR sensor (light detection and ranging) on the rooftop.

By Keith Naughton

(Bloomberg) – The gearheads in Detroit, Tokyo and Stuttgart have mostly figured out how to build driverless vehicles. Even the Google guys seem to have solved the riddle. Now comes the hard part: deciding whether these machines should have power over who lives or dies in an accident.

The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?

Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University’s Center for Automotive Research, which is programming cars to make ethical decisions to see what happens.

“This issue is definitely in the crosshairs,” says Chris Gerdes, who runs the lab and recently met with the chief executives of Ford and GM to discuss the topic. “They’re very aware of the issues and the challenges because their programmers are actively trying to make these decisions today.”

Automakers and Google are pouring billions into developing driverless cars. This week Ford said it was moving development of self-driving cars from the research lab to its advanced engineering operations. Google plans to put a “few” of its self-driving cars on California roads this summer, graduating from the test track.

Social Robots

Cars can already stop and steer without help from a human driver. Within a decade, fully automated autos could be navigating public roads, according to Boston Consulting Group. Cars will be among the first autonomous machines testing the limits of reason and reaction in real time.

“This is going to set the tone for all social robots,” says philosopher Patrick Lin, who runs the Ethics and Emerging Sciences Group at California Polytechnic State University and counsels automakers. “These are the first truly social robots to move around in society.”

The promise of self-driving cars is that they’ll anticipate and avoid collisions, dramatically reducing the 33,000 deaths on U.S. highways each year. But accidents will still happen. And in those moments, the robot car may have to choose the lesser of two evils: swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck, or stay put and place the driver in mortal danger.

“Those kinds of questions do have to be answered before automated driving becomes a reality,” Jeff Greenberg, Ford’s senior technical leader for human-machine interface, said during a tour of the automaker’s new Silicon Valley research lab this week.

Asimov Laws

Right now, ethicists have more questions than answers. Should rules governing autonomous vehicles emphasize the greater good — the number of lives saved — and put no value on the individuals involved? Should they borrow from Asimov, whose first law of robotics says an autonomous machine may not injure a human being or, through inaction, allow a human to be harmed?

“I wouldn’t want my robot car to trade my life just to save one or two others,” Lin says. “But it doesn’t seem to follow that it should hold our life über alles, no matter how many victims you’re talking about. That seems plain wrong.”

That’s why we shouldn’t leave those decisions up to robots, says Wendell Wallach, author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.”

“The way forward is to create an absolute principle that machines do not make life and death decisions,” says Wallach, a scholar at the Interdisciplinary Center for Bioethics at Yale University. “There has to be a human in the loop. You end up with a pretty lawless society if people think they won’t be held responsible for the actions they take.”

Disobey Laws

As Wallach, Lin and other ethicists wrestle with the philosophical complexities, Gerdes is conducting real-world experiments. This summer on a racetrack in northern California, he’ll test automated vehicles programmed to follow ethical rules to make split-second decisions, such as when it’s appropriate to disobey traffic laws and cross a double yellow line to make room for bicyclists or cars that are double-parked.

Gerdes is also working with Toyota to find ways for an autonomous car to quickly hand back control to a human driver. Even such a handoff is fraught with peril, he says, especially as cars do more and driving skills degrade.

Ultimately, the problem with giving an autonomous automobile the power to make consequential decisions is that, like the robots of science fiction, a self-driving car still lacks empathy and the ability to comprehend nuance.

“There’s no sensor that’s yet been designed,” Gerdes says, “that’s as good as the human eye and the human brain.”

