How much do you trust a computer to drive a car without failing?
It's an important question, but I wonder if it's really the one we should be asking. Instead, I think the question ought to be, "Can a computer drive as well as a human?" And most of us would probably agree that the majority of humans are terrible drivers (not us personally, of course, but other people), so in this respect the bar for self-driving cars is actually set quite low. Approached that way, they almost can't go wrong.
But there's one roadblock standing in the way of self-driving cars that I keep coming back to, and I really don't envy the engineers who are faced with tackling it. The trolley problem.
The Trolley Problem (AKA: Kobayashi Maru)
The trolley problem is a thought experiment devised by Philippa Foot in 1967, and it goes like this:
There's a runaway train barreling down the tracks toward a team of five railroad workers. There's no way to alert the workers, who are absorbed in repairing the tracks, and they've no idea the train is coming. The train is, without question, going to kill them.
But you could throw a switch that would divert the train into a siding before it gets to them. The only trouble is, there's a worker in the siding, too. Do you throw the switch so only one worker is killed, or do you refrain from action and leave the five workers to go under the wheels instead?
Firstly, there's obviously no right answer to this question. What it's really asking is what type of person you are. Could you take dramatic action that directly results in one death, or do you prefer to avoid any involvement in such difficult situations, even if doing so would indirectly cause a five-fold increase in fatalities?
The trolley problem has evolved over the years to build on the moral quandary. For example, what if the guy in the siding deliberately put the other five in danger in the first place? Would that change your feelings on action/inaction? And, of course, it's been adapted to great sci-fi purpose in the Kirk-thwarted Kobayashi Maru training exercise in Star Trek, to train starship captains in facing no-win situations.
Ultimately, you and I will almost certainly never have to solve this ethical conundrum. It's not really meant to be solved anyway, so much as to highlight the relativity of morality. And should we somehow find ourselves in such a bizarre situation, we can answer for our action/inaction emotionally, and that answer will likely be a fine one. A self-driving car, however, can make no such argument.
The Ethics of Engineering
A self-driving car is travelling down the road when an accident takes place in front of it. The car's sensors determine that there's not enough room to stop. Does it crash into the swerved car in front that's carrying five people, or does it mount the pavement where a single pedestrian is walking?
The car itself has no ethical dilemma to worry about here. It's just another set of parameters that are easily programmable, and as far as the car's concerned, no different to stopping at a red light or turning left. The onus, instead, is on the engineers and programmers who designed the car and its auto-driving system in the first place.
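To make that concrete, here's a deliberately crude sketch of what "just another set of parameters" might look like. Everything in it, the manoeuvre names, the occupancy counts, the severity weights, is invented for the sake of argument; no real driving system is this simple, but the point stands: whoever picks the numbers has already answered the trolley problem on the car's behalf.

    # Purely illustrative: the manoeuvres, occupancy counts and severity weights
    # are invented for the sake of argument, not taken from any real driving system.
    from dataclasses import dataclass

    @dataclass
    class Manoeuvre:
        name: str
        people_at_risk: int   # how many people this option endangers
        severity: float       # made-up scale: 0.0 (harmless) to 1.0 (severe)

    def expected_harm(option: Manoeuvre) -> float:
        # The "ethics" is nothing more than this arithmetic; whoever chose the
        # weighting decided long ago who the car should hit.
        return option.people_at_risk * option.severity

    options = [
        Manoeuvre("brake hard into the swerved car", people_at_risk=5, severity=0.4),
        Manoeuvre("mount the pavement", people_at_risk=1, severity=0.9),
    ]

    choice = min(options, key=expected_harm)
    print(f"Programmed choice: {choice.name}")

Change the severity numbers and the car "decides" differently; the decision was made at a keyboard, long before the crash.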
Thus far, the trolley problem hasn't required a formal resolution, but someone might finally have to answer it, in a very literal fashion, if we're to have self-driving cars on the road. It could be argued (somewhat ironically) that the problem is circumvented by programming the car never to get so close to another vehicle that it can't stop in time. Stopping will always be possible, and that's probably pretty fair. But the trolley problem can never be ruled out entirely, so this is really a matter of semantics rather than a solution.
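For what it's worth, the claim that "stopping will always be possible" is just a headway calculation. The sketch below uses the textbook formula, reaction distance plus v squared over twice the deceleration; the reaction time, deceleration and safety margin are assumed round numbers, not figures from any real vehicle.

    # Illustrative only: a textbook stopping-distance check, not a real
    # autonomous-driving safety rule. The constants are assumed round numbers.
    def stopping_distance(speed_mps: float,
                          reaction_time_s: float = 1.0,
                          deceleration_mps2: float = 6.0) -> float:
        # Distance covered while the system reacts, plus braking distance v^2 / (2a).
        return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration_mps2)

    def required_gap(speed_mps: float, margin_m: float = 5.0) -> float:
        # The clear road the car must keep ahead for stopping to remain an option.
        return stopping_distance(speed_mps) + margin_m

    for speed in (13.4, 22.4, 31.3):   # roughly 30, 50 and 70 mph in metres per second
        print(f"{speed:4.1f} m/s -> keep at least {required_gap(speed):5.1f} m clear")

The catch is that the gap the car needs isn't always the gap the world gives it: another vehicle can swerve into that space at any moment, which is exactly when the trolley problem reappears.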
Nor is it a problem that engineers and designers have much experience of facing. Someone who builds an ordinary car could be thought of as the person standing beside the railroad tracks, waiting by the switch. It's not for them to cover every crazy thing some weirdo might do with the car once it's in their possession. As long as it's built to certain safety standards and provides the expected functions of a car, their responsibility is very, very limited.
I'm not sure that same claim can be made for a self-driving car that's facing the trolley problem head-on. The car isn't making decisions, not even bad decisions, panicked decisions, or decisions with unintended consequences; it's simply following its programming, and it won't have to explain itself afterwards.
The engineer and coder might, and therein lies a brand-new take on this 50-year-old thought experiment. How will self-driving cars solve the trolley problem, and should they be allowed on the roads if they can't?