Nearly two months ago I listened to the RadioLab podcast episode “Driverless Dilemma.” It discusses the classic ethics thought experiment known as “The Trolley Problem” in the era of self-driving cars.
As a designer and engineer, I have to say that the entire premise of this sinister, Rube Goldberg thought experiment—while perhaps interesting for philosophers and brain researchers—would simply never happen in the real world. The first flaw is that humans making split-second decisions in the real world don’t have the time to weigh all possible variables and outcomes.
Even if we did, the outcome of any decision or action can never be 100% certain; we aren’t Minority Report precogs. How could I know for certain that pushing a guy onto the tracks would actually stop the trolley and save the other five? I know my physics, and a trolley weighing 30 tons moving at 35 mph has so much momentum that it will barely register when hitting a human. And it is quite possible that diverting a speeding trolley through an unplanned track switch could cause it to derail.
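To put a rough number on that momentum claim, here is a back-of-the-envelope check. The figures are assumptions for illustration (30 US short tons for the trolley, 35 mph, an 80 kg person at rest), and the collision is modeled as perfectly inelastic:

```python
# Rough momentum check for the trolley claim. Assumed figures:
# 30 short tons ~= 27,215 kg; 35 mph ~= 15.65 m/s; 80 kg person at rest.
TON_KG = 907.185       # one US short ton in kilograms
MPH_TO_MS = 0.44704    # miles per hour to meters per second

trolley_mass = 30 * TON_KG          # ~27,215 kg
trolley_speed = 35 * MPH_TO_MS      # ~15.65 m/s
person_mass = 80.0                  # kg

# Perfectly inelastic collision: momentum is conserved, so the
# combined speed afterward is M*v / (M + m).
speed_after = trolley_mass * trolley_speed / (trolley_mass + person_mass)
fractional_loss = 1 - speed_after / trolley_speed

print(f"Trolley slows by {fractional_loss:.2%}")
```

The fractional speed loss works out to roughly m/(M+m), well under one percent, so the trolley really would “barely register” the impact, let alone stop.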
But more to the point, the podcast fails to account for the sequence of events that led up to that particular set of circumstances. Driverless cars have a huge number of sensors looking as far out from the car as practical, specifically to detect and react to objects so they never get into such a situation in the first place. A well-designed autonomous car would have sensed a problem and braked or changed direction long before some “lesser of evils” ultimatum ever came up.
In situations where something happens too fast for the car to react, the fault lies not with the car but with the person or object that somehow avoided detection until it was too late. How do you think a human driver would have fared in that exact scenario? At least as badly, if not worse. The real litmus test here is to take all of the real-world collision data where a vehicle injured or killed someone and use each of those initial conditions as test scenarios for the self-driving vehicle. (Of course, you have to first filter out all of the ones with a drunk/distracted/sleepy/altered driver, since those situations have no self-driving analogue.) I can guarantee you that the self-driving vehicle is going to injure and kill significantly fewer people than the actual outcomes did.
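The filtering step of that litmus test can be sketched in a few lines. The record schema and the `impairment` field below are hypothetical, not from any real crash dataset:

```python
# Sketch of the proposed litmus-test filter, using a hypothetical
# crash-record schema (the field names are illustrative assumptions).
crashes = [
    {"id": 1, "impairment": "drunk"},
    {"id": 2, "impairment": "none"},
    {"id": 3, "impairment": "distracted"},
    {"id": 4, "impairment": "none"},
]

# Keep only crashes with an attentive, unimpaired driver: those are
# the scenarios with a meaningful self-driving analogue.
EXCLUDED = {"drunk", "distracted", "sleepy", "altered"}
test_scenarios = [c for c in crashes if c["impairment"] not in EXCLUDED]

print([c["id"] for c in test_scenarios])  # [2, 4]
```

Each surviving record’s initial conditions (positions, speeds, visibility, and so on) would then seed a simulation scenario for the autonomous vehicle.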
I generally enjoy RadioLab, and I can appreciate them wanting to tell a good story. But falsely conflating this fictional thought experiment with the real-world behavior of technology that will ultimately save hundreds of thousands of lives every year around the world does a disservice to their listeners.