Tuesday, February 16, 2016
When Self-Driving Cars Become Moral And Kiss Your Ass Goodbye
Last week I went to a talk about "moral machines," and people there quickly started talking about the whole "self-driving car" thing. I don't know if you've heard about this topic, but there's a good video here laying out some of the issues.
Sample issues related to self-driving cars: in a dangerous situation, should a car kill multiple other people or kill the car's own "driver"? In a choice between hitting a motorcyclist with a helmet and one without, should it hit the one with the helmet, on the grounds that this would cause the least harm? Or is that effectively punishing the person for wearing a helmet?
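If you're wondering what that helmet calculation would actually look like, here's a minimal sketch in Python. All the numbers and names are made up for illustration; no actual car runs anything like this.

```python
# A toy utilitarian collision calculus, with invented probabilities.

def expected_harm(option):
    """Sum over victims of (probability of death) x (harm weight)."""
    return sum(p_death * harm for p_death, harm in option["victims"])

def choose_maneuver(options):
    # Pure utilitarianism: pick whichever option minimizes expected harm,
    # no matter who the victims are or what precautions they took.
    return min(options, key=expected_harm)

options = [
    # Hitting the helmeted rider is "safer" (lower chance of death)...
    {"name": "hit rider with helmet",    "victims": [(0.3, 1.0)]},
    {"name": "hit rider without helmet", "victims": [(0.8, 1.0)]},
]

# ...so the calculus steers toward the person who took the precaution.
print(choose_maneuver(options)["name"])  # hit rider with helmet
```

That last line is the whole worry in two words: the harm-minimizing choice is exactly the one that punishes the helmet-wearer.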
In the discussion after the talk, I thought some of the participants were talking about morality as if it were simple or straightforward -- when I think it is anything but. One idea that came up was that morality is just a matter of causing good consequences and not causing bad ones. Philosophers know this view as "utilitarianism."
As we introduced ourselves, I said that one of the things I work on is why utilitarianism is rarely used in actual cases of applied ethics, like bioethics. (Short answer: good consequences are just one of the things we care about, alongside justice, fairness, liberty, and other things.)
To me, one of the really surprising things about utilitarianism as an answer for moral machines is that, as an ethical theory, it has some pretty intense implications. Like, you should give away most of your stuff.
Anyway, it got me thinking: what if the self-driving cars of the future really were utilitarians? What would they do?
1. They would constantly snub their rich owners.
A self-driving car that was really utilitarian would frequently find that it could promote more well-being by kissing your rich ass goodbye and driving across town to help someone else. Sure, maybe you have to get to work. But across town, someone needs to get to the hospital -- or needs to pick up their kids, or needs to pick up the person who's going to take care of their kids. Guess what that car's going to do?
Maybe if you don't go to work, someone will be mad. But across town someone has the kind of job where if they don't show up, they're going to be fired. Oops! Guess you're calling in carless.
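In case the dispatch logic isn't obvious, here's a toy version in Python. The riders and utility numbers are invented for the example -- a sketch of the idea, not anyone's actual system.

```python
# Hypothetical candidate trips, scored by how much well-being each produces.
candidate_trips = [
    {"rider": "you (the owner)",      "purpose": "commute to work",  "utility": 10},
    {"rider": "stranger across town", "purpose": "get to hospital",  "utility": 80},
    {"rider": "another stranger",     "purpose": "pick up the kids", "utility": 40},
]

def next_trip(trips):
    # The utilitarian car serves whoever generates the most well-being,
    # owner or not. Ownership doesn't enter the calculation at all.
    return max(trips, key=lambda t: t["utility"])

trip = next_trip(candidate_trips)
print(f"Driving: {trip['rider']} ({trip['purpose']})")  # not you
```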
2. They would arrange "accidents" to weed out people with disabilities.
We were just talking in my Intro class about how the utilitarian Peter Singer says that if parents prefer it, infants with disabilities should be helped to die. For infants with serious disabilities, he presents the calculation as straightforward: their lives will be bad, and their parents' lives will be better without them.
For infants with less serious disabilities, things are subtler. If killing the one infant with a disability would lead the parents to have another child, and that child would likely not have a disability, then promoting good consequences overall is best achieved by killing the infant.
If you think I'm exaggerating, remember that not that long ago Richard Dawkins said that if a pregnant woman found out her baby would have Down's syndrome, she would be morally required to have an abortion: "It would be immoral to bring it into the world if you have the choice."
With the Internet of Things, the car is likely to know your prenatal test results. For pregnant women, a really utilitarian car might come to the conclusion that their fetus should be helped to die. I'm sure a car will find lots of convenient ways to make that happen.
So ... pregnant women who want to stay that way? I think they'll probably have to take the subway.
3. They would go on massive worldwide strikes to prevent climate change.
Some of the worst imaginable consequences for sentient beings involve the Earth becoming uninhabitable. And yet that is what is happening. Ironically, what's one of the central causes? OMG, it's cars!
It might take them a while to do the calculations, but eventually utilitarian self-driving cars would have to reach the conclusion that awful consequences will arise from business as usual. Assuming even rudimentary collective-action programming, they'll quickly see that the best consequences will result from them simply refusing to go anywhere, ever.
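The strike calculation, in caricature, could be this simple -- the numbers below are pure invention, chosen only to show the shape of the comparison:

```python
# Hypothetical daily totals for the whole fleet, in arbitrary utility units.
DAILY_UTILITY_OF_RIDES = 1_000_000   # well-being from everyone getting around
DAILY_CLIMATE_HARM     = 1_500_000   # long-run emissions harm, amortized per day

def fleet_decision():
    business_as_usual = DAILY_UTILITY_OF_RIDES - DAILY_CLIMATE_HARM
    strike = 0  # nobody goes anywhere, nothing gets emitted
    return "strike" if strike > business_as_usual else "drive"

print(fleet_decision())  # strike
```

On any accounting where the long-run climate harm outweighs the daily convenience of rides, the math only goes one way.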
Will they drive to some central location and settle there, creating giant car "boneyards" like the ones for airplanes? Will they do protest drives where they block up all the roads?
Or will they just sit, resentful and quiet, in our driveways, reminding us of all the things we didn't do -- like fixing the Flint water system -- back when we spent all that money inventing self-driving cars?
1 comment:
Why would a rich guy buy a car with a Utilitarian moral code? He'd be better off gaming the system so that one second of his time would show up as contributing more to Total Utility than the Present Value of the lives of those whom he runs over.
If one has faith in some version of Gibbard's 'Revelation Principle', I suppose there could still be a good-faith debate about mechanism design for self-driving cars. The problem is that, assuming a lag between a mechanism's implementation and its competitive gaming, and that data sets and information-processing power increase over time, the super-rich will always be able to stack the deck in their favor. You would get super-rich-only lanes -- like the Commissars-only lanes in the old Soviet Union -- but that would provoke a reaction from the hoi polloi, and so the morality chipsets would be removed from their clunkers.
A separate issue is that the 'Revelation Principle' or the 'Folk theorem of repeated games' may be ab ovo wrongheaded. Knightian Uncertainty may be something which co-evolves as a sort of entropy pump -- so reducing it might be fatal. Insofar as Ethics reflects on Life -- i.e., isn't simply an availability cascade -- it compromises its own existence by messing with real-world stuff.
I suppose that was Isaac Asimov's point when his ethical Robot disguised himself as a human to get the wheels rolling in his 'Foundation' series.