Monday, July 27, 2020

Trade-Offs Versus Optimization: Is Everything An Optimization Problem?

In my work on ethics, I'm what I think of as a "trade-off" person rather than an "optimize" person. In the informal sense, this means that I see conflicting and competing considerations and values all around, and I think the ethical task is often to figure out how to prioritize among various considerations, instead of thinking that the ethical task is to figure out what is good and then bring about as much of that as possible.

If your first thought is "Wait, how are those really different?" then you are right in lock step with what a lot of other people are thinking.

To me, on the face of it they seem very different. For example: when the pandemic led to conditions where not everyone could be treated because there weren't enough resources, ventilators, and so on, one system of decision-making would be to "maximize overall health by directing care toward those most likely to benefit the most from it." For example, give the resources you have to the people most likely to survive, in ways that maximize the additional healthy years they will live. This is an optimizing strategy as it identifies a good -- healthy years of life to come -- and frames choices as maximizing that good.
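To make the optimizing strategy concrete, here is a minimal sketch of triage-by-expected-benefit. The patient names, survival probabilities, and projected healthy years are invented for illustration; the point is only the shape of the procedure: score each person by expected healthy years gained, rank, and allocate scarce resources from the top.

```python
# Illustrative sketch (hypothetical numbers): allocate scarce ventilators
# to maximize expected healthy life-years, i.e. survival probability
# times projected healthy years if the patient survives.

def allocate(patients, ventilators):
    """Rank patients by expected benefit; treat the top-ranked ones."""
    ranked = sorted(
        patients,
        key=lambda p: p["p_survive"] * p["healthy_years"],
        reverse=True,
    )
    return [p["name"] for p in ranked[:ventilators]]

patients = [
    {"name": "A", "p_survive": 0.9, "healthy_years": 30},  # score 27.0
    {"name": "B", "p_survive": 0.8, "healthy_years": 10},  # score  8.0
    {"name": "C", "p_survive": 0.5, "healthy_years": 40},  # score 20.0
]

print(allocate(patients, 2))  # → ['A', 'C']
```

Notice that the procedure never asks *why* someone's projected healthy years are low -- disability, social injustice, poverty all get flattened into the same number, which is exactly the worry raised next.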

A problem with this optimizing strategy is that it leads to discriminatory effects. "Healthy years" of life is usually understood as meaning years of life without a disability, so, other things being equal, a person with a disability would be less likely to be treated than a non-disabled person. Because of social injustice and oppression, Black people in the US often have worse health than white people; if they were therefore less likely to have good outcomes, they would be less prioritized for treatment. Poor people are much more likely to have underlying health conditions and thus would be less likely to be treated.

The "trade-off" perspective, on the other hand, frames the problem as one in which there are a variety of considerations that have to be weighed and balanced. Producing good effects in the sense of future years of life might be one consideration, but fairness and justice would also be a consideration. You might decide to use subjective measures of quality of life in which having a disability does not make a life less good; you might explicitly bring anti-racism into the picture. You have to come up with a way of proceeding that weighs multiple considerations against one another. It might be complicated, and you might have to use your judgment.

When I have talked about these issues in classes or at conferences, defending a trade-off approach, occasionally someone will say to me: "If you frame it properly, everything is an optimization problem." I take it they mean something like this: while maximizing healthy years is one way of optimizing, it is not the only way; whatever value you think is good, you can run a maximizing strategy on it. For instance, if you think future years of life, fairness, justice, and equality are all important, you can create some concept like "overall goodness" that incorporates all of these. Then you can just maximize that. So there isn't really any difference; trading-off is not a separate and different kind of thing; the difference is just in what you're trying to maximize.
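The "everything is an optimization problem" move can be sketched mechanically. In this illustration (the value names, weights, and scores are all invented), several values are folded into one "overall goodness" number and that number is maximized. The catch is visible in the code: choosing the weights just is the trade-off judgment, now hidden inside the objective function.

```python
# Illustrative sketch: "consequentialize" a trade-off view by folding
# multiple values into a single weighted "overall goodness" score.
# The weights encode the trade-offs -- they are a moral judgment,
# not a measurement.

WEIGHTS = {"healthy_years": 0.5, "fairness": 0.3, "equality": 0.2}

def overall_goodness(option):
    """Weighted sum of the option's scores on each value (0 to 1)."""
    return sum(WEIGHTS[value] * option[value] for value in WEIGHTS)

options = [
    {"name": "triage_by_benefit",
     "healthy_years": 0.9, "fairness": 0.3, "equality": 0.2},
    {"name": "lottery",
     "healthy_years": 0.5, "fairness": 0.8, "equality": 0.9},
]

best = max(options, key=overall_goodness)
print(best["name"])  # → lottery (0.67 vs 0.58 under these weights)
```

Shift the weights and a different option "wins" -- the maximization machinery is doing none of the ethical work.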

In harmony with this idea, there is a technical result that any set of ethical judgments can be "consequentialized" -- that is, expressed as the result of an optimizing procedure.

So if you were thinking, "Wait, how are those really different?" the answer is that in some deep conceptual sense, maybe they are not really different.

OK. But then I think: what about the other senses -- the ones that are not the deep conceptual senses? Even if you *can* frame your approach in optimizing terms -- should you?

I think the answer to this question is often "No." The details are tricky and probably boring for most people, but here is a short version:

1) Both methods require moral judgment, in the sense of figuring out what is important and how important it is, but "optimizing" has a veneer of objectivity to it, like we're just number-crunching. News flash: we're never just number-crunching. Talking about "trade-offs" reminds us constantly that we're using our human judgment and our values to figure out what to do.

2) "Trade-off" reminds you immediately that no matter what you do, you may have lost something, so that even if you get the balance right, something bad has happened. The language of "optimizing," however, has an unsettling connotation of "everything is all for the best." If you have to prioritize one person over another, and you make a good decision, but the other person dies, do you really want to say "well, that was optimal"? In fact, noticing that it wasn't optimal may prompt you to think or plan differently in the future -- e.g., trying to prevent people from getting sick in the first place.

3) "Optimization" lends itself to methodologies where the inputs are easily measurable. Yes, you can optimize for things like justice and fairness and anti-oppression, in the sense that you can come to a judgment about what to do that honors those values in the way you think best in the circumstances. But, especially given 1), once you're in the optimization frame of mind, it's natural to start thinking that you're going to be more objective, precise, and accurate if you have numbers to put in -- something like, I don't know, estimates of "healthy future years lived." When those don't reflect the values you wanted to use, you'll end up coming to the wrong answer.

The pandemic and our responses to it are full of massively complex, challenging questions: How should we balance protecting our health with the losses that come from lockdowns? How should we balance our valuing of children's schooling against protecting everyone from harm? How far should we go in trying to eliminate COVID as opposed to just flattening the curve?

These questions have no easy answers and that's one reason we're all in dismay and disagreement about them. Talk of optimizing, even if conceptually sound, makes it seem like some of us are right and some of us are stupid, and makes us want to invest in computer science. Talk of trade-offs reminds us: honoring multiple values in complex circumstances is difficult and fraught, and it's values all the way down.