Sunday, October 24, 2010

Life, Death, and Procrastination

The recent New Yorker review of The Thief of Time starts with the arresting story of a Nobel prize-winning economist who can't seem to get it together to mail a box of clothes from India, where he is living, to the United States.  He just puts it off and puts it off.  He never gets it done. Finally someone helps him out by taking it back with him.

Nobel prize-winners:  they're just like us!

The book is about the philosophy of procrastination, a phenomenon that cries out for some philosophical reflection if ever there was one.  As the reviewer, James Surowiecki, explains, procrastination is puzzling in part because it involves "not doing what you think you should be doing" -- an idea that is confusing in itself.

Socrates thought it was impossible not to do what you thought you should be doing.  That is, if you chose to do something, it was because you thought it was the best thing to be doing overall.  If that thing seemed ultimately boneheaded -- like failing to mail the box day after day -- that wasn't because you failed to do what you thought you ought to do, it was because you were mistaken about what you ought to have done.  You must have thought it was best not to mail the box, since you didn't.  And you might have made a mistake in your thinking about what was best to do. But that's not the same as failing to do what you did think was best. Which is what Socrates thought was impossible.

It's a powerfully counter-intuitive conclusion. Because it sure feels like what you're doing when you procrastinate is failing to do what you think you ought to do.  And yet it's not like procrastinating makes you feel better, like you're having a better time.  Usually it makes you feel worse. So WTF is going on?

One way to understand procrastination is through its relation to what is called "hyperbolic discounting," which is basically our tendency to put off painful experiences and to grab small pleasures now rather than wait for bigger ones later.  We are biased toward the present.  An hour at the dentist today, or two hours at the dentist in a few months?  We put it off.  Offered 100 dollars in a year, or 110 in a year and a day, we choose to wait for the 110.  But choosing between 100 dollars today and 110 tomorrow?  We want the 100 dollars now.
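
For the numerically inclined, the preference reversal in the 100-versus-110 example is easy to sketch.  The functional forms below are the textbook ones (hyperbolic value A / (1 + kD), exponential value A · d^D); the particular parameter values k and d are made up for illustration, not empirical estimates:

```python
def hyperbolic_value(amount, delay_days, k=1.0):
    """Perceived present value under hyperbolic discounting: A / (1 + k*D)."""
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, d=0.999):
    """Perceived present value under exponential discounting: A * d**D."""
    return amount * d ** delay_days

# $100 now vs $110 tomorrow: the hyperbolic discounter grabs the $100.
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)

# $100 in a year vs $110 in a year and a day: the very same discounter
# now prefers to wait -- the preference reverses as the dates approach.
assert hyperbolic_value(100, 365) < hyperbolic_value(110, 366)

# The exponential discounter never reverses: the ratio of the two values
# is the same at every horizon, so the choice is the same at every horizon.
assert exponential_value(100, 0) < exponential_value(110, 1)
assert exponential_value(100, 365) < exponential_value(110, 366)
```

The reversal is the signature of the hyperbolic curve: it falls off steeply near the present and flattens out at a distance, whereas the exponential curve shrinks by the same proportion every day.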

I've always thought hyperbolic discounting and procrastination must have something to do with mortality.  I figured that if someone asked me, "Why put off going to the dentist, when you know you're going to have to?" part of any honest answer would have to be "Well, maybe I'll die before I have to go."  Hey, it's always possible.

Of course, the fact that you might die before you have to do some unpleasant thing, or before you get a chance to enjoy some far-off benefit, does make some "discounting" absolutely rational, and not puzzling at all.  If you could factor the likelihood of death into your calculation, you could work out just how much putting things off, and just how much impulsivity, makes sense.

What makes "hyperbolic discounting" a kind of irrationality isn't that you are biased toward the present; it's that you're way too biased toward the present.  That is, for most of us, the odds of dying before the root canal are so slim that it makes no sense at all to put it off.  So to make sense of our reasoning here, we'd have to assume that most people dramatically misjudge the likelihood of their own deaths.
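
"Way too biased" can be made concrete with a back-of-the-envelope calculation.  These are my numbers, not the post's, and the 1-in-1000 annual mortality figure is just a rough order-of-magnitude assumption for a healthy adult:

```python
# How likely would death have to be for "100 now over 110 tomorrow" to be
# a rational mortality discount?  We need (1 - p_die_overnight) * 110 < 100.
p_required = 1 - 100 / 110   # about 0.09: a 9% chance of dying before tomorrow

# Compare a rough illustrative figure: a 1-in-1000 chance of dying within
# a *year*, spread evenly over its days.
p_actual_daily = (1 / 1000) / 365

# The preference only makes sense if you overstate the overnight risk
# by several orders of magnitude.
print(p_required / p_actual_daily)  # roughly a 33,000-fold overstatement
```

So a mortality story could in principle rationalize the choice, but only for someone who acts as though tonight were extraordinarily dangerous.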

But now we get to the very weirdest thing of all about understanding hyperbolic discounting this way:  it suggests that we err on the side of death.  That is, our choices make sense only under the assumption that our imminent death is much more likely than it actually is.

This strikes me as extremely strange.  Because if most people err in thinking about their own deaths, it's by assuming death is never going to happen, or is way, way off in the future.  They don't err on the side of thinking they're about to die.

This means one of two things must be true.  Either the way we deal with our own mortality is so strange that we can psychologically overestimate its likelihood and underestimate its likelihood at the same time, or, contrary to what I'd thought, hyperbolic discounting and procrastination have nothing to do with mortality and the possibility of death.

Both are weird.  It's weird to think that underneath it all, and despite our appearance of obliviousness, we have our own mortality frequently present to mind.  But it's also weird to think that putting things off is something that an immortal being would have trouble with.

Or maybe I just think that because when I think of immortal beings I think of gods.  Maybe you can be immortal and a procrastinator. It's funny to think of the new post-humans, god-like, unable to feel pain, living forever, but unable to get their shit together to get their boxes to the post-office, to buy birthday presents on time, and to file their income taxes.

Sunday, October 17, 2010

No, I'm Sorry, Doing Moral Philosophy Is Not Like Falling Off A Log

Call it the Wikipedification of ideas.  The slogan is "Well, how hard can it be?"

I got nothing against Wikipedia, which I use all the time.  Using Wikipedia doesn't have to lead to the Wikipedification of ideas.  But some of the basic elements of Wikipedia ... well, let's just say that some people seem to get a little overly enthusiastic about them.  Like the idea that everyone has equally good "information" about a topic, that it's pointless to think we need "experts," that complex expressions of ideas are just obfuscation, that every question has either an uncontroversial answer or, at worst, an uncontroversial set of plausible answers.

This just isn't true.  Especially when it comes to abstract ideas and ideals.  Like thinking about right and wrong.  I work some in this area -- on moral philosophy -- and I can tell you:  it's hard.  How should we trade off the ending of one life against the preservation of others?  How do you know when inequalities are unfair?  How do you reason with people whose judgments are very different from your own?  Are moral judgments objective or are they just fancy kinds of emotions and tastes?  It's a difficult subject.

So it's infuriating to have it presented as if moral philosophy is actually easy.  Like, "Gee whiz, if everyone would just calm down and be nice -- and stop listening to those obfuscating philosophers! -- we'd be all set."

In the New York Times today Robert Frank talks about income inequality.  I'm roughly in agreement with his broad conclusion -- that income inequality is bad.  But the way he goes about explaining it is frustrating.

Focusing on fairness, as moral philosophers have done, he says, isn't getting us anywhere, because there's too much disagreement on how fairness should be understood and what it comes to in this context.

That's right:  moral philosophers don't agree about fairness and inequality.  One reason is that the issues are complex:  there are several ways of seeing things, all of which seem somewhat reasonable, and even the question of how to decide among competing views is a vexed one.

Frank says that instead of trying to sort these issues out, we can look at a cost-benefit analysis.  Like, we know high income inequality has costs, and we don't see any offsetting benefits, so clearly it's bad.

But there are reasons we don't just apply cost-benefit analysis to figure out the answers to complex problems.  The reasons are familiar from the known difficulties with "utilitarian" reasoning in moral thinking.

Utilitarianism says that you should do the thing that brings about the best consequences for all, where everyone counts for the same amount.  It sounds promising, but it leads to some surprising results.  Suppose five people are in need of five different organs to live -- one guy needs a liver, another a heart, and so on.  Should we kill one person and distribute his organs?  Save five lives, end one:  cost-benefit-wise, that sounds like the right thing to do.

But obviously no one thinks this is the right thing to do.  And the reason it's not the right thing to do has nothing to do with how high or low the "costs" are.  Imagine the guy you kill is really unhappy.  Imagine he has no friends.  The "cost" of killing him is now low.  Does that make it better?  No.  Plausibly, it makes it worse.

You can argue -- as moral philosophers do! -- about what the right explanation is.  One plausible answer goes something like this:  what's wrong with killing the guy has to do with something outside of costs and benefits, and has instead to do with his rights, his freedoms, his autonomy to live his life as he wants, even if it's an unhappy one.

At one point Frank says that the increased wealth of the rich hasn't made them very happy.  But as we've just seen, the happiness of the person isn't the only thing you have to think about.  People have the right to the pursuit of unhappiness as well as the pursuit of happiness.

The point is that even when the costs are low and the benefits high, you don't have a simple answer about what to do.  There are other things to consider.  Because, well, moral philosophy is complicated, not simple.

The same problem arises in the new fad for explaining morals with science.  The new neuroscientists, like Sam Harris, want to tell us that science can tell us about morality, because science can tell us what makes us flourish and feel happy and what doesn't.

As the philosopher Kwame Anthony Appiah points out in this excellent review, knowing what will increase well-being tells you little about what to do.  How should you weigh one person's well-being against another?  Is it average well-being or total well-being that matters?  What about the problems with cost-benefit analysis, already mentioned?

Furthermore, is it only conscious well-being that matters?  Does that mean that if your spouse is cheating on you it would be better not to know?  And if you know the truth will hurt someone or make them feel bad, should you lie?  Neuroscience can plausibly tell you how much less happy you'll be when you find out the truth about things, but I don't see how knowing the answer to that question is ever going to help you figure out what to do in life. Even if the truth sucks, even if it reduces your well-being and leaves you in tears, don't you sometimes want to know it anyway?

I guess when the philosophy departments all disappear because of funding cuts to the humanities, no one will have to worry about these problems any more.  We can just kill the guy, distribute the organs, and lie about it after.  Questions?  I hear the Wikipedia entry on "cost-benefit analysis" is excellent.

Monday, October 4, 2010

Yeah, But I Also Did A Minor In Facebook Privacy Settings

Another humanities program "deactivated" at a major university, again primarily because of a low major-to-faculty ratio.

Here's what I don't get.  Whatever you think about the value of the humanities, are people seriously suggesting that the choices of 18- to 22-year-olds, who haven't yet encountered any of the wisdom or perspective of a university education, or of, you know, life, should determine what's on the curriculum?


Sunday, October 3, 2010

Confusion And Distrust, In Colonialism And In The University

I just finished listening to E. M. Forster's A Passage to India the other day.  It's an amazing book, and one of the many things that make it amazing is the way it shows what is ordinarily so hard to describe:  the way in which mutual distrust poisons community life.

The story is set during the British colonial rule of India.  The book is masterful in its depiction of the racist and condescending attitudes the British take toward their subjects.  But what makes it so sophisticated, it seems to me, is the way it shows how basically well-meaning and reasonable people get drawn into terrible situations, situations whose terribleness is created and exacerbated by the inherently screwed up -- and immoral -- way the British regard the citizens they hope to govern.

The novel has a big dramatic event at the middle of it, but there are many small events that show this with subtlety.  There are so few shared expectations.  One guy tries to have a party to bring together some British guests and some Indian friends, and it totally fails as a party:  in the absence of shared expectations about who is supposed to go and talk to whom, and who is supposed to make what kind of conversation, and how seriously offers and future plans are to be taken, the whole thing becomes a mass of confusion and hurt feelings.  Because there is mutual distrust, confusion and hurt feelings turn immediately into anger and disrespect.

As I was listening, I was reminded that this aspect of power-imbalance and difference is not restricted to imperialist contexts.  Mutual distrust poisoning relationships, in an atmosphere of power imbalance:  it's one of the things that makes racism and sexism so very destructive.

Laurence Thomas, a philosopher, wrote an essay called "What Good Am I?" -- meaning, What good am I as a black philosophy professor, in particular? -- about why it matters to have people of different races, and of both sexes, as professors.  The answer goes beyond role models, he says, and is more about mutual understanding and trust.  Learning, he argues persuasively, can only happen in an atmosphere of trust, and racism and sexism are a bar to that trust.

Think about it.  In a classroom setting, learning involves being evaluated and criticized, even corrected, by someone else.  In an atmosphere of distrust, it doesn't make sense to make oneself vulnerable in that way.  Either you feel antagonized, or you feel like a dupe for interpreting the evaluation as well-intentioned.

And then, it's in the nature of things that people with different backgrounds will find one another sometimes hard to interpret, making that trust especially hard to establish.  I know I've experienced this difficulty of communicating in academic life a ton:  in my male-dominated field of philosophy, the kinds of things people think are obvious to assume, and the kind of things they say to establish a friendly but professional relationship, just often don't feel to me like the kind of things it's obvious to assume, or the kind of things one would say to establish a friendly but professional relationship.

This wouldn't really matter if there were lots of women and lots of men, but when there's lots of men and few women, it's difficult:  a woman ends up always feeling a little destabilized, a little uncertain, a little like a foreign visitor to another country, trying to figure out the codes.  Who's supposed to talk first?  Is small talk about family nice, or a waste of someone's time? -- or worse, an invasion of privacy?  Is complimenting someone's clothes considered friendly or peculiar?  What about dark humor?  I know everyone has to figure these things out, but for whatever reason when I'm around a lot of women, even in a professional setting, the answers seem to me pretty obvious -- small talk about family is nice -- but when I'm around a lot of men in a professional setting, they don't.  And then there's the complicating factor that what seems nice coming from a fellow guy might seem peculiar or intrusive coming from a woman.

As Forster so aptly shows, misunderstandings which might come to nothing in an atmosphere of trust become toxic in an atmosphere of distrust.  In his paper, Laurence concludes his reflections by saying something like this:  the importance of minority professors is that their existence represents the hope that the university is a place where trust and gratitude are possible among people of all races.  I've always thought this an apt observation, and I think about it often when the question comes up of why, and how, it matters to take active steps for diversity in all the academic disciplines. 

If philosophy is exceptional for being lots-of-guys, it's truly outrageous for being lots-of-white-people. I don't know how to solve this problem, but these thoughts are one of the many reasons, at least, for why it's a problem.