[Image: Gregor Reisch, Margarita Philosophica, 1504, via Wikimedia Commons]
By "knowledge pretense" I mean the idea that it's important to have "an answer" even if you don't know whether it's the right answer and maybe even if you know it's the wrong answer.
It's like no matter what problem you're facing, in our era of metrics and optics, you hear constantly about the importance of gathering more data. Gather data. Plot it. Run it through some software. Some numbers will come out.
But with a lot of modern problems, the issue isn't that we don't have enough data; it's that what we're trying to measure and what we have data on are two completely different things. But no one wants to admit we just can't know. So we gather more data.
For example, everyone wants to improve K-12 education. And we keep coming up against the problem that what we want to improve is really really hard to measure. "How much a student learned" just isn't the kind of thing you can go around easily quantifying.
But instead of acknowledging that, and admitting there's a lot we don't know, there's this relentless rhetoric about the importance of data: gather more data, get more data so we can understand, make rankings, evaluate. Then when that doesn't work, everyone freaks out. But of course it doesn't work. The answers measure something other than what we wanted to track, and so they can't help but be wrong.
I was first alerted to this problem in my research in ethics. The approach I favor involves acknowledging that there are multiple values -- such as justice and benevolence and respect for others' autonomy -- and then thinking about how we should weigh those values against one another when they conflict, as they so often do.
It's a common knock on this kind of approach that to do that last bit -- think about it, weigh values against one another -- you have to make a judgment call. The theory itself doesn't give you an answer. Often the implication, sometimes stated explicitly, is that a more unified ethical approach, like simple cost-benefit analysis, would let you avoid this problem by giving you an answer in every case. One principle, a complete set of answers. Voilà! No judgment required!
But this line of thought has always really bothered me. It's no advantage that your theory gives you an answer if you have no reason to think it's the right answer. If there really is a plurality of values, unified approaches like cost-benefit analysis give you the wrong answer. How is it any improvement to get an answer if you know it's wrong?
Isn't a judgment call better than an answer you know isn't right?
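To make the worry concrete, here's a toy formalization (the notation is mine, not drawn from any particular theory): cost-benefit analysis in effect picks whichever option maximizes a single weighted score,

\[ a^{*} = \arg\max_{a} \sum_{i} w_{i}\, v_{i}(a), \]

where the v_i are the separate values (justice, benevolence, autonomy) and the weights w_i are fixed in advance. The formula is guaranteed to spit out an answer for any set of options. But if the values are genuinely plural, there are no correct fixed weights w_i to plug in -- so the guaranteed answer rests on exactly the thing we don't know.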
Here, I believe, we get to the deep cultural nub of the matter, which is that for some reason in our modern era nobody wants to make a judgment call.
Some people who want to improve education find it alien that the answer might partly involve attracting and retaining people with really good judgment, and letting them exercise that judgment in making decisions. The suggestion that we should use our collective judgment to sort out tricky issues about distributive justice or the environment is scorned as touchy-feely, old-fashioned -- not the kind of objective, data-generated answer we've come to know and love.
It's like everyone wants everything to run by algorithm or something. WTF? Why is this?
I'm sure there are many reasons, but I suspect lurking in there are the following. There's the anti-elitism of "who gets to decide?" There's the fear that someone is looking out for their own interests in an unfair way. And mostly, I think, there's the sense that somehow a judgment call is arbitrary. What's a judgment call but just what some person happened to think about something?
I get that these are real concerns. But honestly, they don't seem weighty enough to me to justify the alternative, given that the alternative is knowingly preferring the wrong answer just because it looks like "science" -- which seems to me an exercise in utter perversity.