Monday, June 29, 2020

Anti-Racist Values And Decision-Making On Campus

Like a lot of other universities around North America, my university has been talking over the last few weeks about anti-racism and what universities need to do to do better. Among other things, those events included a workshop I attended last week. I've been thinking about an important point that the speaker made, which is that if you say you have anti-racist values (which universities do), then you have to put those values into practice; otherwise it's just talk. Success at putting those values into practice is manifested in practical outcomes, and can thus be seen and measured.

This point got me to thinking about the different ways that university systems work to create the outcomes that we do, in fact, experience and see. One thing that happens a lot in universities, to one degree or another, is that decisions are driven by undergraduate enrolment statistics. Departments and faculties get resources if they attract more students and majors. Departments and faculties die if they fail to attract students and majors. Individual classes run, or don't run, based on whether they attract students. As you can imagine, this can influence big decisions, like who gets hired to do what, and vast numbers of smaller decisions, like what gets on a syllabus.

This way of proceeding has always seemed to me a bit bizarre. Are we really going to let the decisions of a bunch of 18-to-22-year-olds -- and only the narrow slice of them who happen to go to university, at that -- determine the direction of scholarly research and the ideas that a community invests in? This is nothing against young people -- it's just weird to have this tiny cross-section of society wield this enormous power over something that is quite important and complicated.

And even from an abstract point of view, you can see how this way of proceeding might tend away from, rather than toward, teaching and research focused on anti-racism and anti-oppression. Young white people may not want to confront their place in an unjust system. Almost all young people are pressured to study practical subjects. In universities without breadth requirements, students in STEM majors may feel they don't have time in their course schedule for other things. These pressures don't come just from anxious parents; they also come from the way our world is -- hyper-competitive, capitalistic, and so on.

If I understand correctly, one way of framing decision-making based on enrolment goes something like this: undergraduate tuition pays the bills, so that is the income; a sensible organization of the system lines up the income and the expenses so that the one pays for the other in some linear kind of way. I've even heard of universities where they say "you eat what you kill": the idea being that self-sufficiency and market-based norms coordinating input and output should undergird university decision-making.

There is much to say about this, but what I want to focus on here is the veneer of objectivity and neutrality sometimes placed on this framing. Apportioning resources in a way that seems to line up supply with demand can look like a way of avoiding problematic value-laden judgments. It may seem like you're taking a step back -- *we* aren't the ones making these decisions. It's just how things shake out when you look at the numbers.

But all ways of making decisions are value-laden and non-neutral. If you do a cost-benefit analysis, you're making judgments about how to weigh everyone's choices and what other values -- like justice -- you're ignoring. If you base everything on consent and individual liberty, you're making judgments that privilege the status quo, and that rule out rectification of historical injustice. The metaphor of the market rests on assumptions that what your customers want and need is what should be created, and that their sense of worth should inform yours.

As racialized people have been saying for a long time, the social structures in place that feel neutral or objective to those in the dominant social group are anything but, and often work to reinforce the injustices of the past.

Of course universities should factor into their decision-making what students are looking for. When they do so, they pay respect to certain values, including respect for student needs and student autonomy. The point here is just that other values matter too -- values that are distinct from these, and may conflict with them. If you say you care about these other values, you have to find a way to make room for them in practical decision-making at various levels, which can mean bringing judgment calls back into the picture.

Monday, June 15, 2020

Science, Judgment, And Authority In The Time Of Pandemic

The Coronavirus moment is reminding us all of the problems with the way we normally do things. Some of these problems have to do with the place of science in our practices, the way we talk to one another, and why we do what we do. Here are just a few items that I have found extra personally irritating.

Masks of confusion
I know a lot of people are irritated by the way that we were told not to wear masks, because they were pointless and we'd all fuck it up and wear them wrong and cause mayhem, only to learn later that wearing masks actually works. And sure, I can spare a thought for that annoyance.

But for me, this has been massively eclipsed by my feelings about the bizarre style of communication about masks right now. Almost everything I read says something like "Here's what to do about masks" or "Here's where masks are mandatory" or "Here is the updated health policy on masks" -- without explaining the reason people are being asked to wear masks.

Every public communication about masks should include a basic explanation that the use of basic non-fancy masks works because it prevents asymptomatic infected people from spreading the disease around. People do not know whether they are asymptomatic. So if they're going to be near people, they should wear a mask. Sure, it might help you avoid infection yourself, but that is not the main point.

There still seems to be massive basic confusion about this. I keep seeing people in comment sections saying how it's their choice how much they want to protect themselves, or that they're personally not worried about getting sick, or that only infected people should have to wear masks. Are health communicators being deliberately obscure about the collective responsibility angle, because they think people will assume it's self-interest and thus follow the rules? Are they leaving out the explanation because they think people will just follow the rule? Bad news for you, guys.

"Listen to the science"
This one is trickier, because of course, yes, I think we should base our decisions on the best scientific information that we have. But science alone tells you almost nothing about what to do in a pandemic, because everything you do is going to have complex ripple effects and you have to trade those off against one another. BCE (before the Coronavirus era), I used to constantly bore people talking about how many people die every year from car crashes -- in 2016 alone, around 1.35 million worldwide and over 37,000 in the US. But no one ever seriously suggests giving up driving.

Please note that I am not saying that the virus is comparable to driving! Clearly, it is much more dangerous. The point is just that structurally, we're always making collective and personal judgments about how much risk is OK for the things we want to do. One thing that's challenging in the Coronavirus case is that different people have different risk tolerance, and yet in the nature of a pandemic, we have to act together. That is a very difficult situation, but it's also one that isn't helped by saying "listen to the science."

Amateur epidemiologists around every corner

These fall into two categories: the data watchers and the microbiology obsessives. The data watchers are checking out the Johns Hopkins site to follow the numbers and see whether their preferred policy response is working and whether countries with leaders they hate are suffering. I'm guilty of this myself, relying on this cool visualization site to compare stats, form hypotheses, and rationalize my existing prejudices. As this Guardian article reminds us, though, there are lies, damned lies, and statistics: massive variation in how cases are counted, and when they are counted, and what counts as "dying from Coronavirus" means it will be years before we have any clear picture of what is happening.

Then there are the people who keep up to date on things like what size of particle travels by aerosol transmission. Whatever floats your boat, I guess -- but, as with most science, a few papers downloaded from a preprint server are probably not enough for a non-expert to form an informed opinion.

While these are my personal irritations, I will say that one thing they have in common is the lesson that science, while crucial, is never the whole story: the world still needs judgment, communication, shared deliberation, and all those murky things you find over in the Arts and Humanities departments. So please, please don't destroy us and leave everything to the STEM people.

Monday, June 8, 2020

Policing Practices, Law and Economics, And The Values Of Justice And Efficiency


In the full story of how things in American policing became so completely fucked up, I would like to read an analysis that explores connections among 1) theoretical issues in the framework known as "law and economics," 2) local legal structures that appear to use policing to generate revenue, 3) policing practices, and 4) racism.

For those not up on these things, law and economics is a legal framework that understands laws through the lens of efficiency: good laws bring about good consequences. For example, laws related to civil wrongs could be crafted with an eye toward what would work most productively moving forward, rather than thinking about background rights and values like fairness. This framework emerged around the mid-twentieth century out of work by neo-classical economists (many at the University of Chicago) and legal theorists like the influential Richard Posner, and has a wide range of contemporary applications.

"Positive law and economics" is about explaining and predicting laws, with the hypothesis that, other things being equal, laws that produce efficiency will be adopted. "Normative law and economics" says that such laws not only would be adopted but should be adopted -- so that existing laws can be improved by being made more efficient.

What "efficiency" means here can be complex; it can be the maximizing efficiency of utilitarianism, in which the thing to do is the thing that brings about the best consequences overall, but more typically it is "Pareto efficiency" that is used -- a set up is Pareto efficient when there is no way to one person better off without making another person worse off.  (I wrote about various forms of efficiency here and here.)    

You might be thinking that it's odd to have a legal framework based on efficient future consequences rather than justice and fairness. I do too, though we won't have time to get into that here. If you're interested I recommend this excellent book review.

One can apply the theoretical approach of law and economics in a wide range of ways: even when it comes to something like "efficiency" and the "good" in "good consequences," for instance, you might be trying to promote preference-satisfaction or well-being or you might be trying to create, you know, actual money.

This last bit brings us to 2): legal structures that appear to use policing to generate revenue. This book review by the always brilliant Moe Tkacik explains the idea in vivid detail: the sanctions for crimes are set up so the accused have to pay; the state then raises money while leaders claim not to raise taxes. Judges become like tax-collectors whose subjects are in no position to complain.  

The theoretical connections can be a bit complex, but as I understand it, the reasoning goes something like this: if the fine for driving without a license is X dollars and you drive without a license, you must have in some sense preferred driving over losing X dollars; the state can set the fine in such a way that it reaps more from the fine than it lost from the crime being committed. In this way the crime is disincentivized, but the interaction is kind of a win-win, and is efficient all around.
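
Here is a tiny back-of-the-envelope sketch of the structure of that reasoning, again in Python and with invented dollar figures; the "revealed preference" step is exactly the contestable part.

```python
# Invented numbers, purely to make the structure of the argument visible.
harm_to_state = 50   # assumed social cost of the unlicensed driving
fine = 200           # the fine is set well above that harm

def outcome(benefit_to_driver):
    """On the law-and-economics story: people drive only if the benefit exceeds the fine."""
    if benefit_to_driver <= fine:
        return "deterred: no crime, no fine"
    # By "revealed preference", anyone who drives anyway values it above the fine,
    # and the state collects more than the harm it suffered -- the alleged win-win.
    return f"drives and pays; the state nets {fine - harm_to_state}"

print(outcome(benefit_to_driver=80))    # deterred: no crime, no fine
print(outcome(benefit_to_driver=500))   # drives and pays; the state nets 150
```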

And thus to 3): I remember after Michael Brown was killed in Ferguson, I kept seeing references to the ways that the over-policing of the citizens there could be traced partly to policing as a way to raise revenue. This post gives a great overview and explains: "In its 2015 report on policing in Ferguson following the killing of Michael Brown, the Civil Rights Division of the United States Justice Department concluded: “Ferguson’s law enforcement practices are shaped by the City’s focus on revenue rather than by public safety needs. This emphasis on revenue has compromised the institutional character of Ferguson’s police department, contributing to a pattern of unconstitutional policing, and has also shaped its municipal court, leading to procedures that raise due process concerns and inflict unnecessary harm on members of the Ferguson community.”

And 4) now you add both individual and structural racism into the mix. Because of structural racism, Black people are much more likely to be poor and powerless than white people. The poorer and less powerful people are then over-policed and abused into becoming ATMs for the government's revenue needs. Among other things, modern algorithms for crime prediction and sentencing actually factor in past arrests so that the original injustice is perpetuated further. And, of course, individual racist police then have a framework for their abusive actions.

I don't know how all of these interrelate -- theoretical law and economics is complicated and I don't know how its theoretical development has impacted practices of policing-as-revenue. But I hope to have shown here why I see them as conceptually interconnected and mutually supporting.

Anyway, if you want to read something else on racialized impacts of framing laws in terms of future consequences instead of past actions, I cannot recommend enough this searing personal essay by classicist and political scientist Danielle Allen about her cousin Michael, who enters the criminal justice system as a result of minor crimes at age 15, gets derailed in life, and ends up dead -- murdered at a young age.  

From a theoretical point of view, proponents of efficiency-based reasoning sometimes cast "justice" as a kind of artificial virtue, something to be explained away, something that reflects prejudices of an evolutionary past, where punishments were needed to keep people in line and bring about good consequences. The implication is that once we see this, we can go right to the consequences and skip the justice part altogether. I don't know all the ways that 1)-4) interrelate, but I'm sure the part about skipping justice altogether must be wrong.

Monday, June 1, 2020

In Which I Venture Into The Thickets Of Data Science And Hume's Problem Of Induction

One of the things I started doing in the middle of lockdown was courses at Data Camp. I started with Machine Learning for Everyone, then moved on to Python for Beginners. In case this isn't your universe, Python is a programming language that is often used for data science.

I want to emphasize that I did not do this because I suddenly had "extra time" on my hands or because I was casting around for something to do. There are different lockdown experiences out there, and the "extra time" experience has not been my experience. For one thing, everything to do with my work seems to take four times as long as it did before.

Rather, the way my emotional life works, I often have a background sadness that I keep at bay through doing things. In normal life, the bustle of activity and the feeling of accomplishment are central to that process. With lockdown, there is no "bustle of activity." So accomplishing things -- or feeling like I am accomplishing things -- has become a huge thing. So why not learn something about data science?

The classes are excellent, with lots of examples and exercises. On encountering these, I immediately started thinking about data science and Hume's problem of induction.

One of the first examples that my course used to illustrate machine learning concepts had to do with predicting how much money a movie would make based on input factors like star power, budget, advertising, and so on. And I was like, "Wait, what?" Is the idea supposed to be using the data of the past to predict earnings in the future? But isn't the popularity of works of art always shifting and changing? Isn't art frequently based on novel ideas? Also, I thought the popularity of films was regarded as wildly unpredictable.
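
For the curious, here is roughly what such a model looks like in code -- a minimal sketch using scikit-learn and made-up numbers, not the actual course exercise.

```python
# A minimal sketch (not the course exercise itself) of the kind of model meant:
# predict box-office revenue from features like budget, star power, and ad spend.
# All numbers are invented; the point is only the shape of the workflow.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is one past film: [budget, star_power_score, ad_spend], in made-up units.
X = np.array([
    [ 20, 3,  5],
    [100, 8, 40],
    [ 60, 5, 20],
    [150, 9, 60],
    [ 10, 2,  2],
])
# Box-office revenue for those films (again, invented).
y = np.array([50, 400, 180, 600, 20])

model = LinearRegression().fit(X, y)

# The model happily extrapolates to a new film -- which is exactly where the
# "will the future resemble the past?" worry gets its grip.
new_film = np.array([[80, 7, 30]])
print(model.predict(new_film))
```

The model will cheerfully produce a prediction for any new film you hand it; whether that number means anything is, of course, the question.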

If you've studied philosophy, you won't be surprised to hear that my next thought was, "What about Hume's problem of induction?"

If you haven't: briefly, Hume's problem of induction is that inductive reasoning -- in which we go from past cases to generalities and the future -- always rests implicitly on an assumption that the future is going to be like the past. And yet we have no logical reason to believe that the future is going to be like the past. So inductive reasoning, which is at the core of basically all empirical science, has no justification. You might try saying "Hey, but the future has always been like the past." But to use that to solve the problem would mean applying the past to the future, and so would be induction, and so would be circular.  

You can see right off the bat that these are deep waters we are getting into, and I have to warn you that this is going to be the Phil 101 level version of things because I'm not a specialist in this area, I'm just a person thinking about data science. But I do remember from teaching Phil 101 that the point with Hume isn't just about a lack of certainty. It's no help to say that while we're not sure the future will be exactly like the past, we have reason to believe it will probably be like the past. Because whatever version of "probably" you come to, that judgment relies on thinking that in the future, things will occur with the likelihood that they did in the past. In other words, we're back with the circularity problem.
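
Here's a toy simulation (mine, and deliberately crude) of why the probabilistic move doesn't obviously help: the estimate of "how likely" is itself built entirely from the past, so if the underlying process changes, nothing in the data warns you.

```python
# A toy illustration (mine, not Hume's) of why "it will probably be like the past"
# doesn't escape the problem: the probability estimate is itself built from the past.
import random

random.seed(0)

# "Past" observations: a process that succeeds 90% of the time.
past = [1 if random.random() < 0.9 else 0 for _ in range(1000)]
estimated_rate = sum(past) / len(past)

# "Future" observations: the process quietly changes to a 30% success rate.
future = [1 if random.random() < 0.3 else 0 for _ in range(1000)]
actual_rate = sum(future) / len(future)

print(f"estimated from the past: {estimated_rate:.2f}")  # roughly 0.90
print(f"observed in the future:  {actual_rate:.2f}")     # roughly 0.30
# Nothing in the past data could have warned us; the inference was fine *given*
# the assumption that the future resembles the past -- the very assumption at issue.
```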

Anyway, I'd been wondering vaguely for a long time about social science and the problem of induction, and then I started thinking about data science and the problem of induction. In the context of social reality, Hume's problem starts to take on a practical urgency. Because when it comes to people, when is the future ever like the past? Our current moment seems designed to hammer this point home. Ha ha, you thought the future was going to be like the past? Guess again, suckers.

So like anyone else, I then googled "data science," "Hume," and "problem of induction." (This is where I have to admit that my usual searching via Duck Duck Go got me nowhere and so I was forced to recall to mind the superiority of Google as a search engine.)

I found this discussion, which gives a good overview, but which ends by saying that "instead of strictly rejecting or accepting, we can use inductive reasoning in a probable manner." But I didn't understand this, as I thought the problem applied to probabilistic reasoning as described above.

I also found this piece, which covers a lot of interesting territory but which concludes that AI works because "the problem of induction can be managed," which again, I didn't understand. 

So then I was like, Do I not know what is going on? So I went to the Stanford Encyclopedia of Philosophy entry on the Problem of Induction. Yes, there are attempts to get around the problem of induction via "Arguing for a Probable Conclusion." Not surprisingly, the matter turns out to be very complicated, though I note that each subsection seems to end with the author of the article basically saying "this is why that doesn't really work."

Noticing that the entry points the reader also to "Philosophy of Statistics," I went there, and was fascinated to see in the first section:  "Arguably, much of the philosophy of statistics is about coping with this challenge [of the problem of induction], by providing a foundation of the procedures that statistics offers, or else by reinterpreting what statistics delivers so as to evade the challenge... It is debatable that philosophers of statistics are ultimately concerned with the delicate, even ethereal issue of the justification of induction. In fact, many philosophers and scientists accept the fallibility of statistics, and find it more important that statistical methods are understood and applied correctly." 

So at this point, I guess figuring out what I think about data science and the problem of induction will require some intense intellectual effort. I think it will be worth it though. The most interesting item I found in my searching argues that the real challenge that the problem of induction poses for data science is that people "change and grow morally and socially in non-transitive, non-linear ways."   

I agree, and I would add that social institutions and practices also change in complicated ways. We now get into the debate over whether there are simple and uniform laws that lie beneath what looks like social chaos, or whether people and their doings create novelty in ways that are inherently impossible to pin down. You may not be surprised to hear I tend toward the latter view, not because I think free will lies outside the laws of the universe, but more because the creativity and complexity of humans isn't susceptible to that kind of generalizing thinking.

The topic is complex. But on my side, may I present the wild success of Parasite -- surely a film whose budget and star power would never have led anyone to predict its success?