Monday, August 5, 2024

Puzzles In Choice Judgmentalism: The "Consistency" Model Versus The "Budget" Model

There was a commentroversy in the NYT recently over whether it matters that decaf coffee may have small amounts of carcinogens. The article explained that some chemicals used to make decaf coffee may be dangerous in large quantities, but that the small amounts left in decaf coffee are believed to be safe. But some commenters were angry. They said the article was ridiculous: we're constantly ingesting toxic chemicals, so what kind of dope is worried about this trivial level of exposure?  

For me, it was a new manifestation of a cultural trope that since 2020 I have come to think of in terms of the "consistency" versus "budget" problem. In some mid-range phases of COVID, advice included strategies for doing some things you want to do while still decreasing the risk of getting sick: wear your mask on transit; socialize but avoid larger gatherings; go to the gym but try to stay further apart, etc. etc.

Some of this advice pissed people off. Obviously there were several dimensions, but the one I'm interested in here is the idea that somehow combining high-risk things and low-risk things is irrational and inconsistent. "Why wear your mask on transit if you're going to socialize with a bunch of people anyway?" people asked. There was special rage about the idea someone might wear their mask to go out to a restaurant, and then take it off to eat. What, did they think COVID would magically stay away while they were eating? How stupid.

At some point, I read something illuminating about the concept of a risk-budget and how the framing of "budget" versus "consistency" made sense of these strategies. With something like risk, consistency is irrelevant. Instead, what you want to do is think about apportioning your total exposure. So you might go out for drinks, but stay away from the gym, or you might go to the gym, but avoid going out for drinks -- both good ways of constraining your total risk while doing something you want to do.

The budget model illuminates wearing a mask to go to a restaurant and taking it off to eat. Wearing your mask on the subway? Easy, and it costs you little of what you wanted to do. Wearing your mask while eating? Impossible. So -- you can reduce your risk by wearing your mask while you're not actually eating, and still get to go out for dinner.
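Since the budget model is really just arithmetic over a capped total, it can be sketched in a few lines of Python. To be clear, every risk number below is invented purely for illustration; the point is the structure, not the values.

```python
# A toy sketch of the "budget" model of risk. The numbers are made up
# and have no epidemiological meaning; the structural point is that you
# cap *total* exposure, so mixing high-risk and low-risk choices is
# perfectly coherent -- no "consistency" required.

ACTIVITY_RISK = {
    "masked transit ride": 1,
    "unmasked transit ride": 3,
    "gym session": 4,
    "drinks with friends": 5,
    "restaurant, masked except while eating": 6,
    "restaurant, never masked": 8,
}

def within_budget(activities, budget):
    """Return True if the combined (made-up) risk units fit the budget."""
    return sum(ACTIVITY_RISK[a] for a in activities) <= budget

# Under a weekly budget of 10 units, an "inconsistent" plan is fine:
week_a = ["masked transit ride", "restaurant, masked except while eating"]
week_b = ["gym session", "drinks with friends"]

print(within_budget(week_a, 10))            # True: masking on transit AND dining out fits
print(within_budget(week_b, 10))            # True: gym plus drinks also fits
print(within_budget(week_a + week_b, 10))   # False: doing everything blows the budget
```

The consistency model asks whether each individual choice "matches" the others; the budget model only asks whether the sum stays under the cap, which is why masking on the subway while also eating out is a sensible combination rather than a contradiction.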

For some reason I don't understand, even though the budget model obviously makes sense, the consistency model is hard to let go of. I fall into it myself, despite consciously trying to avoid it. Even these days, if I'm weighing wearing my mask on transit, I find myself thinking: but that would be pointless and stupid, given the number of other high-exposure activities I am doing. The budget model shows that's not right, but it always comes into my mind.

Evidently I am not the only one, and there's a surprising kind of negative judgment of personal choices people perceive to be inconsistent -- even when those inconsistencies affect only the person in question. It just makes people mad that someone is worried about chemicals in their decaf if they're also eating foods like Doritos.

Why this angry judgment? I don't know. One guess I have is that it's misplaced moralizing: we know moral consistency is important, and we don't like hypocrisy; when we implicitly moralize the choices in question, we trip the "consistency" model wire and our brain goes into hyperdrive along that route. Another guess I have is that we don't like the way others are budgeting, and we wish they had other priorities, but it's easier to accuse someone of "inconsistency." But the matter is mostly opaque to me.

Since this is a full-service blog, I will end by telling you that if you want to avoid carcinogens in decaf coffee, you can buy coffee that is decaffeinated using a "Swiss Water Process" (SWP) instead of other methods that use methylene chloride or ethyl acetate. As other commenters pointed out, the article was all about safety for consumers and did not touch on the question of safety for workers in the coffee production industry, who are presumably in contact with larger amounts of toxins -- a good reminder that sometimes we don't need a consistency model or a budget model to figure out a better thing to do.

Sunday, July 7, 2024

How Goal-Oriented Thinking Almost F***ed Me Up

In the most recent phase of my life, a constant source of unhappiness for me has been about accomplishment: what am I doing, and why am I not getting more of it done? I fuss about my work accomplishments like publishing academic papers, but I also fuss about my extracurricular ones. Why can't I learn Italian more seriously, or read more books? Why can't I be an accomplished amateur musician, or someone who knows how to cook amazing food?

At some level I know this fussing is silly and pointless, because ultimately who cares what boxes you're checking? When I was young, I had a chaotic life, but I also had a healthy sense of just living -- being alive and experiencing things. Plus, as longtime readers know, I've always thought life is essentially a mutual aid association, where connecting with other people is the thing, and connecting with people is not a box-checking activity. So how did I get into this loop of relentless self-evaluation?

One thing I've noticed is that spending more time on the internet seems to make the loop harder to get out of and the sense of pressure more intense. I've always assumed the mechanism for this was obvious: that other people on the internet are accomplishing things. If I go on Twitter, I see a stream of posts like "So humbled to be awarded the X prize!" and "Just ran a personal best for my 10-mile run!" and "check out my new paper on X topic!" Even perusing the news, I see people reflecting on how they became internationally known ballroom dance stars or amateur astronomers or whatever in their spare time.

But after I wrote about the age before the internet last week, I came to think the connection is more subtle. Because that post was about recapturing a sense of mild boredom, I tried a tiny experiment: forcing myself to engage in activities like "looking out the window" during a work break, "staring at the ceiling" while at home, and "checking out the scene" while on a transit ride -- all instead of "looking at the internet."

It's been good. And I came to think the mechanism connecting the internet and the accomplishment self-evaluation was not quite the "obvious" one, and had more to do with my relationship to my life activities. At some point, I got jolted into a realization: things that are actually just life, I have been treating as "accessories to goal accomplishment."

That is, at the risk of stating the obvious, I came to think that activities like going from place to place, putting away clean laundry, eating, etc. are not just "things I have to do to meet my goals" but are actually life. It doesn't even matter if you "enjoy" them or they're a pain; either way, you're experiencing them and that's what's up.

Conceptually, the problem was I thought "X is what I am trying to do" and "A, B, C are what I have to do to get to X." No wonder my obsessions with X got me into self-destructive loops where A, B, and C had to be as efficient as possible and where not getting to X led to a chorus of self-criticism. I was shaping everything into goal-oriented thinking and downgrading all the things that are just, I don't know, being a person.

Our entire culture has adopted a goal-oriented structure for life so it's not a surprising mistake to fall into. If an activity isn't perfectly pleasant, there's an assumption you're doing it to get something else or because you have some "project" you are working on. You can't get a gym membership without being asked to state your goals. My running app, which I use just to remind myself what I did when, is constantly nudging me to frame my activities as accomplishments. Even e-reading apps are set by default to help you keep moving toward your "reading goals." WTF?

The goal-oriented approach is so pervasive, it's part of the standard theory of how people make decisions. In rational choice theory, there are things you want, and there are things you're willing to do or give up to get the things you want, and that's just how it is. From a theoretical perspective, it's always bothered me that the "things you want" category has to be so unambiguous: are we saying that, say, cooking has to be either a "want" -- a pleasure, and a goal -- or a "do not want" -- a payment, a cost you're willing to bear? Couldn't it be a bit in-between? Or neither? And now I'm wondering, does everything even need an evaluation that fits it into a set of ordered preferences? Can I just stop evaluating?

Don't worry, my plan isn't to give up on goals. As we've discussed, when I lived a much less structured life, I got into habits like eating cake for lunch every day, so this post is not leading up to some radical anti-planning manifesto. I'm just hoping to dial it back, to where living my life is living my life, and not something I do in order to achieve something else.

Friday, June 28, 2024

The Textural Experience Of Life Before The Internet. What Was It Like?

Life before the internet: what was it like? I'm often surprised by how difficult it is for me to recall the textural experience.

I don't mean the small things we used to do, like reading paper maps and calling people. Those things I remember pretty well. I have a vivid memory of driving with my then-boyfriend from Buffalo, where his family was, to New Orleans, where I was in graduate school -- a nineteen-hour drive. We went to AAA and got a TripTik -- a series of paper maps showing the entire route. We didn't have credit cards, so his mom made us a hotel reservation for a place about halfway through and we paid in cash when we got there.

What I have trouble remembering is just what it was like being at home, not having anything to do, and not having the internet for on-demand connection and entertainment. Specific memories tell me this was a big part of life, but it's hard for me to recollect what it felt like. Obviously it sometimes felt boring or dull, but of course it didn't feel strange or surprisingly dull, because it was totally normal.

I remember one evening being home alone before dinner in the 90s and feeling totally like "uhh, now what?" I had probably just finished rereading all the Jane Austen books or something, and didn't have another novel I was into, and I had probably exhausted the interesting news in the newspapers -- which I typically bought two of every day (one NYT and one local wherever I was). Then I realized I hadn't read that week's New Yorker, which was sitting on a table. And I was like "Oh my god, The New Yorker! Thank god for The New Yorker."

Obviously it's impossible to recreate or recapture the experience, because if we try to live disconnected from the internet now, that is life disconnected from the internet, which bears no relation to life in a world where the internet just never existed.

It's easy to fall into subtraction: to try to picture what it's like by taking what it's like now, and taking things away. No social media, no email, no watching and downloading content because you feel like it. But subtraction just leaves things out; it doesn't tell you what it was like. What was it like to live without the itchy feeling that you could be -- or should be -- checking what is going on on the internet? That things are happening there, even if you're not engaging with it?

I'd especially like to know what it was like to feel the mild boredom of having "nothing to do" for a while. What was it like to feel that kind of mild boredom, but to be so accustomed to it that it felt like regular life rather than an aberration?  

I'm especially interested in that because even though I complained quite a bit about boredom in my younger days, I wish I could recapture this feeling, because I feel like my whole motivational set-up was different. My internal bar for an activity being "engaging" or "interesting" seemed so much lower. Reading books that were pretty good but maybe not very stimulating, listening to people talk about slightly dull subjects, writing a letter to a friend -- I did those things all the time, easily, and it was good to do those things.

For me, there is no way to regain that textural experience, because even if I wanted to cut myself off from the internet -- which I don't -- staying away from the internet, and thinking about that choice, and thinking about what is going on there would still use up half my mental energy.

Maybe some of you remember the textural experience of life before the internet better than I do. Maybe some of you are too young to have experienced life before the internet. If you're too young to have experienced it, all I can say is that if you picture being at home now but with your router destroyed and your phone disconnected -- that is really not what it was like. I can't really remember what it was like, just that it wasn't like that.

Friday, June 21, 2024

Complicity, Moral Ambiguity, and The Hunger Games Prequel (Spoilers!)

When the Hunger Games prequel opened with a narrator from the Capitol, I was like -- "genius!" Finally some literature engaging moral ambiguity and complicity.

If you read the original trilogy, you may remember the basic setup: a bleak future country where the US used to be; a wealthy Capitol and twelve poor districts; to punish the districts for previous acts of war, the Capitol carries out the annual Hunger Games. Two adolescents from each district are selected at random to participate in a televised battle to the death. The point is to punish the districts, remind them who is in power, and entertain the Capitol.

And if you read the original trilogy, you may remember that it was narrated by Katniss Everdeen, a teenager from one of the districts selected to be in the Games. She is crafty and intelligent and has a developed moral sensibility whose contours emerge through the books.

This is in no way a criticism of the original trilogy -- which I liked a lot -- but I feel like telling the story from Katniss's point of view is telling the story on easy mode. It is just more straightforward to tell a dystopian story of violence and oppression from the point of view of the oppressed than from the point of view of the oppressor. You get to reel your reader in on the side of the sufferer and the injustices done to them. The reader's sympathies all line up: we like the narrator, we like her cause, we want her to win.

But what if your narrator is on the other side?  

One of the good things about the original trilogy is that in addition to just being good as books, the novels force the reader to engage with moral ambiguity. Katniss is a heroine, but she must perpetrate violence herself as well. Katniss is trapped in situations where all her options are bad.

For me, that ambiguity is such a relief from the modern deluge of entertainment with Good People and Bad People. If you were an alien engaging with US cultural products these days, you'd think humans lived in a world where the Team of Sweet Kindness battles the Forces of Darkness and Pain, and where victory for Team Kindness would lead to a peaceful, verdant utopia.

To me, that bears no resemblance to our world. In our world, almost everything you do in life enmeshes you in dysfunctional global systems with someone on the losing end. If you buy a phone, you're supporting violence where conflict minerals are mined, often by desperate children. If you eat meat, forget it, but even if you eat almonds, or avocados, you're screwing up the ecosystem; and in the US and Canada, even local produce is picked by migrant agricultural workers often forced into situations with no rights and very low pay.

All that is to say that we people in wealthy countries are complicit in a range of menacing and even murderous systems.

So -- when I saw the narrator of the prequel was in the Capitol, I was excited, because I thought the book would engage a reflection on wealth and complicity. It does start off in that direction: the narrator Coriolanus is a sympathetic character, an eighteen-year-old from a wealthy family reduced to materially poor circumstances. His parents are both dead, and he needs to get into a good university to earn enough to survive and to provide a bit of security to his aging grandmother and hard-working cousin. It's been so long since I read the original trilogy, I didn't even notice at first, but Coriolanus is Coriolanus Snow, the president from the original books, and this is his backstory. So it's the prehistory of a guy you know will be central to carrying out the future Games.

I found Coriolanus a sympathetic character at the start -- by which I mean partly that when he was put in bad situations with no good options, I thought those choices were ones I could imagine making or at least understand. I thought centering on a complicit and sympathetic figure was interesting, and something you don't see in literature all that often. We know Coriolanus will end up socially evil, but he starts off personally pretty typical.  

As the plot develops, though, Coriolanus starts acting less like a relatable person in difficult circumstances and more like a familiar old Bad Guy. In the final scenes of the book, he not only throws over his love interest, he tries to hunt her down to kill her, reversing his whole mind and plan in the span of a few hours. I don't know, because I'm not an author, but it seems like it would have been easy to make his story more complex and subtle. Was it more engaging to make him a bad guy? Was it more likely to make for a popular story? I don't know.

In some ways the more interesting subplot in the prequel is that of Sejanus Plinth, who opposes the existence of the games and seems almost like a Good Guy, but whose choices keep leading to terrible outcomes and whose efforts to do good constantly backfire. Now there's a relatable hero for our modern times.

I know there's a fifth book coming. Here's to hoping it centers the moral ambiguity of the trilogy and forces the reader into a bit of discomfort with respect to social evil and the many ways people can be complicit in it.

Friday, June 7, 2024

Terrestrial Radio FTW: Are Data Centers The Grimmest of the Grim Climate Issues?

One of the great cognitive dissonances of modern life is the clash between the metaphors of "the Cloud" and the reality of material infrastructure for the internet. The metaphors suggest that the internet is frictionless: no need to put a book or CD on your shelf, or even download any actual data to a place in your actual home; no need to deal with paper or printers or vinyl or cash or AM/FM antennas.

I can't remember when I first learned that far from being frictionless, data centers actually use a huge amount of energy and create a ton of carbon emissions. This article I read recently says that "the Cloud now has a greater carbon footprint than the airline industry." It also describes vividly how data centers create massive amounts of heat, use a ton of air conditioning and water, emit massive noise pollution, and rely on minerals that are unethically sourced and impossible to dispose of properly.

Unlike flying or eating meat, data use gets almost no attention as an ecological issue. Is that because each person's contribution is so small? Is it because we can't imagine life with less internet? Is it because the Cloud metaphors of non-materiality are so embedded in our culture? Is it because tech companies have invested in our ignorance about this topic?

I came face to face with the breadth of non-acquaintance with the issues of data centers recently, when I tried to buy a "terrestrial radio." To explain, first let me back up. When I was young, I used to listen to the radio -- like, regular AM/FM radio. It was a format I loved. I loved the sense that, unlike with records, what I would hear might surprise me. I loved that when something was on the radio, it was shared -- a lot of people would be having the same experience at the same time. I loved that radio had local personalities talking about music and also about other things happening in the specific place where the station was located -- because old school radio was, of course, definitively local.

I thought I would try to reconnect with that textural experience, so I went to audio stores looking for good ways to listen to the radio. I quickly learned "radio" now means not the old AM/FM system but rather a thing where data goes from a data center through an app and into your speakers. I learned that the concept of my youth -- where a local signal goes through the air into an antenna -- is now called "terrestrial radio."

I also learned that no one listens to terrestrial radio and that to be a woman in an audio store interested in "terrestrial radio" is to invite scorn and condescension. Sales guys assumed that I didn't understand technology, that I didn't like apps, or that I thought old-fashioned radio was better. They slowly and patiently explained to me that listening to the radio through the internet was better in every way: the sound quality is better, you can listen to any station in the world, there's no fussing about antennas and signal strength. Their message was clear: "Please stop being ridiculous."

It was too difficult to explain my inchoate sense of the charm of a signal going through the air -- bypassing the internet with all its ridiculous surveillance -- and the strange attraction of being able to choose from among a handful of local stations rather than any station on the planet. So instead of talking about that, I moved straight on to the environment. "But it's more eco-friendly," I said. "The data centers," I said. "Sustainability."

Total blank stare. No one seemed to know about that or have thought about it at all, and their gut reaction was that I was some kind of conspiracy theorist. 

I'm not blaming them -- it isn't even something you would know about unless you went out of your way to learn it. I just say this to emphasize what an outlier view it seems to be to care about this issue.

Anyway, on a personal emotional level, I find the climate impact of data centers extra grim -- even beyond the extremely high bar of grimness for any climate topic, which is really saying something. I think part of the reason is that so much of what we are getting out of data centers seems relatively pointless.

When food, heating, and transportation contribute to climate change, that is alarming, but the visceral importance of these things is obvious to me. Eating, staying warm, traveling to see people or to see the world -- these are human activities we need and want. 

But when I think about data centers I think about email and bitcoin and Google's new ridiculous AI replacement for search. I grew up in the 80s so I know about things that pre-date data centers -- like writing letters, using a card catalogue, and going to the bank. I guess those things were a bit inconvenient, but overall it was fine. As we know, along many metrics, like housing affordability, it was way better.

Radio-wise, I just had a flirtation with Sirius XM. My friend recommended BPM, for dance music, which is, in fact, great. Like old-school radio, it has stations with people talking and playing music. 

But then I remembered the data centers heating up the landscape, stealing all the water, and driving the people away. All that so I could be occasionally surprised by a fun new dance track? I dunno, but it doesn't seem worth it.

Sunday, June 2, 2024

I Learned About Self-Discrepancy Theory, But I Didn't Like It Very Much

I was at a conference last week where I encountered the idea of self-discrepancy theory, which somehow I had never heard of before. In the version I heard about, there were three items which might fail to be in harmony: the person you are (actual), the person you would like to be (ideal), and the person you feel a responsibility to be (ought). Roughly speaking, the idea is that if you harmonize among these, it improves your well-being.

Like a lot of well-being frameworks that focus on coherence, it gave me pause. My textural experience of life has always been more a managed-conflict one than a harmony one. Things that are good in the long run are often not the things I feel like doing at a given moment. Even for things I enjoy, I often have to fight through some inertia to get into them.

When I was in my twenties I did a lot more of what I felt like doing in a given moment, and there was a lot of skipping class, extreme drinking, and cake for lunch. It wasn't good. And even though I actually love exercise once I am doing it, it's taken years of habit-forming to get past the feeling of ugh, this time I don't feel like getting started, maybe I'll just lie down.

I don't think of these as deficiencies or as mental ill-health. It just feels normal to me to engage in a lot of keeping-myself-on-track as a way of doing a) the things I want to do and b) the things I should be doing. Do I really need harmony to be well? 

Also, when I started trying to slot my preferences, attitudes, and actions into the three categories, I was struck to find an almost empty second category of "the person you would like to be."

Those conflicts I just mentioned like cake for lunch versus feeling like a healthy person all seem in the "actual" category. And I have a very full third category of "the person you feel a responsibility to be," with varying interpretations of responsibility. There are commitments to honor, ethical values to uphold, political causes to support, solidarity activities to engage in. Like a lot of people, I feel disharmony between myself as an actual person and the person I feel a responsibility to be. Even in a decent and gentle world, it's not always easy to be a good person, and god knows we do not live in a decent and gentle world.

So while disharmony between "actual" and "ought" is obvious to me, I actually found myself a bit unclear on the concept of the person I would like to be when that ideal is separated from responsibility. What kind of ideal is that?  

I see both an optimistic and a pessimistic interpretation of my empty second category. An optimistic interpretation is that I like myself the way I am. Self-acceptance for the win!

A pessimistic interpretation is that I've taken all the items normally in the second category and moved them into the third -- essentially moralizing all my ideals. Instead of just aspiring to be a certain kind of person, I have put a responsibility spin on it. For example, I don't like cooking, and I often think it would be so nice to be the kind of person who likes to cook, and who does cook. But I tend to think of that in highly moralized terms -- of nurturing/caring for others, and not being wasteful, and saving resources.

Well, as we say around our household, "the one doesn't exclude the other." It can be self-acceptance and moralizing all at the same time.

In any case, while I can see the appeal of some harmony between the person you are and the person you feel a responsibility to be, I'm not sure I want to just be the person I feel a responsibility to be. And from a practical perspective, it doesn't really matter, because I don't know how to increase that harmony anyway. So I guess I'll just keep muddling along with my inner conflict management strategies, and forget about harmony altogether. 

Friday, May 24, 2024

Like Google Search, But For Faces: Clearview AI And The Dreams of Facial Recognition

I was at a talk recently by a computer scientist, and the speaker ended on an ominous note, explaining that she does everything she can to keep pictures of herself off the internet. On her slide was an image: a book called Your Face Belongs To Us.

Your Face Belongs to Us is a non-fiction book by NYT reporter Kashmir Hill about Clearview AI -- a facial recognition company -- which I decided to read immediately. I figured I would learn that facial recognition is possible, widespread, and creepy, and I did. It's no secret that Clearview's algorithm matches faces to a database of more than 20 billion images collected from the Internet -- including from Facebook et al. Upload a picture of a face, and it'll not only tell you who that person is, it will find other pictures of that person -- including pictures where that person is in the background, or much younger, or wearing a mask.

But a few things surprised me. It surprised me -- though it shouldn't have -- that some of the players in facial recognition have beliefs rooted in eugenics. The background dream is not just about recognizing faces, but also about the phrenological predictive possibilities: that the face will reveal the character, so we can prevent crime, and improve the human race, by identifying "degenerates" through their features and eliminating those people from existence.

Chapter 2 of the book gives a short overview of the history of this idea in western culture -- which we know appeared not only in Nazi ideology but also in the work of "progressive" political thinkers since at least the Victorian era, and has persisted. If you want to read more about what Hill describes as "racism masquerading as scientific rigor" and its connections to AI, check out "The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence."

I was also interested to learn that while Clearview is currently used mostly by law enforcement agencies in the US to identify people from surveillance footage or in crowds and protests, a central aim of its creators has been to market the product to private users -- especially business titans and large companies. Not only could you efficiently determine, for example, who should be allowed into a building, based on their face, the marketing possibilities are extraordinary: every time someone walks by, an ad could appear tailored specifically to them, based on their identity as recognized from their face.

That hasn't happened, but it's not because it can't. There have been lawsuits, and strategic restraint, and social pressure from privacy activists making that not happen. But the world in which a billboard is tailored to your Instagram is closer than you think.

Surprisingly often, people bring up a dream of having a pair of glasses that will tell you people's names to avoid embarrassment or seem suave. A Facebook guy describes the "universal experience" of being at a dinner party and seeing someone you know and forgetting their name. They could fix that, Facebook could! If they put their own facial recognition into virtual reality glasses.

It kind of blew my mind that people want to harness a vast, unpredictable, and potentially unethical technology to avoid a moment of awkwardness. I guess that shows what we've all known all along: that innovation happens where money can be made, which means solving the small problems of wealthy people. The tech has not been made available in that way yet, partly for the obvious reason that having people be able to identify strangers in any context is creepy and terrifying: imagine you're a woman in a bar and a guy can look up everything about you, even your address, just from your face. 

Overall, while I knew people -- and especially police -- were identifying faces, and while I knew that was a troubling, potentially evil thing, I hadn't thought of all the ways people might think identifying a face might be useful. Once I saw them, the world of more facial recognition seemed even bleaker and more dystopian than I thought.

I'll just leave you with one final random item. In terms of faces, the biggest surprise to me was about gender and hair: "if two ears are showing," one expert says, "there is an 85 percent chance that the person is male." As a woman with long hair who always wears it up, so my ears are always visible, I just found that interesting along various dimensions.

Friday, May 17, 2024

NYU Ethos Integrity Series: WTF?

Gothamist reported on Wednesday that some student protesters at NYU were being required to complete "modules in a 49-page 'Ethos Integrity Series' that seeks to teach them about 'moral reasoning' and 'ethical decision-making.'"

The story calls attention to the most egregious aspect of the whole thing, which is that the activity as a whole has a "forced confession" quality. They also point out that to complete the exercises, students must "rank a list of 42 values, including patriotism, family, and security and safety, in order of importance to them," and that they have to watch and analyze a Simpsons episode.

I wanted to check it out so I clicked the linked document. It is true that even though each section says "We are not looking for any particular answers to the following questions," the structure of the exercise assumes that the person has done something wrong and that this wrongness is why reflection is required.

To me, this becomes most concrete and obvious in the sections toward the end about "neutralization" techniques, which they say people use to "explain away" their unethical behaviors with excuses like "Well, I didn't think it was bad because..." These "excuses," the document says, are "masking a root problem or a lack of experience and knowledge with regard to ethical decision-making." Their use indicates that a person has "not yet fully committed to always acting with integrity."

The document's discussion of this topic is somewhat confusing, because they go on to "combine" "neutralizations," "justifications," and "rationalizations" in one list of eighteen items. But each of these "18 types of neutralization techniques," they say, "is used as [a] way to explain unethical behavior without having to view oneself as being unethical."

One of those eighteen "neutralization" techniques is:

11. It is Necessary; The Ends Justify the Means; It is for a Good Cause (admits act and takes responsibility – does not see act as bad, conversely see act as good).

This seems to be saying that if you do what you think is best in a circumstance, and it violates a rule, then you are unethical.

But isn't breaking a rule in service to a higher good often seen as a component of admirable moral behavior? Isn't there a whole ethical theory, consequentialism, based on the idea that ethical actions are ones that promote the best overall outcomes -- so if the ends are good enough, the ends always justify the means? Aren't endless wars and things justified by saying things like "You can't make an omelette without breaking a few eggs"?

There are other absurdities. Of course it is silly to have to rank 42 vague items in an ordered list, but more importantly, ethical values just don't work that way. No one puts patriotism over family all the time or vice versa, or puts security or anything else over every other consideration all the time. If you value honesty, you might think it's OK to tell a trivial lie for an important purpose and not OK to tell a lie about something important for a trivial purpose. If you value security, you might lock your door when you go to bed, but are you really going to build a safe room with locks that you never leave? Anyone who prioritized one value over 41 others all the time would be thought to have lost the plot.

I also object to their idea that "Unexamined values are 'bad' values" because "If you do not know how you got your values or why you (still) have them, how do you actually know that these are your values?" Most people get their values from family, culture, and peer group. The idea that thinking and reflecting somehow elevates you into a higher plane of being is unjustified. Plus, what do you reflect on, exactly? If you were raised to be honest, and you are honest, what is the reflection question? Is it "Do I really value honesty"? Or "Is honesty really valuable"? Neither is a recipe for being a better person.

Some of the sources for the document seem to be institutes designed to help organizations improve compliance from their members, which I guess is not surprising under the circumstances, but you'd think a university could do better.

One of their linked sources -- and the one they cite for the inspiration that "the ends justify the means" is a "neutralization" -- is an institute whose linked website lists five "reasons to be ethical." Those five reasons are: inner benefit, personal advantage, approval, religion, habit. I'll just leave those here for you all to ponder.

Friday, May 10, 2024

What Can A Person Wear?

I wear a lot of athleisure-wear -- because I like the way it looks and fits, because it lasts forever, and because I do, in fact, engage in a lot of athletic activities. The main problem with most athleisure-wear is that it's made of plastic, and we now know that plastic clothes are causing tiny plastic fibers to pollute everything from oceans to breast milk.

A few weeks ago I had a minor freak-out about my plastic clothes and the plastic microfibers. There are so many horrible things happening, but I think my brain fastened on this one because the causal link between me and the outcome is so direct: I wear the clothes, I wash the clothes, the tiny fibers fly out into the world and ruin everything. I could stop doing that.

If you follow this issue at all, you know it is not simple. Producing cotton uses a ton of natural resources and energy; raising sheep for wool is an eco-disaster as well as often cruel to the sheep. I looked up silk and OK, how did I not know that making silk requires dissolving massive numbers of silkworms in boiling water?  

Of course, you can get around the production problem by buying used items. I could go to thrift stores and consignment shops and try to find the few things with all-natural fibers, and just wear those, and I could learn how to mend and patch them so they don't have to be tossed as they start to wear out, as natural fibers so easily do.

I could do that, but I have not done that. Why not? I could say I've been busy, a stock answer that is also true, but I think the real reason goes deeper. The truth is that I avoid natural fabrics because I think they won't look good on me.

Natural fibers are mostly non-stretchy. I'm not super curvy, but I am moderately curvy. My experience with natural fabric clothing is that it either hangs like a giant pillowcase over my body or it bunches and binds in the ugliest way around my breasts, hips, and stomach.

Now, I know the answer to this as well: tailoring. You read any serious piece about fashion and fit and they will tell you that to look good, you have to get your clothes tailored to fit your body by someone who knows what they are doing.

I can imagine a world in which that is a standard activity that I could engage in, but our world is not that world. Last time I wanted pants made shorter, most places I went wanted me to have pinned them up myself beforehand. I found a place that would do the fitting part, and it took weeks, a couple of follow-up nudge calls, and several trips there to get it all done. Plus, what if I gain or lose a few pounds? Am I going to get things perfectly fitted around my torso then be unable to wear them a month later? Ugh.

Put in the starkest terms, where we end up is this: I could dress more sustainably by buying and wearing used, shapeless items, and just not looking good.

If you put it that way, my choices seem ridiculous and monstrous. Am I seriously choosing to contribute to the destruction of the natural environment because I want to look cuter?

But I am obviously not alone in making these choices. Almost all clothes now have plastic in them. This morning I dug an old denim jacket out of my closet -- genius, I thought, not even requiring a purchase! And not only did it not look good -- it turned out to be part cotton and part elastane.  

In the short term, I decided to buy a couple of Guppyfriend bags -- bags you can wash your plastic clothing in. You put the clothes in the bag in the washing machine, the fibers get caught in the bag, like little pieces of lint, and then you can collect them and put them in the garbage so they won't go into the wastewater. At least, that is the idea. The bags are taking forever to ship, so I don't know how they will work.

It's a lame solution, like so many modern solutions. The plastic fibers will still be out there -- they'll just be in the marginally more appropriate place of a landfill rather than our drinking water.

Friday, May 3, 2024

The Dream of Predictability and Control: I Don't Like It

It took me a long time to realize that there are people who not only believe that human social life is determined by a few, simple, underlying principles, but actually want that to be true. Paul Krugman, for example, told The New Yorker years ago that he was influenced as a young person by reading Asimov's Foundation series. I haven't read those books, but my friend described them as centering on a set of heroic historians "armed with math skills and big computers" who can see where their society is heading centuries ahead of time. Not surprisingly, when Krugman took up a history major at school, he was disappointed. So many details, no grand narrative.

Leaving aside the philosophical question of whether it is true or useful to think that way, let me speak from the crudest, most basic emotive perspective. On a gut level, I don't like it. It seems boring and dull. Who wants to live out a life watching predictable people do predictable things you've already predicted? But it also seems vaguely frightening. If it's other people who have the knowledge, their control over you is vast. Forget being the wily underdog in a fight: your wiles count for nothing against these people.

So gut-instinct-wise, I've always been on the side of unpredictability and complexity. One of my early feelings in this regard was in reaction to Gödel's Incompleteness Theorems. The First Theorem says that "no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system."

In other words: if you try to write down the basic assumptions of math, no matter how you choose them, there will always be statements that are not provable or disprovable from those basic assumptions.

It took me a long time to realize that there are people who find this result disappointing and even "nihilistic": that "because there were truths that weren’t provable, nothing mathematical was truly knowable." Because I have always found it not only beautiful but also inspiring and even comforting. So did Gödel, evidently. This biography review says he "drew optimistic inferences ... choosing to emphasize that there would always be new mathematical truths to discover."  

Admittedly, since Gödel probably thought the mathematical objects were "out there" waiting to be discovered, and I don't, our joy probably takes on a different tinge. Rather than "building new paths to the truths ... out there, waiting to be found," I'm more likely to think that when you're choosing fundamental axioms and none of them is more obvious than any other, you're getting into real human judgment and culture territory. And I love that sense that you go far enough into math, what you get is people being like "wait, do the cardinal infinities and the ordinal infinities relate to each other this way? or that other way?"

The past few years, there's been a new version of the thing I don't like: instead of "a few, simple, underlying principles," it's Big Data. Get enough data points, the thinking goes, and you're going to finally figure stuff out. Predictability, prediction, control. No longer will we have the forces of chaos and who-knows-what's-going-to-happen.

Leaving aside the philosophical question of whether it is true or useful to think that way, let me speak from the crudest, most basic emotive perspective. On a gut level, I don't like it. Like the "few, simple principles" thing, it seems boring but also frightening. But unlike the "few, simple principles" thing, it feels like a con: like a thing people will try to get you to believe, even when it isn't true.

Obviously I am pro-using data to solve problems. And I'm sure we will solve problems -- like how to treat cancer patients more effectively. Yes, bring it on, please.

But the weird excitement and optimism about how everything is going to change because now we've got a real handle on things -- that is what freaks me out. We don't, and thinking we do feels not only annoying and creepy, but also disturbing and menacing.