I don't know if you saw this recent Guardian article about how conspiracy theorists are hounding the victims of shootings and making their lives unlivable. People who survived the Vegas shooting, parents whose kids were killed at Sandy Hook, a guy whose daughter was shot live on TV -- for some reason a sizable number of people take it upon themselves to harass, threaten, demean and degrade these victims. They create video commentary with inane remarks about whether the person had a weird expression on their face; they post endless speculation about alternative theories; and they target victims on social media with comments like "you fake asshole, I hope you die soon, you deserve to get shot for real."
This turns out to be a difficult problem to solve. Social media companies like Google, which owns YouTube, are trying to maximize engagement with their sites, while the law does not hold content providers legally responsible for what users share on their platforms. Tech companies are trying to craft ethical guidelines, but the work is massive and difficult to scale.
For example, a person can flag a video and recommend it be taken down, but this is described as being "like trying to kill roaches with a fly swatter." The guy whose daughter was shot on live TV decided to try to flag the annotated videos of his daughter being shot -- but he just couldn't do it. Even looking at thumbnail after thumbnail of the video was too much for him.
Not unreasonably, he and other victims would like Google to do something about it. Instead of waiting for victims to flag the videos, couldn't they be proactive? Couldn't they, perhaps, hire people to search for and delete these harassing videos?
The answer is no. As an MIT computer scientist put it: "Would they need to hire someone else to handle all the white supremacist harassment, and someone else to handle all the gender harassment? It's an issue of scale."
To me, it's not surprising that this is a difficult problem. Distinguishing harassing content from non-harassing content is not easy, and there are reasons it's difficult to automate. One reason is that it's a matter of judgment, and judgment reflects a point of view. For instance, posting a video calling the Vegas shooting a hoax and the heroic survivors who saved lives "lying cunts" (as people did) is harassment. But posting a video about whether the government lied about weapons of mass destruction in Iraq is not -- it's political speech. And it's political speech whether or not you use words like "cunt."
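To make that concrete, here's a toy sketch (my own illustration, in Python -- not anything Google or YouTube actually runs) of why surface features can't settle the question. A naive keyword filter flags the harassing post and the political one identically, because they contain the same words; what separates them is a judgment about targets and context.

```python
# A toy keyword filter -- a deliberately naive sketch, not a real
# moderation system. It judges content by surface features alone.

BLOCKLIST = {"cunt", "hoax", "liar"}

def naive_flag(text: str) -> bool:
    """Flag text if any blocklisted string appears anywhere in it."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

# Harassment aimed at a shooting survivor:
harassment = "These survivors are lying cunts and crisis actors."

# Political speech using the same vocabulary:
political = "The WMD story was a hoax; we should be free to call officials liars."

print(naive_flag(harassment))  # True
print(naive_flag(political))   # True -- same surface features,
                               # very different moral status
```

The filter can't tell the two apart, and no amount of tuning the word list will fix that, because the difference isn't in the words.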
Judgments like these reflect points of view on the world and involve actual normative and ethical judgments. They reflect judgments about what is true, but also about what matters, how much, and why, and about when being called a liar is a harm and how harmful it is. For complicated reasons, our society tends to prefer meta-normative principles to normative judgments. Like, we seem happy debating the norms of engagement and debate -- "free exchange of ideas" versus "words can harm people" -- but seem reluctant to acknowledge that the hard cases often come down to actual judgments about actual particulars.
For example, the Guardian article mentions that "YouTube ... has no policy against conspiracy or hoax theory videos in general." As if the judgment call -- this hoax theory/conspiracy theory versus that one -- were irrelevant and the issue could be decided by metanorms: hoax theory/conspiracy theory versus not that.
But that doesn't seem right to me. Actually, for YouTube to have a policy against hoax theories and conspiracy theories in general would be outrageous. Governments, corporations, and individuals lie all the time. Of course we should be allowed to call them liars. It's not an issue you can decide with metanorms like hoax theories versus no hoax theories.
Among all the heartbreaking things in this article, like the Vegas guy who saved someone's life and whose name now autocompletes with "crisis actor" in the search bar, one of the most discouraging is the woman who, when contacted by the Guardian, expresses her regret. On a GoFundMe page for the Vegas victim, she called him "beyond fake," saying he was guilty of the "worst acting" she had ever seen. When contacted by the Guardian a few weeks later, she said she "had never attacked the family" and was just searching for answers. Reminded that she had posted a meme on the Facebook page of the victim's brother that called the victim a "lying cunt," she said she didn't remember and must have been caught up in the moment.
"I do feel bad," she said. "They are people, just like everybody else. Who am I to be calling anybody any kind of names?" Asked if she regrets the attacks, she said: "I 100% do, and if I could apologize to them, I would."
Often when I disagree with people, I feel I can sort of see inside their worldview, or understand where they are coming from. But this left me utterly perplexed, sad, and bereft. How can a person be so hurtful and awful, just for no reason?
1 comment:
How can a person be so awful and so hurtful for no reason? This is a good question. If this person lives in a society where hurting others is considered a win, is she a psychopath or a shallow vessel just swallowing media memes? How responsible is an individual in a sick society? She is responsible, and so are the drivers of social values.