When I heard that Facebook was going to try to "do something" about fake news, my first thought was "Oh yeah, that'll end well. What could go wrong?"
It's not that lies, hoaxes, and misinformation aren't a problem (and by the way, what was wrong with "lies," "hoaxes," and "misinformation"?) -- it's just that truth and factuality are not simple problems, they're not algorithmic problems, and they're not problems you can take a "neutral" stand on. Yet you know Facebook will try to treat them as if they are.
The first example that came to mind was the aftermath of the protests in Ferguson after the shooting of Michael Brown. I got a lot of my news from following people on Twitter who were there -- some, like Jelani Cobb, professional journalists, and others not.
Often, what I read on Twitter did not match up with what I read in the mainstream press. The press interviewed the police or asked government officials to comment, and those sources naturally had a vested interest in portraying the protestors as the initiators of violence. Reports from people on the ground emphasized the militarized police response and the number of peaceful protestors doing things like protecting property and cleaning up.
When the reports of the citizens on the ground don't match up with official reports or reports in the news, who are you going to trust to tell you the truth? And when, and why? These are difficult questions. But do you really want Facebook answering them for you?
Maybe you'd say this is just for the simpler, more straightforward cases (like, you know, "lies," "hoaxes," and "misinformation"). But that's not how the response to "fake news" has been shaking out so far. Maybe you've heard about the "B. S. Detector" that claims to "alert users to unreliable news sources." One of the first things that happened was that the site Naked Capitalism got incorrectly tagged as a "fake news" site. In fact, what Naked Capitalism offers is in-depth analysis of current events that sometimes diverges from official positions and the mainstream media.
Are we really so far down the rabbit hole that we want social media companies to pronounce on what is and is not a legitimate critique of government statements or the New York Times?
Facebook, in fashion characteristic of the tech industry, wants to address the problem of "fake news" while also maintaining "neutrality." As we've discussed before, the dream is to off-load judgments onto users so that algorithms can solve all problems and no value judgments have to be explicitly endorsed. And as we've discussed before, this is impossible: there is no "value-free" way to offload judgments about what is and is not acceptable speech, or what does and does not constitute unacceptable forms of discrimination, or what is or is not sexist, racist, and so on. If you let users decide, you're often going to get an outcome that goes horribly wrong.
I had to laugh when I learned that the term Facebook is going to use for hoaxes, lies, and misinformation is "disputed." For one thing, could anything be a more obvious attempt to sound "neutral"? It's like, "WE'RE not saying there's a problem. But SOMEBODY out there is disputing this."
In a more sinister vein, when it comes to actual hoaxes, lies, and misinformation, doesn't "disputed" actually seem like it would add an air of legitimacy? One of the more interesting things I read about "fake news" was how that guy in California created a ton of fake news -- like "FBI Agent Suspected In Hillary Email Leaks Found Dead In Apparent Murder-Suicide" -- and earned a ton of money. You can read an interview with him here. But then it turns out that teens in Macedonia (and presumably people all over the world) are creating fake news just for profit.
Isn't using the term "disputed" to describe "FBI Agent Suspected In Hillary Email Leaks Found Dead In Apparent Murder-Suicide" heading off in the wrong direction entirely? Doesn't that make it sound like some obscure point about Benghazi, where some partisans think one thing happened and some partisans think another thing happened, but who really knows? Doesn't it make it sound like a possibly legit thing? When, in fact, it is just a hoax, a set of lies!
The problem of truth and belief goes way beyond algorithms and neutrality and involves complicated issues of community and trust. When the New York Times ran a whole article explaining why "pizzagate" was based on a set of lies, do you think people who believed in pizzagate said to themselves, "Oh, I guess pizzagate wasn't true"? Of course not. They went and wrote articles debunking the debunking. Just a few days ago there was a protest in DC with people demanding an inquiry.
Looking up pizzagate on Wikipedia, I see a journalist quoted as saying that pizzagate is "two worlds clashing. People don't trust the mainstream media anymore, but it's true that people shouldn't take the alternative media as truth, either." This is aptly said. People trust different sources. No algorithm is going to deal with that problem.
If Facebook's proposed solution is to add a "disputed" tag to posts, potentially undermining citizen reports that contradict official news, and legitimating things that are lies and hoaxes in the first place -- well, it seems to me this may well do more harm than good. Maybe Facebook should stay out of the social epistemology business altogether.