Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
December 12, 2016

There has been a lot of discussion lately about “fake news,” which appears to have circulated with fierce velocity on social media throughout this past election season. This has prompted calls for the likes of Facebook and Google to fix the problem.

What are we to think of this from a free speech and civil liberties perspective?

With Facebook, which has been a particular subject of calls for reform, there are actually two issues that should be thought about separately. The first involves Facebook’s “Trending News” section, which was the subject of a flap earlier this year when it emerged that it was actually edited by humans, rather than being generated by a dumb algorithm that simply counted up clicks. A former employee alleged that the human curators were biased against conservative material. In the wake of that controversy, Facebook took the humans out of the loop, making the “Trending News” more of a simple mirror held up to the Facebook user base showing them what is popular.

As I said in a blog post at the time, I’m ambivalent about this part of the fake news controversy. On the one hand, it can be valuable and interesting to see what pieces are gaining circulation on Facebook, independent of their merit. On the other hand, Facebook certainly has the right, acting like any publisher, to view the term “trending” loosely and publish a curated list of interesting material from among those that are proving popular at a given time. One advantage of their doing so is that crazy stuff won’t get amplified further through the validation of being declared “News” by Facebook. A result of the decision to take human editors out of the loop is that a number of demonstrably false stories have subsequently appeared in the “Trending News” list.

But Facebook plays a separate, far more significant function than their role as publisher of Trending News: it serves as the medium for a peer-to-peer communications network. I can roam anywhere on the Internet, get excited by some piece of material, brilliant or bogus, and post it on Facebook for my Friends to see. If some of them like it, they can in turn post it for their Friends to see.

The question is, do we want Facebook, in its role as administrator of this peer-to-peer communications network, to police the veracity of the material that users send each other? If I can’t post something stupid on Facebook, I can telephone my friends to tell them about it, or text them the link, or tell them about it in a bar. Nobody is going to do anything to stop the spread of fake news through those channels. Facebook doesn’t want to get into that business, and I don’t think we want them to, either. Imagine the morass it would create. There would be easy, clear cases, such as a piece telling someone to drink Drano to lose weight, which is not only obviously false but also dangerous. But there would also be a thicket of hard-to-call cases. Is acupuncture effective? Are low-carb diets “fake”? Is barefoot running good for you? These are examples of questions where an established medical consensus may once have been confidently dismissive, but which are now, at a minimum, clouded with controversy. How is Facebook to evaluate materials making various claims in such areas, inevitably made with highly varying degrees of nuance and care—let alone politically loaded claims about various officeholders? Like all mass censorship, it would inevitably lead the company into a tangle of inconsistent and often silly decisions and troubling exercises of power. It might sound easy to get rid of “fake news,” but each case would be a specific, individual judgment call, and often a difficult one.

The algorithm
It is true that in some ways Facebook already interposes itself between users and their Friends—that unlike, say, the telephone system, it does not serve as a neutral medium for ideas and communications. If Facebook got out of the way and let every single posting and comment from every one of your Friends flow through your newsfeed, you would quickly be overwhelmed. So they use “The Algorithm” to try to assess what they think you’ll be most interested in, and place that in your feed. The company says this algorithm tries to assess content for whether it’s substantive, whether you’ll find it relevant to you personally based on your interests, and also how interested you are in the Friend who posted it, based on how often you click on their stuff (Facebook actually assigns a number to each of your Friends, a kind of “stalking score” that indicates how interested you seem to be in each of them).
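To make that concrete, here is a minimal, purely illustrative sketch of the kind of ranking just described. It is not Facebook’s actual code or scoring formula; the field names, weights, and the per-friend “affinity” table are all invented for illustration.

# Illustrative sketch only -- not Facebook's actual ranking code. It combines
# hypothetical estimates of how relevant and substantive a post is with a
# per-friend "affinity" weight (the "stalking score" mentioned above).

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    relevance: float   # hypothetical 0-1 estimate of relevance to this user
    substance: float   # hypothetical 0-1 estimate of how substantive the post is

# Hypothetical per-friend affinity weights, e.g. derived from how often
# the user clicks on each friend's posts.
AFFINITY = {"alice": 0.9, "bob": 0.3, "carol": 0.6}

def rank_feed(posts):
    """Order candidate posts by a simple combined score (highest first)."""
    def score(post):
        return AFFINITY.get(post.author, 0.1) * (0.5 * post.relevance + 0.5 * post.substance)
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("alice", relevance=0.4, substance=0.7),
    Post("bob",   relevance=0.9, substance=0.9),
    Post("carol", relevance=0.6, substance=0.5),
])
print([post.author for post in feed])  # ['alice', 'carol', 'bob']

Note how a high-affinity Friend’s middling post can outrank a stranger-to-you Friend’s stronger one—which is exactly the kind of editorial-by-math power the rest of this post is concerned with.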

Facebook provides some details on how its algorithm works on its company blog. Some of those mechanisms already arguably constitute censorship of a sort. For example, the company heavily demotes items with headlines that it judges to be “clickbaity,” based on a Bayesian algorithm (similar to those used to identify spam) trained on a body of such headlines. That means that if you write a story with a headline that fits that pattern, it is unlikely to be seen by many Facebook users because the company will hide it. Since January 2015 Facebook has also heavily demoted stories that it suspects are “hoaxes,” based on their being flagged as such by users and frequently deleted by posters. (That would presumably cover something like the Drano example.)
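For readers curious what a “Bayesian algorithm trained on a body of such headlines” looks like in practice, here is a minimal sketch using the same family of technique that spam filters use (naive Bayes over word counts). The training headlines and labels below are invented placeholders; Facebook’s actual model, features, and training data are not public.

# Minimal sketch of a naive Bayes headline classifier, the general family of
# technique described above. The tiny training set here is an invented
# placeholder; a real system would train on a large labeled corpus.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

headlines = [
    "You won't believe what happened next",        # clickbait-style examples
    "This one weird trick will change your life",
    "Doctors hate her for this simple secret",
    "Senate passes appropriations bill",            # straight-news examples
    "City council approves new transit budget",
    "Researchers publish study on heart disease",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = clickbait, 0 = not clickbait

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(headlines)             # bag-of-words counts
model = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["You won't believe this one weird trick"])
print(model.predict_proba(test))                    # [P(not clickbait), P(clickbait)]

A headline the model scores as likely clickbait would then simply be shown to fewer users—which is why even this comparatively mundane mechanism amounts to a quiet editorial judgment.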

Most of this interference with the neutral flow of information among Friends is aimed at making Facebook more fun and entertaining for its users. Though I’m uncomfortable with the power they have, I don’t have any specific reason to doubt that their algorithm is currently oriented toward that stated goal, especially since it aligns with the company’s commercial incentives as an advertising business.

There are of course very real and serious questions about how Facebook’s algorithmic pursuit of “fun” for its users contributes to the Filter Bubble, in which we tend to see only material that confirms our existing views. The difference between art and commerce has been defined as the difference between that which expands our horizons by getting us out of our comfort zone—i.e. by making us uncomfortable—and that which lets us stay complacently where we already are with pleasing and soothing confirmations of our existing views. In that, Facebook’s Newsfeed is definitely commerce, not art. It does not pay to challenge people and make them uncomfortable.

But for Facebook to assume the burden of trying to solve a larger societal problem of fake news by tweaking these algorithms would likely just make the situation worse. To its current role as commercially motivated curator of things-that-will-please-its-users would be added a new role: guardian of the social good. And that would be based on who-knows-what judgment of what that good might be at a given time. If the company had been around in the 1950s and 1960s, for example, how would it have handled information about Martin Luther King, Malcolm X, gay rights, and women’s rights? A lot of material that is now seen as vital to social progress would then have been widely seen as beyond the pale. The company already has a frightening amount of power, and this would increase it dangerously. We wouldn’t want the government doing this kind of censorship—that would almost certainly be unconstitutional—and many of the reasons that would be a bad idea would also apply to Facebook, which is the government of its own vast realm. For one thing, once Facebook builds a giant apparatus for this kind of constant truth evaluation, we can’t know in what direction it may be turned. What would Donald Trump’s definition of “fake news” be?

The ACLU’s ideal is that a forum for free expression that is as central to our national political conversations as Facebook has become would not feature any kind of censorship or other interference with the neutral flow of information. It already does engage in such interference in response to its commercial interest in tamping down the uglier sides of free speech, but to give Facebook the role of national Guardian of Truth would exponentially increase the pitfalls that approach brings. The company does not need to interfere more heavily in Americans’ communications. We would like to see Facebook go in the other direction, becoming more transparent about the operation of its algorithms to ordinary users, and giving them an ever-greater degree of control over how that algorithm works.

The real problem
At the end of the day, fake news is not a symptom of a problem with our social-communications sites; it is a societal problem. Facebook and other sites are just the medium.

Writing in the New Yorker, Nicholas Lemann looks beyond information regulation by Facebook to another possible solution to the fake news problem: creating and bolstering public media like the BBC and NPR. But whatever the merits of public media may be, the problem today is not that there aren’t good news outlets; the problem is that there is a large group of Americans who don’t believe what those outlets say, and have aggressively embraced an alternate, self-contained set of facts and sources of facts. This is not a problem that can be fixed either by Mark Zuckerberg or by turning PBS into another BBC.

There are two general (albeit overlapping) problems here. The first is simply that there are a lot of credulous people out there who create a marketplace for mercenary purveyors of fake news, which can be about any topic. The timeless problem of gullible people has been exacerbated by the explosion of news sources and people’s inability to evaluate their credibility. For much of the 20th century, most people got most of their news from three television networks and a hometown newspaper or two. If a guy was handing out a leaflet on a street corner, people knew to question its value. If he was working for their union or for the Red Cross, they might trust him. If he was a random stranger, they might not. The wonderful and generally healthy explosion of information sources made possible by the Internet has a downside, which is that it has collapsed the distinctions between established newspapers and the online equivalent of people handing out material on street corners. The physical cues that signal to people whether or not to trust pamphleteers in the park are diminished, and many people have not yet learned to read them.

We can hope that someday the entire population will be well-educated enough to discriminate between legitimate and bogus sources online—or at least will adapt and learn to be as discriminating online as it is natural to be offline. But until that day arrives, gullibility will remain a problem.

The second problem is the existence of a specific political movement that rejects the “mainstream media” in favor of a group of ideological news outlets like Breitbart and Infowars—a movement of politically motivated people who eagerly swallow not just opinions but also facts that confirm their views and attitudes and aggressively reject anything that challenges those views. Left and right have always picked and chosen from among established facts to some extent, and constructed alternate narratives to explain the same facts. But what is new is a large number of Americans who have rejected the heretofore commonly accepted sources of the facts that those narratives are built out of. The defense mechanisms against intellectual challenge by those living in this world are robust. I have encountered this in my own social media debates when I try to correct factual errors. When I point posters to a news article in a source like the New York Times or Washington Post, I am told that those “liberal mainstream media sources” can’t be trusted. While these sources certainly make mistakes, and like everyone are inevitably subject to all kinds of systemic biases in what they choose to publish and how they tell stories, they are guided by long-evolved professional and reputational standards and do not regularly get major facts wrong without being called to task. When I point people to the highly reputable fact-checking site Snopes, I am told that it is “funded by George Soros,” and for that reason can apparently be dismissed. (This is itself a false fact; Snopes says it is entirely self-funded through advertising revenues.)

This phenomenon has been called “epistemic closure.” While originally a charge leveled at intellectuals at Washington think tanks, it is an apt term for everyday readers of Breitbart and its ilk who close themselves off from alternate sources of information.

This is not a problem that can be fixed by Facebook; it is a social problem that exists at the current moment in our history. The problems with bogus material on Facebook and elsewhere (and their as-yet-undetermined role in the 2016 election) merely reflect these larger societal ills. Attempting to program those channels to somehow make judgments about and filter out certain material is the wrong approach.

Note: I participated in a panel discussion on this issue at the 92nd Street Y in New York City on Tuesday, which can be seen here.

Update (Dec. 16, 2016):
A followup blog post on changes announced by Facebook to its service has been posted here.
