Facebook and the Fake News Problem for U.S. Democracy

Facebook’s admirable mission has been to give people the power to share . . . “in order to make the world more open and connected.” In 2007, CEO Mark Zuckerberg wanted a global presence, and one key strategy was the “News Feed,” designed to be a customized view of events for people all over the world. Driven by an algorithm that served each user the content they most wanted to see, the feed was built to keep users scrolling and “liking”: the longer users stayed engaged with the News Feed, the more advertisements they saw, and the more valuable Facebook became to advertisers.

The addition of the “Like” button in 2009 allowed News Feed to collect vast amounts of users’ personal data. Its growth was aided by existing laws that didn’t hold Internet companies liable for the quality of the content on their sites. Facebook did set up some basic rules, such as “basic decency, no nudity and no violent or hateful speech.” Beyond that, however, Facebook felt a “reluctance” to impose its value system on a “worldwide community that was growing.”

In 2012, COO Sheryl Sandberg’s “growth team” began developing new ways to collect personal data from users wherever they went on the Internet. Several months before its IPO, Facebook revealed its first relationships with data brokers, companies that buy data about every one of us: what we buy, where we shop, where we live, what our traffic patterns are, what our families are doing, what our likes are, and what magazines we read, data that we don’t even know is being collected. That data is also shared with Facebook so that the company can better target ads back to the user.

David Vladeck was in charge of consumer protection at the Federal Trade Commission from 2009 to 2012, and was investigating whether Facebook had been deceiving its users. He learned that Facebook had been sharing users’ personal data with third-party developers, companies that built games and apps for the platform. From the FTC’s viewpoint, it was fine for Facebook to collect this data, but sharing it with third parties without users’ consent was a violation of consumer privacy rights. Facebook was not making it clear to its users how much of their personal data would be shared with third parties; nor was Facebook keeping track of what the third parties were doing with that data. Under pressure from the FTC, Facebook entered into a Consent Order that required the company to identify risks to personal privacy and to plug those gaps quickly. But with its IPO on the horizon, Facebook’s leaders were under pressure to keep monetizing all that personal information, not just fix the FTC’s privacy issues.

In May 2012, the world’s largest social network raised more than $18 billion, making it the largest technology IPO in U.S. history.

Meanwhile, at DARPA (the Defense Advanced Research Projects Agency), program managers saw that this detailed information about Facebook users could be used to manipulate those users, not just by advertisers but also by others with political motives. DARPA’s fears were already playing out at a secret propaganda factory in St. Petersburg, Russia, called the Internet Research Agency.[1] Hundreds of Russian operatives were using social media to attack the new government in neighboring Ukraine, which had turned away from Moscow. The propaganda had its intended effect, helping to sow distrust and fear of that new Ukrainian government.

Dmytro Shymkiv, Advisor to the President of Ukraine from 2014 to 2018, went to Facebook with his concerns. He told the company that propaganda was coming from fake accounts, meaning that someone at the Internet Research Agency in St. Petersburg, Russia, was pretending to be Ukrainian and spreading false and hateful messages about the Ukrainian government. Executives told him that Facebook was a “pro-democracy platform” where anybody could say anything.

By 2016, Russia was continuing to use social media as a political weapon, and not just in Ukraine. Division and polarization were running through the U.S. presidential campaign as well, where an estimated 39% of Americans were getting their election news and decision-making material from Facebook. Among the false stories that showed up on Twitter and Facebook was the claim that Hillary Clinton was running a child sex ring out of a D.C. pizzeria.[2] Many people believed this story to be true. But, unlike traditional media companies, Facebook didn’t see itself as responsible for ensuring the accuracy of the news and information on its site.

Instead of an editor deciding what was most important or checking for accuracy, Facebook’s de facto editor was its algorithm, designed to feed users whatever was most engaging to them. Inside Facebook, this was not seen as a problem. As Zuckerberg often said, Facebook is a technology company, not a media company.

Not all the “fake news” was created for overtly political purposes. Young men in Macedonia learned that they could make money by producing “news” that people would enjoy: fabricated stories claiming, for example, that Hillary Clinton had been indicted, that the Pope had endorsed Trump, or that Clinton was selling weapons to ISIS.[3] These stories were drawing close to or above a million Shares, Likes, and comments, more engagement than The New York Times received for its scoop about Donald Trump’s tax returns.

Hyper-partisan group pages also began showing up: Facebook pages that racked up Likes and Shares by ramping up the partisanship. Beyond a “we’re right and they’re wrong” approach, there was also raw hatred of the “other side,” confirming viewers’ disposition to believe that “the others” were “terrible people” worthy only of contempt. These pages were getting tremendous engagement with “news” like “a million migrants are coming over the wall and they’re going to rape your children.” As an early investor told PBS, polarization was the key to the algorithm: appealing to emotions like fear and anger would create greater engagement and confirm pre-existing biases, and for Facebook that meant more time on site, more sharing, and therefore more advertising value.
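
The dynamic the investor describes can be pictured as a simple engagement-weighted ranking loop. The sketch below is purely illustrative and is not Facebook’s actual system: it assumes hypothetical posts with made-up predicted probabilities of a like, share, or comment, and it orders a feed by a single engagement score, with accuracy nowhere in the calculation.

```python
# Illustrative sketch only -- NOT Facebook's actual ranking code.
# Hypothetical posts carry made-up predicted probabilities of a reaction,
# and the feed is sorted purely by expected engagement.

from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    p_like: float     # predicted probability the viewer reacts
    p_share: float    # predicted probability the viewer shares
    p_comment: float  # predicted probability the viewer comments

def engagement_score(post: Post) -> float:
    # Weighted sum of predicted reactions; the weights are invented for illustration.
    return 1.0 * post.p_like + 3.0 * post.p_share + 2.0 * post.p_comment

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; truthfulness never enters the calculation.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("City council approves budget", 0.05, 0.01, 0.02),
        Post("Outrage: 'they' are coming for your children", 0.30, 0.25, 0.20),
        Post("Local library extends weekend hours", 0.08, 0.02, 0.03),
    ])
    for post in feed:
        print(f"{engagement_score(post):.2f}  {post.headline}")
```

Because nothing in the score rewards accuracy, the most inflammatory (and possibly false) item rises to the top, which is exactly the incentive problem described above.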

After U.S. intelligence agencies reported in early 2017 that Russians had deliberately sought to influence the 2016 U.S. elections, Facebook got more serious about the issue. According to Alex Stamos, Facebook’s Chief Security Officer from 2015 to 2018, many at Facebook were enlisted to work long hours going through many thousands of false postings. They eventually found that a large cluster of these came from the Internet Research Agency in St. Petersburg. The IRA created the appearance of legitimate social movements: it would create, for example, a pro-immigration group and an anti-immigration group, and both of those groups would be almost caricatures of what the two sides think of each other. The goal was to amplify these fault lines in the U.S. electorate in the hope that Americans would trust each other less and less. James Clapper, Director of National Intelligence from 2010 to 2017, commented that “. . . the Russians exploited that divisiveness . . . because they had, they had messages for everybody. You know, Black Lives Matter, white supremacists, gun control advocates, gun control opponents. It didn’t matter. They had messages for everybody.”

In Myanmar, Buddhists were inciting hatred and violence against Muslims through social media and mainstream media.[4] One report, shared widely on Facebook, claimed that a Muslim man had raped a Buddhist woman; it later proved to be false. In 2017, David Madden, a tech entrepreneur living in Myanmar, tried to bring this to Facebook’s attention, alerting the company to the widespread misinformation, rumors, and extremist propaganda that was fueling the genocide of Muslim Rohingyas. He met with Facebook executives that year, but with little result. Later, the United Nations would call the violence against Rohingyas in Myanmar a genocide and would find that social media, and Facebook in particular, had played a significant role.

Facebook says it has taken down problematic accounts in Myanmar, hired more language experts, and improved its policies. But Congress has been less than assured by these efforts.

Christopher Wylie knew that a political consulting firm he had worked for, Cambridge Analytica, had been using the personal data of more than 50 million Facebook users to try to influence voters. After Wylie went public in 2018, Facebook banned the firm from its platform and announced it would end its direct work with data brokers. Still, the public uproar was intense enough that in April 2018 Mark Zuckerberg was called before Congress in what would become a reckoning over Facebook’s conduct, its business model, and its impact on democracy. Prior to the hearing, Facebook hired a consulting firm, McPhetridge & Associates. Sometime in the summer of 2019, the firm concluded (the good news!) that the company could devote sufficient resources to ensure that only a negligible number of false and inflammatory claims would appear on Facebook’s News Feed, domestically and globally; but (the bad news!) doing so would likely result in a 15% decline in News Feed’s ad-related income over the next three years. Whether income from the News Feed would ever recover, the consulting firm could not say.

After years of unchecked growth, the talk in D.C. is now increasingly about how to rein in Facebook. In Europe, there is already a new Internet privacy law, the General Data Protection Regulation, aimed at companies like Facebook. Inside the company, numerous people interviewed by Frontline, the PBS documentary series, insisted that Facebook is still a force for good. Many Republicans, who favor less regulation, would prefer to take no action. But several Democrats, particularly Sen. Mark Warner, suggested during the April hearings that Facebook and Twitter are not capable of solving the problems the platforms have helped enable. “I’m skeptical that ultimately you’ll be able to truly address this challenge on your own,” Warner said. “I believe Congress is going to have to act.”

Ideas about what such action might look like ranged from new laws that would facilitate greater information sharing among public and private stakeholders (particularly about threats posed by foreign actors in cyberspace) to rules that would hold social media companies more accountable for the content on their platforms.

What if ensuring that its News Feed contained no false and divisive “fake news” designed to polarize people in our democracy would take even more employees and reduce revenues? Should Facebook devote even greater resources to ensuring that people in the U.S. and worldwide are not getting false and inflammatory “political news” on the Facebook News Feed? Will it? As with most corporations these days, growth by any means is an imperative (perfectly legal, of course), but doing the right thing by the U.S. democratic process could definitely drive down profits.

NOTE: Most of the information about this ongoing dilemma for Facebook can be found online in the PBS Frontline series. See https://www.pbs.org/wgbh/frontline/film/facebook-dilemma/

Supplemental Materials and Endnotes

Mark Warner’s policy paper:

https://www.documentcloud.org/documents/4620765-PlatformPolicyPaper.html#document/p1



[1] https://www.theatlantic.com/international/archive/2018/02/russia-troll-farm/553616/

[2] https://www.nytimes.com/2016/12/05/business/media/comet-ping-pong-pizza-shooting-fake-news-consequences.html

[3] https://money.cnn.com/interactive/media/the-macedonia-story/

[4] https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
