How Fake News Spreads on Twitter

The 2016 presidential election was a watershed moment in American politics. Political polarization reached its highest point in decades, fueled in part by individuals’ self-guided consumption of media matched to their own political ideologies. During and after the election, there was public scrutiny of “fake news”: inaccurate information published and shared in the guise of reputable journalism. Fake news widens the partisan gulf and drowns out moderate voices. Despite this attention, the social phenomenon of fake news has only recently entered the scope of academic inquiry.

A team of computational social scientists at Harvard University, Northeastern University and SUNY Buffalo set out to quantify just how pervasive online fake news really was during the 2016 election. Nir Grinberg and his fellow researchers published their findings in a recent study in Science. The team began by formally defining fake news outlets as sources that “lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of information,” treating “fakeness” as an attribute of a publisher rather than of an individual story. They also divided sources into three levels of “fakeness,” coded orange, red or black in increasing order of severity. The team then collected Twitter data from a representative panel of 16,442 accounts that were active between August and December 2016 and examined the political content appearing in those accounts’ feeds. Each political URL appearing in a panel member’s feed counted as an “exposure,” and this measure was used to assign panel members to one of five ideological groups, ranging from extreme left to extreme right.
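
To make the notion of “exposure” concrete, below is a minimal, hypothetical sketch of how per-user fake news exposure could be tallied from timeline data. The table layout, column names and example domains are illustrative assumptions, not the study’s actual data or pipeline.

```python
import pandas as pd

# Hypothetical inputs (column names and domains are illustrative, not the study's data):
#   timelines: one row per political URL that appeared in a panel member's feed
#   fake_domains: publisher-level labels, in the spirit of the study's
#                 orange/red/black severity tiers
timelines = pd.DataFrame({
    "panel_id": [1, 1, 1, 2, 2, 3],
    "domain": ["nytimes.com", "orangesite.example", "washingtonpost.com",
               "nytimes.com", "redsite.example", "nytimes.com"],
})
fake_domains = pd.DataFrame({
    "domain": ["orangesite.example", "redsite.example", "blacksite.example"],
    "tier": ["orange", "red", "black"],
})

# Flag each exposure as coming from a fake news publisher (fakeness is treated
# as a property of the outlet, not of the individual story).
timelines["is_fake"] = timelines["domain"].isin(fake_domains["domain"])

# Per-member exposure: total political URL exposures and the share from fake sources.
exposure = (
    timelines.groupby("panel_id")["is_fake"]
    .agg(total_exposures="count", fake_share="mean")
    .reset_index()
)
print(exposure)
```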

The authors found that five percent of an average user’s total exposure and 6.7 percent of the political URLs they shared were associated with fake news sources. These averages mask the fact that fake news dissemination was highly concentrated: just 0.1 percent of the Twitter users accounted for 79.8 percent of fake news shares, and just 1 percent of the panel consumed 80 percent of the total volume of fake news. The authors labeled these users “supersharers” and “superconsumers,” respectively, and noted that supersharer accounts in particular tended to be both hyperactive (tweeting upwards of 70 times a day) and at least partially automated, using Twitter features like auto mode or automatically posting updates from RSS feeds.
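
The concentration statistics can be read as points on a cumulative-share curve: sort users from most to least active and ask what fraction of all fake news shares the top sliver accounts for. A short sketch with a synthetic, heavy-tailed toy distribution (not the study’s data) shows the computation:

```python
import numpy as np

# Toy, heavy-tailed distribution of fake news shares per user (synthetic data).
# The goal is only to show how a statistic like "0.1 percent of users account
# for ~80 percent of shares" is computed.
rng = np.random.default_rng(seed=0)
shares_per_user = rng.pareto(a=1.2, size=100_000)

# Sort users from most to least active and accumulate their fraction of all shares.
sorted_shares = np.sort(shares_per_user)[::-1]
cumulative_fraction = np.cumsum(sorted_shares) / sorted_shares.sum()

# Fraction of total shares attributable to the most active 0.1% and 1% of users.
top_0_1_pct = int(len(sorted_shares) * 0.001)
top_1_pct = int(len(sorted_shares) * 0.01)
print(f"Top 0.1% of users account for {cumulative_fraction[top_0_1_pct - 1]:.1%} of shares")
print(f"Top 1% of users account for {cumulative_fraction[top_1_pct - 1]:.1%} of shares")
```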

Excluding outliers, the average panel member was exposed to fake news sources 204 times during the five-month period of analysis. Men, white people, swing-state voters and active tweeters tended to be exposed to fake news at slightly higher rates than other groups. Notably, exposure was closely correlated with political affiliation: people who received five percent or more of their political exposure from fake sources made up just 2.5 percent of individuals on the left or extreme left, but over 16 percent of individuals on the right or extreme right.

This ideological pattern of fake news consumption also held for fake news sharing. Fewer than five percent of panel members on the left or in the center ever shared any content from fake news sources, compared with 11 percent of people on the right and 21 percent of people on the extreme right. However, when the authors conditioned sharing behavior on whether the shared source was belief-congruent (i.e., the source’s ideology aligned with the panel member’s), there were no significant differences in sharing rates between those on the left and the right across both fake and non-fake sources.
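
The belief-congruence comparison amounts to computing sharing rates conditional on whether a source’s ideology matched the panel member’s. A hypothetical sketch of that conditioning follows; the records and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical exposure-level records: each row is a political URL a panel member
# saw, with the member's ideology, the source's ideology, whether the source was
# fake, and whether the member shared it (all values invented).
records = pd.DataFrame({
    "user_ideology":   ["left", "left", "right", "right", "right", "left"],
    "source_ideology": ["left", "right", "right", "right", "left", "left"],
    "is_fake":         [False, True, True, False, False, False],
    "shared":          [1, 0, 1, 1, 0, 0],
})

# Belief congruence: the source's ideology matches the panel member's.
records["congruent"] = records["user_ideology"] == records["source_ideology"]

# Sharing rates conditional on ideology, congruence, and fakeness of the source.
rates = (
    records.groupby(["user_ideology", "congruent", "is_fake"])["shared"]
    .mean()
    .rename("sharing_rate")
)
print(rates)
```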

Social media platforms like Twitter can exert positive influences on society by connecting people to new ideas and challenging old assumptions. But the same connectivity that spreads insightful and educational content can also amplify the reach and harm of fake news. The study’s authors noted that, encouragingly, the vast majority of individuals’ media engagement was with reputable sources like the New York Times and the Washington Post. Fake news comprised just 1.2 percent of political URL exposures for the average panel member, and only six percent of people sharing political URLs shared content from fake news sources. This relatively low incidence runs counter to the popular narrative of “echo chambers” dominating American news consumption.

Nonetheless, the authors recognized the threat that fake news poses and cited some potential steps that social media companies could take. They suggested that platforms could discourage users from sharing content produced by the most pervasive and widely recognized fake news sources. They also wrote that disincentivizing frequent posting and prioritizing content posted by less active users could counteract the “flooding” strategy used by the small number of supersharers who account for most fake news dissemination. Social media companies like Twitter wield enormous social power; they must decide whether to accept social responsibility for addressing the harms their platforms can propagate.

Article source: Grinberg, Nir, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. “Fake news on Twitter during the 2016 U.S. presidential election.” Science 363 (2019): 374–378.

Featured photo: cc/(zakokor, photo ID: 178101320, from iStock by Getty Images)
