Try Accuracy Prompts to Reduce the Spread of Misinformation Online

The rapid dissemination of inaccurate information, otherwise known as misinformation, within the online news media ecosystem has become a subject of pressing concern. As media consumption habits move increasingly online, the pernicious effects of misinformation on readers’ long-term beliefs are particularly alarming. Research shows that mere exposure to misinformation increases belief in it (Pennycook et al., 2018).

Studies of online behavior often cite consumers’ inattention to accuracy as a leading explanation for the rapid spread of misinformation online. Yet while it may seem reasonable to conclude that those who spread misinformation do so intentionally, the reality is more nuanced: news media consumers navigating online spaces face a variety of distractions and are therefore prone to serious, though understandable, misjudgment.

What if accuracy were a more important consideration in a consumer’s decision to share information online? How can online platforms encourage diligence and attention among news media consumers? Researchers from the Massachusetts Institute of Technology, the University of Regina, and Google sought to answer these questions, and their conclusions have important implications for policymakers, technology creators, and users alike (Epstein et al., 2021).

In studying participants’ engagement with COVID-19-related headlines, for example, the researchers found that judgments of accuracy did not always factor into users’ sharing intentions. In the study, two control groups were shown the same set of COVID-19 headlines, some true and some false. Participants in the first, “accuracy” control group were asked to judge each headline’s accuracy, and they successfully identified accurate headlines approximately 68% of the time. When the second, “sharing” control group was asked instead whether they would consider sharing each story online, the results were telling: the group’s ability to discern a true article from a false one declined to 54% when the context changed from judgments of accuracy to sharing intentions.
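
The researchers’ exact scoring isn’t reproduced here, but a minimal sketch of how such a discernment measure could be computed, using hypothetical yes/no responses, is shown below in Python; the function name and the example data are illustrative assumptions, not material from the study.

# Illustrative sketch only: one plausible way to score "discernment" as the
# share of correct decisions about true vs. false headlines. The data below
# are hypothetical and are not intended to reproduce the 68% / 54% figures.

def discernment(truth_labels, yes_responses):
    # A "yes" is correct for a true headline (rated accurate / worth sharing);
    # a "no" is correct for a false one.
    correct = sum(
        (says_yes and is_true) or (not says_yes and not is_true)
        for is_true, says_yes in zip(truth_labels, yes_responses)
    )
    return correct / len(truth_labels)

# True = factually accurate headline, False = misinformation
truth_labels = [True, True, False, False, True, False]

# Accuracy condition: "Is this headline accurate?" (yes/no)
accuracy_ratings = [True, True, False, True, True, False]
# Sharing condition: "Would you consider sharing this online?" (yes/no)
sharing_intentions = [True, True, True, True, False, True]

print(f"Accuracy-condition discernment: {discernment(truth_labels, accuracy_ratings):.0%}")
print(f"Sharing-condition discernment:  {discernment(truth_labels, sharing_intentions):.0%}")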

In light of these findings, the researchers concluded that despite participants’ relative success in distinguishing accurate from inaccurate articles about COVID-19, accuracy discernment is not always a key influence on the decision to share content. The potential to spread COVID-19 misinformation grows when users fail to align their sharing decisions with their own judgments of accuracy.

(Figure source: Harvard Kennedy School, Misinformation Review)

How, then, do we bridge the gap between a social media user’s judgment of accuracy and their sharing motivations? To explore this question, the researchers employed accuracy prompts as a treatment before asking subjects about their sharing intentions. These prompts were modest interventions that encouraged participants to consider the accuracy of the content presented to them. The researchers measured the degree to which each prompt increased participants’ accuracy discernment and, in turn, influenced their sharing intentions.

Of the prompts tested, those that primed participants with ideas of accuracy yielded a significant improvement in judgment: every accuracy-priming prompt reduced participants’ intentions to share false headlines by at least 3 percentage points. In one example, participants were explicitly asked whether the headline presented to them was accurate. Other prompts asked participants about the value they placed on accuracy when sharing information online, or primed them with digital literacy tips. Each prompt preceded the question of whether the participant would share a given article on a social media platform such as Facebook or Twitter.
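
To make the intervention concrete, the following hypothetical Python sketch shows how an accuracy prompt could sit inside a platform’s sharing flow; the prompt wording loosely paraphrases the kinds of prompts described above, and the function names and UI hook are invented for illustration rather than drawn from the study.

import random

# Hypothetical prompt texts, loosely paraphrasing the kinds tested: a direct
# accuracy question, an "importance of accuracy" question, and a literacy tip.
ACCURACY_PROMPTS = [
    "To the best of your knowledge, is the headline above accurate?",
    "How important is it to you that the content you share is accurate?",
    "Tip: check whether other reputable outlets are reporting the same story.",
]

def share_with_prompt(headline, ask_user):
    # Show one prompt before the share decision (rotating prompts at random
    # is an assumption made here, not a design taken from the study).
    prompt = random.choice(ACCURACY_PROMPTS)
    ask_user(f"{headline}\n\n{prompt}\n")
    answer = ask_user("Would you still like to share this post? (yes/no) ")
    return answer.strip().lower() == "yes"

if __name__ == "__main__":
    # `input` stands in for whatever UI layer collects the user's response.
    shared = share_with_prompt("Example headline about COVID-19", ask_user=input)
    print("Post shared." if shared else "Post not shared.")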

The results demonstrated that the prompts strengthened the relationship between the perceived accuracy of headlines and participants’ sharing intentions. Whereas the control groups’ behavior indicated a disconnect between perceived accuracy and sharing intentions, participants who engaged with the prompts showed better accuracy discernment and shared accordingly (i.e., they were more likely to share true headlines and less likely to share false ones). In short, the treatments reduced the sharing of headlines to the extent that participants perceived them as inaccurate.

(Figure source: Harvard Kennedy School, Misinformation Review)

The researchers also concluded that the treatments effectively primed users with the concepts of “accuracy” and “digital literacy,” commonly regarded as important values in an age of pervasive digital media consumption. They argued that the average social media user who shares misinformation often does not do so intentionally. Rather, users may be distracted by elements of an article, such as the prominence of the source or an attention-grabbing headline, and so fail to consider accuracy at all. If users were prompted to consider the value they assign to accuracy, their online behavior, namely their tendency to share content, would align more closely with their values.

These findings illustrate the value of prompts in empowering social media users to act in both their individual and our collective best interest. While some accuracy prompts proved more effective than others, all of them strengthened the link between a headline’s perceived accuracy and its likelihood of being shared. The findings further demonstrate how platforms could prompt users to consider accuracy when sharing content, thereby reducing the spread of misinformation.

Beyond their value in reducing the spread of misinformation online, accuracy prompts also succeed in a practical sense. They can serve as an alternative to algorithmic content moderation, in which artificial intelligence determines whether news is “true” or “false.” While human moderators are often tasked with catching errors missed by automated systems, human fact-checking is hardly scalable. Accuracy prompts not only improve a user’s ability to distinguish fact from fiction before sharing content; they can also be deployed at scale.

Equipped with a diverse toolkit of accuracy prompts, social media companies can shape online environments that empower users to act responsibly when consuming and disseminating information. However, technology companies should not be the sole arbiters of accurate information. After all, while popular technology platforms may be new, the challenge of combating misinformation is not.

In large part, Epstein et al. find that the success of accuracy prompts depends on the degree to which users meaningfully internalize the dangers of spreading misinformation online. Policymakers today have an opportunity to partner with private and public organizations to strengthen efforts to increase digital and news literacy. They should invest in the tools and resources needed to educate the public about the importance of thinking critically and acting responsibly in online spaces.


Epstein, Ziv, Adam J. Berinsky, Rocky Cole, Andrew Gully, Gordon Pennycook, and David G. Rand. 2021. “Developing an accuracy-prompt toolkit to reduce COVID-19 misinformation online.” Harvard Kennedy School (HKS) Misinformation Review. https://misinforeview.hks.harvard.edu/article/developing-an-accuracy-prompt-toolkit-to-reduce-covid-19-misinformation-online/

Pennycook, Gordon, Tyrone D. Cannon, and David G. Rand. 2018. “Prior exposure increases perceived accuracy of fake news.” Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
