
Countering Disinformation Effectively: An Evidence-Based Policy Guide

A high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation.

Published on January 31, 2024

Summary

Disinformation is widely seen as a pressing challenge for democracies worldwide. Many policymakers are grasping for quick, effective ways to dissuade people from adopting and spreading false beliefs that degrade democratic discourse and can inspire violent or dangerous actions. Yet disinformation has proven difficult to define, understand, and measure, let alone address.

Even when leaders know what they want to achieve in countering disinformation, they struggle to make an impact and often don’t realize how little is known about the effectiveness of policies commonly recommended by experts. Policymakers also sometimes fixate on a few pieces of the disinformation puzzle—including novel technologies like social media and artificial intelligence (AI)—without considering the full range of possible responses in realms such as education, journalism, and political institutions.

This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. It distills core insights from empirical research and real-world data on ten diverse kinds of policy interventions, including fact-checking, foreign sanctions, algorithmic adjustments, and counter-messaging campaigns. For each case study, we aim to give policymakers an informed sense of the prospects for success—bridging the gap between the mostly meager scientific understanding and the perceived need to act. This means answering three core questions: How much is known about an intervention? How effective does the intervention seem, given current knowledge? And how easy is it to implement at scale?

Overall Findings

  • There is no silver bullet or “best” policy option. None of the interventions considered in this report were simultaneously well-studied, very effective, and easy to scale. Rather, the utility of most interventions seems quite uncertain and likely depends on myriad factors that researchers have barely begun to probe. For example, the precise wording and presentation of social media labels and fact-checks can matter a lot, while counter-messaging campaigns depend on a delicate match of receptive audiences with credible speakers. Bold claims that any one policy is the singular, urgent solution to disinformation should be treated with caution.
  • Policymakers should set realistic expectations. Disinformation is a chronic historical phenomenon with deep roots in complex social, political, and economic structures. It can be seen as jointly driven by forces of supply and demand. On the supply side, there are powerful political and commercial incentives for some actors to engage in, encourage, or tolerate deception, while on the demand side, psychological needs often draw people into believing false narratives. Credible options exist to curb both supply and demand, but technocratic solutionism still has serious limits against disinformation. Finite resources, knowledge, political will, legal authority, and civic trust constrain what is possible, at least in the near- to medium-term.
  • Democracies should adopt a portfolio approach to manage uncertainty. Policymakers should act like investors, pursuing a diversified mixture of counter-disinformation efforts while learning and rebalancing over time. A healthy policy portfolio would include tactical actions that appear well-researched or effective (like fact-checking and labeling social media content). But it would also involve costlier, longer-term bets on promising structural reforms (like supporting local journalism and media literacy). Each policy should come with a concrete plan for ongoing reassessment.
  • Long-term, structural reforms deserve more attention. Although many different counter-disinformation policies are being implemented in democracies, outsized attention goes to the most tangible, immediate, and visible actions. For example, platforms, governments, and researchers routinely make headlines for announcing the discovery or disruption of foreign and other inauthentic online networks. Yet such actions, while helpful, usually have narrow impacts. In comparison, more ambitious but slower-moving efforts to revive local journalism and improve media literacy (among other possibilities) receive less notice despite encouraging research on their prospects.
  • Platforms and tech cannot be the sole focus. Research suggests that social media platforms help to fuel disinformation in various ways—for example, through recommendation algorithms that encourage and amplify misleading content. Yet digital platforms exist alongside, and interact with, many other online and offline forces. The rhetoric of political elites, programming on traditional media sources like TV, and narratives circulating among trusted community members are all highly influential in shaping people’s speech, beliefs, and behaviors. At the same time, the growing number of digital platforms dilutes the effectiveness of actions by any single company to counter disinformation. Given this interplay of many voices and amplifiers, effective policy will involve complementary actions in multiple spheres.
  • Countering disinformation is not always apolitical. Those working to reduce the spread and impact of disinformation often see themselves as disinterested experts and technocrats—operating above the fray of political debate, neither seeking nor exercising political power. Indeed, activities like removing inauthentic social media assets are more or less politically neutral. But other efforts, such as counter-messaging campaigns that use storytelling or emotional appeals to compete with false ideas at a narrative and psychological level, can be hard to distinguish from traditional political advocacy. Ultimately, any institutional effort to declare what is true and what is false—and to back such declarations with power, resources, or prestige—implies some claim of authority and therefore can be seen as having political meaning (and consequences). Denying this reality risks encouraging overreach, or inviting blowback, which deepens distrust.
  • Research gaps are pervasive. The relatively robust study of fact-checking offers clues about the possibilities and the limits of future research on other countermeasures. On the one hand, dedicated effort has enabled researchers to validate fact-checking as a generally useful tool. Policymakers can have some confidence that fact-checking is worthy of investment. On the other hand, researchers have learned that fact-checking’s efficacy can vary a lot depending on a host of highly contextual, poorly understood factors. Moreover, numerous knowledge gaps and methodological biases remain even after hundreds of published studies on fact-checking. Because fact-checking represents the high-water mark of current knowledge about counter-disinformation measures, it can be expected that other measures will likewise require sustained research over long periods—from fundamental theory to highly applied studies.
  • Research is a generational task with uncertain outcomes. The knowledge gaps highlighted in this report can serve as a road map for future research. Filling these gaps will take more than commissioning individual studies; major investments in foundational research infrastructure, such as human capital, data access, and technology, are needed. That said, social science progresses slowly, and it rarely yields definite answers to the most vexing current questions. Take economics, for example: a hundred years of research has helped Western policymakers curb (though not eliminate) depressions, recessions, and panics—yet economists still debate great questions of taxes and trade and are reckoning only belatedly with catastrophic climate risks. The mixed record of economics offers a sobering benchmark for the study of disinformation, which is a far less mature and robust field.
  • Generative AI will have complex effects but might not be a game changer. Rapid AI advances could soon make it much easier and cheaper to create realistic and/or personalized false content. Even so, the net impact on society remains unclear. Studies suggest that people’s willingness to believe false (or true) information is often not primarily driven by the content’s level of realism. Rather, other factors such as repetition, narrative appeal, perceived authority, group identification, and the viewer’s state of mind can matter more. Meanwhile, studies of microtargeted ads—already highly data-driven and automated—cast doubt on the notion that personalized messages are uniquely compelling. Generative AI can also be used to counter disinformation, not just foment it. For example, well-designed and human-supervised AI systems may help fact-checkers work more quickly. While the long-term impact of generative AI remains unknown, it’s clear that disinformation is a complex psychosocial phenomenon and is rarely reducible to any one technology.

Case Study Summaries

  1. Supporting Local Journalism. There is strong evidence that the decline of local news outlets, particularly newspapers, has eroded civic engagement, knowledge, and trust—helping disinformation to proliferate. Bolstering local journalism could plausibly help to arrest or reverse such trends, but this has not been directly tested. Cost is a major challenge, given the expense of quality journalism and the depth of the industry’s financial decline. Philanthropy can provide targeted support, such as seed money for experimentation. But a long-term solution would probably require government intervention and/or alternate business models. This could include direct subsidies (channeled through nongovernmental intermediaries) or indirect measures, such as tax exemptions and bargaining rights.
  2. Media Literacy Education. There is significant evidence that media literacy training can help people identify false stories and unreliable news sources. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another. The most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information—instilling confidence and a sense of responsibility alongside skills development. While media literacy training shows promise, it suffers challenges in speed, scale, and targeting. Reaching large numbers of people, including those most susceptible to disinformation, is expensive and takes many years.
  3. Fact-Checking. A large body of research indicates that fact-checking can be an effective way to correct false beliefs about specific claims, especially for audiences that are not heavily invested in the partisan elements of the claims. However, influencing factual beliefs does not necessarily result in attitudinal or behavioral changes, such as reduced support for a deceitful politician or a baseless policy proposal. Moreover, the efficacy of fact-checking depends a great deal on contextual factors—such as wording, presentation, and source—that are not well understood. Even so, fact-checking seems unlikely to cause a backfire effect that leads people to double down on false beliefs. Fact-checkers face a structural disadvantage in that false claims can be created more cheaply and disseminated more quickly than corrective information; conceivably, technological innovations could help shift this balance.
  4. Labeling Social Media Content. There is a good body of evidence that labeling false or untrustworthy content with additional context can make users less likely to believe and share it. Large, assertive, and disruptive labels are the most effective, while cautious and generic labels often do not work. Reminders that nudge users to consider accuracy before resharing show promise, as do efforts to label news outlets with credibility scores. Different audiences may react differently to labels, and there are risks that remain poorly understood: labels can sometimes cause users to become either overly credulous or overly skeptical of unlabeled content, for example. Major social media platforms have embraced labels to a large degree, but further scale-up may require better information-sharing or new technologies that combine human judgment with algorithmic efficiency.
  5. Counter-messaging Strategies. There is strong evidence that truthful communications campaigns designed to engage people on a narrative and psychological level are more effective than facts alone. By targeting the deeper feelings and ideas that make false claims appealing, counter-messaging strategies have the potential to impact harder-to-reach audiences. Yet success depends on the complex interplay of many inscrutable factors. The best campaigns use careful audience analysis to select the most resonant messengers, mediums, themes, and styles—but this is a costly process whose success is hard to measure. Promising techniques include communicating respect and empathy, appealing to prosocial values, and giving the audience a sense of agency.
  6. Cybersecurity for Elections and Campaigns. There is good reason to think that campaign- and election-related cybersecurity can be significantly improved, which would prevent some hack-and-leak operations and fear-inducing breaches of election systems. The cybersecurity field has come to a strong consensus on certain basic practices, many of which remain unimplemented by campaigns and election administrators. Better cybersecurity would be particularly helpful in preventing hack-and-leaks, though candidates will struggle to prioritize cybersecurity given the practical imperatives of campaigning. Election systems themselves can be made substantially more secure at a reasonable cost. However, there is still no guarantee that the public would perceive such systems as secure in the face of rhetorical attacks by losing candidates.
  7. Statecraft, Deterrence, and Disruption. Cyber operations targeting foreign influence actors can temporarily frustrate specific foreign operations during sensitive periods, such as elections, but any long-term effect is likely marginal. There is little evidence to show that cyber operations, sanctions, or indictments have achieved strategic deterrence, though some foreign individuals and contract firms may be partially deterrable. Bans on foreign platforms and state media outlets have strong first-order effects (reducing access to them); their second-order consequences include retaliation against democratic media by the targeted state. All in all, the most potent tool of statecraft may be national leaders’ preemptive efforts to educate the public. Yet in democracies around the world, domestic disinformation is far more prolific and influential than foreign influence operations.
  8. Removing Inauthentic Asset Networks. The detection and removal from platforms of accounts or pages that misrepresent themselves has obvious merit, but its effectiveness is difficult to assess. Fragmentary data—such as unverified company statements, draft platform studies, and U.S. intelligence—suggest that continuous takedowns might be capable of reducing the influence of inauthentic networks and imposing some costs on perpetrators. However, few platforms even claim to have achieved this, and the investments required are considerable. Meanwhile, the threat posed by inauthentic asset networks remains unclear: a handful of empirical studies suggest that such networks, and social media influence operations more generally, may not be very effective at spreading disinformation. These early findings imply that platform takedowns may receive undue attention in public and policymaking discourse.
  9. Reducing Data Collection and Targeted Ads. Data privacy protections can be used to reduce the impact of microtargeting, or data-driven personalized messages, as a tool of disinformation. However, nascent scholarship suggests that microtargeting—while modestly effective in political persuasion—falls far short of the manipulative powers often ascribed to it. To the extent that microtargeting works, privacy protections seem to measurably undercut its effectiveness. But this carries high economic costs—not only for tech and ad companies, but also for small and medium businesses that rely on digital advertising. Additionally, efforts to blunt microtargeting can raise the costs of political activity in general, especially for activists and minority groups who lack access to other communication channels.
  10. Changing Recommendation Algorithms. Although platforms are neither the sole sources of disinformation nor the main causes of political polarization, there is strong evidence that social media algorithms intensify and entrench these off-platform dynamics. Algorithmic changes therefore have the potential to ameliorate the problem; however, this has not been directly studied by independent researchers, and the market viability of such changes is uncertain. Major platforms’ optimizing for something other than engagement would undercut the core business model that enabled them to reach their current size. Users could opt in to healthier algorithms via middleware or civically minded alternative platforms, but most people probably would not. Additionally, algorithms are blunt and opaque tools: using them to curb disinformation would also suppress some legitimate content.

Acknowledgments

The authors wish to thank William Adler, Dan Baer, Albin Birger, Kelly Born, Jessica Brandt, David Broniatowski, Monica Bulger, Ciaran Cartmell, Mike Caulfield, Tímea Červeňová, Rama Elluru, Steven Feldstein, Beth Goldberg, Stephanie Hankey, Justin Hendrix, Vishnu Kannan, Jennifer Kavanagh, Rachel Kleinfeld, Samantha Lai, Laura Livingston, Peter Mattis, Tamar Mitts, Brendan Nyhan, George Perkovich, Martin Riedl, Ronald Robertson, Emily Roseman, Jen Rosiere Reynolds, Zeve Sanderson, Bret Schafer, Leah Selig Chauhan, Laura Smillie, Rory Smith, Victoria Smith, Kate Starbird, Josh Stearns, Gerald Torres, Meaghan Waff, Alicia Wanless, Laura Waters, Gavin Wilde, Kamya Yadav, and others for their valuable feedback and insights. Additional thanks to Joshua Sullivan for research assistance and to Alie Brase, Lindsay Maizland, Anjuli Das, Jocelyn Soly, Amy Mellon, and Jessica Katz for publications support. The final report reflects the views of the authors only. This research was supported by a grant from the Special Competitive Studies Project.

About the Authors

Jon Bateman is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. His research areas include disinformation, cyber operations, artificial intelligence, and techno-nationalism. Bateman previously was special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr., serving as a speechwriter and the lead strategic analyst in the chairman’s internal think tank. He has also helped craft policy for military cyber operations in the Office of the Secretary of Defense and was a senior intelligence analyst at the Defense Intelligence Agency, where he led teams responsible for assessing Iran’s internal stability, senior-level decisionmaking, and cyber activities. Bateman is a graduate of Harvard Law School and Johns Hopkins University.

Dean Jackson is principal of Public Circle Research & Consulting and a specialist in democracy, media, and technology. In 2023, he was named an inaugural Tech Policy Press reporting fellow and an affiliate fellow with the Propaganda Research Lab at the University of Texas at Austin. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol and project manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace. From 2013 to 2021, Jackson managed research and program coordination activities related to media and technology at the National Endowment for Democracy. He holds an MA in international relations from the University of Chicago and a BA in political science from Wright State University in Dayton, OH.

Notes

1 The cells of this table are color coded: green suggests the most positive assessment for each factor, while red is the least positive and yellow is in between. These overall ratings are a combination of various subfactors, which may be in tension: for example, an intervention can be highly effective but only for a short time or with high risk of second-order consequences.

A green cell means an intervention is well studied, likely to be effective, or easy to implement. For the first column, this means there is a large body of literature on the topic. While it may not conclusively answer every relevant question, it provides strong indicators of effectiveness, cost, and related factors. For the second column, a green cell suggests that an intervention can be highly effective at addressing the problem in a lasting way at a relatively low level of risk. For the third column, a green cell means that the intervention can quickly make a large impact at relatively low cost and without major obstacles to successful implementation.

A yellow cell indicates an intervention is less well studied (there is relevant literature but major questions about efficacy are unanswered or significantly underexplored), less efficacious (its impact is noteworthy but limited in size or duration, or it carries some risk of blowback), or faces nonnegligible hurdles to implementation, such as cost, technical barriers, or political opposition.

A red cell indicates that an intervention is poorly understood, with little literature offering guidance on key questions; that it is low impact, has only narrow use cases, or has significant second-order consequences; or that it requires an especially high investment of resources or political capital to implement or scale.

Methodology

This report offers high-level, evidence-informed assessments of ten commonly proposed ways to counter disinformation. It summarizes the quantity and quality of research, the evidence of efficacy, and the ease of scalable implementation. Building on other work that has compiled policy proposals or collected academic literature, this report seeks to synthesize social science and practitioner knowledge for an audience of policymakers, funders, journalists, and others in democratic countries.1 Rather than recommending a specific policy agenda, it aims to clarify key considerations that leaders should weigh based on their national and institutional contexts, available resources, priorities, and risk tolerance.

To conduct this research, we compiled a list of nearly two dozen counter-disinformation measures frequently proposed by experts, scholars, and policymakers.2 We then selected ten for inclusion based on several factors. First, we prioritized proposals that had a fairly direct connection to the problem of disinformation. For example, we excluded antitrust enforcement against tech companies because it affects disinformation in an indirect way, making it difficult to evaluate in this report. Second, we focused on countermeasures that could plausibly be subject to meaningful empirical study. We therefore did not consider diplomatic efforts to build international norms against disinformation, for example, or changes to platforms’ legal liability as intermediaries. Third, we sought to cover a diverse range of interventions. This meant including actions implementable by the government, the private sector, and civil society; tactical measures as well as structural reforms; and multiple theories of change such as resilience, disruption, and deterrence.

The ten selected interventions became the subjects of this report’s ten case studies. Each case study defines the intervention, gives concrete use cases, and highlights additional reading. The case studies focus on three questions: How much is known about an intervention? How effective does it seem, given current knowledge? And how easy is it to implement at scale? To develop these case studies, we reviewed hundreds of academic papers, previous meta-analyses, programmatic literature, and other relevant materials. We also conducted a series of workshops and consultations with scholars, practitioners, policymakers, and funders. We drew on experts with domain knowledge to vet individual case studies, as well as those with a broader view of the counter-disinformation field to provide feedback on the project as a whole. The resulting report expresses the views of the authors alone.

Although this report reviews a number of important, commonly proposed policy ideas, it is not comprehensive. In particular, we did not study the following significant categories of long-term, large-scale change. First, political institutions could try to perform stronger gatekeeping functions. This may involve reforms of party primaries, redistricting processes, and campaign finance systems. Second, tech platforms might need stronger incentives and capacity to curb disinformation. This could involve new regulation, diversification of revenue, and market power reductions that enable users, advertisers, activists, and others to provide checks on major platforms. Third, the public may need more encouragement to value truth and place trust in truthful institutions and figures. This might involve addressing the many root causes of popular alienation, fear, and anger, such as with local community-building efforts, a reversal of geographic sorting, improvements to economic prospects, and healing of racial grievances. Any of these ideas would be daunting to implement, and none are easy to assess. But they all have serious potential to help counter disinformation—perhaps even more so than the ten interventions studied in this report.

Notes

1 See “Final Report: Commission on Information Disorder,” Aspen Institute, November 2021, https://www.aspeninstitute.org/wp-content/uploads/2021/11/Aspen-Institute_Commission-on-Information-Disorder_Final-Report.pdf; Daniel Arnaudo et al., “Combating Information Manipulation: A Playbook for Elections and Beyond,” National Democratic Institute, International Republican Institute, and Stanford Internet Observatory, September 2021, https://www.ndi.org/sites/default/files/InfoManip%20Playbook%20updated%20FINAL.pdf; “Center of Excellence on Democracy, Human Rights, and Governance: Disinformation Primer,” U.S. Agency for International Development, February 2021, https://cnxus.org/wp-content/uploads/2021/11/usaid-disinformation-primer.pdf; and Laura Courchesne, Julia Ilhardt, and Jacob N. Shapiro, “Review of Social Science Research on the Impact of Countermeasures Against Influence Operations,” Harvard Kennedy School Misinformation Review 2, no. 5 (September 2021): https://misinforeview.hks.harvard.edu/article/review-of-social-science-research-on-the-impact-of-countermeasures-against-influence-operations/.

2 This list was drawn from multiple sources, including Kamya Yadav, “Countering Influence Operations: A Review of Policy Proposals Since 2016,” Carnegie Endowment for International Peace, November 30, 2020, https://carnegieendowment.org/2020/11/30/countering-influence-operations-review-of-policy-proposals-since-2016-pub-83333; a more detailed, unpublished database of policy proposals compiled by Vishnu Kannan in 2022; Courchesne, Ilhardt, and Shapiro, “Review of Social Science Research”; and “The 2022 Code of Practice on Disinformation,” European Commission, accessed January 27, 2023, https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation. These sources were supplemented by further literature review and expert feedback.

Challenges and Cautions

Before seeking to counter disinformation, policymakers should carefully consider what this idea means. “Disinformation,” usually defined as information known by the speaker to be false, is a notoriously tricky concept that comes with numerous limitations, contradictions, and risks.1

Conceptual Challenges

Identifying disinformation presents several puzzles. For one thing, labeling any claim as false requires invoking an authoritative truth. Yet the institutions and professions most capable of discerning the truth—such as science, journalism, and courts—are sometimes wrong and often distrusted. Moreover, true facts can be selectively assembled to create an overall narrative that is arguably misleading but not necessarily false in an objective sense. This may be even more common and influential than outright lies, yet it’s unclear whether it counts as disinformation. In fact, “disinformation” is frequently conflated with a range of other political and societal maladies such as polarization, extremism, and hate. All of these are technically distinct issues, though they can be causally related to disinformation and to each other. Finally, it is difficult to know whether someone spreading false claims does so intentionally. Disinformation typically passes through a long chain of both witting and unwitting speakers.

The challenges of the term “disinformation” are not merely theoretical; they have influenced public debates. Despite the word’s scientific-sounding imprimatur, it is often invoked quite loosely to denigrate any viewpoint seen as wrong, baseless, disingenuous, or harmful. Such usage has the effect of pathologizing swaths of routine discourse: after all, disagreements about what is wrong, baseless, disingenuous, or harmful are what drives democratic politics and social change. Moreover, today’s talk of “disinformation” can sometimes imply a more novel, solvable problem than really exists. Although the word has been familiar in the West for decades, it attained new currency just a few years ago after a series of catalyzing episodes—such as Russian election interference in the United States—involving social media. This led many people to see social media as the defining cause of disinformation, rather than one driver or manifestation of it. The messy battle for truth is, of course, an eternal aspect of human society.

For policymaking, reliance on a loaded but vague idea like “disinformation” brings several risks. When the term is used to imply that normal and necessary public discourse is dangerously disordered, it encourages the empowerment of technocrats to manage speech and, in turn, potentially erodes legal and normative boundaries that sustain democracy. Moreover, the term’s vagaries and contradictions are already well understood by segments of the public and have been seized upon, including by disinformers themselves, to undermine counter-disinformation efforts. In some cases, those accused of spreading disinformation have successfully sought to reclaim the term by arguing that counter-disinformation efforts are the real sources of disinformation, thus reversing the roles of perpetrator and victim.

This risk is most obvious in authoritarian regimes and flawed democracies, where leaders may suppress dissent by labeling it disinformation. But the problem can manifest in other ways too. A prominent U.S. example was the 2020 public letter by former intelligence officials warning that the then-recent disclosure of Hunter Biden’s laptop data “has all the classic earmarks of a Russian information operation.”2 Later, when the data’s authenticity was largely confirmed, those promoting the laptop story said the letter itself was a form of disinformation.3 Similar boomerang patterns have previously been seen with “fake news,” a phrase that originally described unethical content farms but was quickly repurposed to delegitimize truthful journalism. To be sure, such boomerangs often rest on exaggerated or bad faith claims. Yet they exploit a core truth: “disinformation” is a flawed, malleable term whose implied assertion of authority can lead to overreach and blowback.

For these and other reasons, a growing number of experts reject the term “disinformation.” Some prefer to focus instead on “misinformation” (which elides intent) or “influence/information operations” (which de-emphasizes falsity). Others favor more self-consciously political terms such as “propaganda” or “information warfare,” which they see as clearer warnings of the problem. A range of alternative conceptions have been proposed, including “malinformation” and “information disorder.” Recently, some experts have advocated holistic concepts, like “information ecology” or “information and society,” that shift attention away from individual actors or claims and toward larger social systems. Meanwhile, platforms have developed their own quasi-legalistic argot—such as Meta’s “coordinated inauthentic behavior”—to facilitate governance and enforcement.

There is also a growing set of scholars and commentators who believe the field itself, not just its terminology, must be fundamentally rethought.4 Some point out that disinformation and its ilk are elastic notions which tend to reflect the biases of whoever invokes them. Others observe that disinformation isn’t pervasive or influential enough to explain the ills often attributed to it. Several critics have gone so far as to label the disinformation crisis a moral panic, one suffered most acutely by elite groups. On this telling, privileged and expert classes—such as the White liberals who for decades dominated academia and journalism in the United States—have seized upon a perceived surge of disinformation to explain their recent loss of control over the national discourse. This story, rooted in nostalgia for a mythical era of shared truth, offers a comforting, depoliticized morality play: right-thinking in-groups are under siege by ignorant out-groups in the thrall of manipulative (often foreign) bogeymen. The narrative has troubling historical antecedents, such as baseless Cold War–era fears of communist “brainwashing” that led to curtailment of civil liberties in the West.

Despite all these complications and pitfalls, this report begrudgingly embraces the term “disinformation” for three primary reasons. First, it captures a specific, real, and damaging phenomenon: malicious falsehoods are undermining democratic stability and governance around the world. However difficult it may be to identify or define disinformation at the edges, a set of core cases clearly exists and deserves serious attention from policymakers. A paradigmatic example is the “Stop the Steal” movement in the United States. The claim that the 2020 presidential election was stolen is provably false, was put forward with demonstrated bad faith, and has deeply destabilized the country. Second, other phrases have their own problems, and no single term has yet emerged as a clearly better alternative. Third, “disinformation” remains among the most familiar terms for policymakers and other stakeholders who constitute the key audience for this report.

Evaluation Challenges

Beyond the conceptual issues, policymakers should also be aware of several foundational challenges in assessing the efficacy of disinformation countermeasures. Each of these challenges emerged time and again in the development of this report’s case studies.

  • The underlying problem is hard to measure. It is hard to know how well a countermeasure works if analysts don’t also know how much impact disinformation has, both before and after the countermeasure is implemented. In fact, countermeasures are only necessary insofar as disinformation is influential to begin with. Unfortunately, experts broadly agree that disinformation (like other forms of influence) is poorly understood and hard to quantify. A 2021 Princeton University meta-analysis commissioned by Carnegie found that “[e]mpirical research on how influence operations can affect people and societies—for example, by altering beliefs, changing voting behavior, or inspiring political violence—is limited and scattered.”5 It specifically noted that “empirical research does not yet adequately answer many of the most pressing questions facing policymakers” regarding the effectiveness of various influence tactics, the role of the medium used (such as specific online platforms), the duration of influence effects, and country-level differences. Until more is known about disinformation itself, the ability to assess countermeasures will remain limited.
  • Success can be defined in multiple ways. What makes an intervention successful in countering disinformation? An effective intervention might be one that stops someone from embracing a false belief, or discourages people from acting based on false claims, or slows the spread of false information, or protects the integrity of democratic decisionmaking, among other possibilities. All of these effects can be measured over varying time horizons. Additionally, effectiveness is tied to an intervention’s cost, scalability, and the willingness of key stakeholders to facilitate implementation. The risk of blowback is another factor: decisionmakers should consider potential second-, third-, and higher-order effects on the information environment. In short, there is no single way to understand success. Policymakers must decide this for themselves.
  • Policies can coincide, synergize, and conflict with each other. This report offers discrete evaluations of ten countermeasure types. In reality, multiple kinds of interventions should be implemented at the same time. Simultaneous, interconnected efforts are necessary to address the many complex drivers of disinformation. Policymakers and analysts must therefore avoid judging any one policy option as if it could or should provide a comprehensive solution. An ideal assessment would consider how several interventions can work together, including potential synergies, conflicts, and trade-offs. Such holistic analysis would be extremely difficult to do, however, and is beyond the scope of this report.
  • Subpopulations matter and may react differently. Many studies of disinformation countermeasures focus on their overall efficacy with respect to the general population, or the “average” person. However, atypical people—those at the tails of the statistical distribution—sometimes matter more. People who consume or share the largest amount of disinformation, hold the most extreme or conspiratorial views, have the biggest influence in their social network, or harbor the greatest propensity for violence often have disproportionate impact on society. Yet these tail groups are harder to study. Policymakers should take care not to assume that interventions which appear generally effective have the same level of impact on important tail groups. Conversely, interventions that look ineffective at a population level may still be able to influence key subpopulations.
  • Findings may not generalize across countries and regions. The feasibility and impact of an intervention can vary from place to place. For example, the United States is more polarized than most other advanced democracies, and it faces greater constitutional constraints and government gridlock. On the other hand, the United States has outsized influence over the world’s leading social media platforms and possesses relatively wealthy philanthropic institutions and, at the national level, a robust independent press. These kinds of distinctive characteristics will shape what works in the United States, while other countries must consider their own national contexts. Unfortunately, much of the available research focuses on the United States and a handful of other wealthy Western democracies. This report incorporates some examples from other countries, but geographic bias remains present.

These evaluation challenges have no easy solutions. Researchers are working to fill knowledge gaps and define clearer policy objectives, but doing so will take years or even decades. Meanwhile, policymakers must somehow forge ahead. Ideally, they will draw upon the best information available while remaining cognizant of the many unknowns. The following case studies are designed with those twin goals in mind.

Notes

1 Alicia Wanless and James Pamment, “How Do You Define a Problem Like Influence?,” Carnegie Endowment for International Peace, December 30, 2019, https://carnegieendowment.org/2019/12/30/how-do-you-define-problem-like-influence-pub-80716. For more on the distinction between misinformation and disinformation, see Dean Jackson, “Issue Brief: Distinguishing Disinformation From Propaganda, Misinformation, and ‘Fake News,’” National Endowment for Democracy, October 17, 2017, https://www.ned.org/issue-brief-distinguishing-disinformation-from-propaganda-misinformation-and-fake-news/.

2 Jim Clapper et al., “Public Statement on the Hunter Biden Emails,” Politico, October 19, 2020, https://www.politico.com/f/?id=00000175-4393-d7aa-af77-579f9b330000.

3 Luke Broadwater, “Officials Who Cast Doubt on Hunter Biden Laptop Face Questions,” New York Times, May 16, 2023, https://www.nytimes.com/2023/05/16/us/politics/republicans-hunter-biden-laptop.html.

4 See, for example, Joseph Bernstein, “Bad News: Selling the Story of Disinformation,” Harper’s, 2021, https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation; Rachel Kuo and Alice Marwick, “Critical Disinformation Studies: History, Power, and Politics,” Harvard Kennedy School Misinformation Review, August 12, 2021, https://misinforeview.hks.harvard.edu/article/critical-disinformation-studies-history-power-and-politics; Alice Marwick, Rachel Kuo, Shanice Jones Cameron, and Moira Weigel, “Critical Disinformation Studies: A Syllabus,” Center for Information, Technology, and Public Life, 2021, https://citap.unc.edu/research/critical-disinfo; Ben Smith, “Inside the ‘Misinformation’ Wars,” New York Times, November 28, 2021, https://www.nytimes.com/2021/11/28/business/media-misinformation-disinformation.html; Matthew Yglesias, “The Misinformation Cope,” Slow Boring, April 20, 2022, https://www.slowboring.com/p/misinformation; Théophile Lenoir, “Reconsidering the Fight Against Disinformation,” Tech Policy Press, August 1, 2022, https://www.techpolicy.press/reconsidering-the-fight-against-disinformation; Dan Williams, “Misinformation Researchers Are Wrong: There Can’t Be a Science of Misleading Content,” Conspicuous Cognition, January 10, 2024, https://www.conspicuouscognition.com/p/misinformation-researchers-are-wrong; and Gavin Wilde, “From Panic to Policy: The Limits of Propaganda and the Foundations of an Effective Response,” Texas National Security Review (forthcoming 2024).

5 Jon Bateman, Elonnai Hickok, Laura Courchesne, Isra Thange, and Jacob N. Shapiro, “Measuring the Effects of Influence Operations: Key Findings and Gaps From Empirical Research,” Carnegie Endowment for International Peace, June 28, 2021, https://carnegieendowment.org/2021/06/28/measuring-effects-of-influence-operations-key-findings-and-gaps-from-empirical-research-pub-84824.

Case Study 1: Supporting Local Journalism

Key takeaways:

There is strong evidence that the decline of local news outlets, particularly newspapers, has eroded civic engagement, knowledge, and trust—helping disinformation to proliferate. Bolstering local journalism could plausibly help to arrest or reverse such trends, but this has not been directly tested. Cost is a major challenge, given the expense of quality journalism and the depth of the industry’s financial decline. Philanthropy can provide targeted support, such as seed money for experimentation. But a long-term solution would probably require government intervention and/or alternate business models. This could include direct subsidies (channeled through nongovernmental intermediaries) or indirect measures, such as tax exemptions and bargaining rights.

Description and Use Cases

Many analysts have called for investing in local journalism—especially print and digital media—as a way to counter disinformation. The hope is that high-quality local journalism can inform democratic deliberation, debunk false claims, and restore the feelings of trust and community that help to keep conspiracy theories at bay.1 More specifically, new financial investments would aim to halt or reverse the industry’s long-term financial deterioration. Local newspapers and other outlets have seen steady declines in ad revenue and readership for the last two decades, as the internet gave birth to more sophisticated forms of digital advertising and alternative sources of free information. According to one count, a fourth of the newspapers operating in the United States in 2004 had closed by the end of 2020.2 The COVID-19 pandemic accelerated this trend, causing widespread layoffs across print, broadcast, radio, and digital outlets.3 Such challenges have not been limited to the United States or Western countries: for example, COVID-19 “ravaged the revenue base” of Nigerian media organizations, according to one local publisher.4

New funding for local journalism could come from governments, philanthropists, commercial sources, or a combination of these. One model for government funding is the New Jersey Civic Information Consortium, a state-supported nonprofit. The consortium receives money from government and private sources, then disburses grants to projects that promote the “quantity and quality of civic information.”5 The use of a nonprofit intermediary aims to reduce the risk that government officials would leverage state funds to influence news coverage.6 Another model is for governments to use tax exemptions and other policy tools to financially boost the journalism industry without directly subsidizing it.7 In the United Kingdom, newspapers, books, and some news sites are exempt from the Value-Added Tax because of their public benefit.8 In Canada, people who purchase a digital news subscription can claim a tax exemption.9 Australia has taken another approach by passing legislation that empowers news publishers to jointly negotiate for compensation when platforms like Facebook and Google link to their content.10 Other advocates have proposed a tax on digital advertising that would be used to support journalism.11

Philanthropic support for local journalism can also come in various forms. Not-for-profit news outlets in North America currently get about half of their revenue from foundation grants, but national and global outlets receive more than two-thirds of these grant dollars.12 To bolster local outlets, a greater portion of grants could be redirected to them. The next largest source of funding for nonprofit newsrooms is individual gifts, which make up about 30 percent of revenue and primarily come from donations of $5,000 or more.13 However, small-dollar donations are growing; NewsMatch, a U.S. fundraising effort, encourages audiences to donate to local media organizations and matches individual donations with other sources of philanthropy. NewsMatch has raised more than $271 million since 2017.14

Multiple government, philanthropic, or commercial revenue streams can be combined in novel ways, as illustrated by Report for America. The initiative raised $8 million in 2022 to place reporting fellows in local newsrooms.15 A relatively small portion, about $650,000, was taxpayer money from the Corporation for Public Broadcasting.16 The remainder came from foundations and technology companies, matched dollar-for-dollar by contributions split between the local newsrooms themselves and other local funders.

How Much Do We Know?

Research is clear that the decline of local journalism is associated with the drivers of disinformation. However, the inverse proposition—that greater funding for local journalists will reduce disinformation—does not automatically follow and has not been empirically tested.

Going forward, decisionmakers and scholars could study the link between disinformation and the health of local media outlets more closely by monitoring and evaluating the impact of local news startups on a variety of metrics related to disinformation, such as polarization, professed trust in institutions like the media and government, civic engagement and voter turnout, and susceptibility to online rumors.

How Effective Does It Seem?

Studies suggest at least two mechanisms whereby the decline of local media outlets can fuel the spread of disinformation.

First, the decline contributes to civic ignorance and apathy as voters become less informed about the issues, candidates, and stakes in local elections. Research indicates that reduced access to local news is linked to lower voter turnout and civic engagement as well as increased corruption and mismanagement. At least one longitudinal study also links the decline of local news to diminished civic awareness and engagement.17 These conditions ultimately erode public trust, which can increase belief in misinformation and conspiracy theories.18 Conversely, scholarship has shown that strong local media is linked to robust civic participation. Many studies correlate the existence of local newspapers with higher turnout in local elections. And, at the individual level, a person’s consumption of local political news is associated with a higher likelihood of voting.19 These patterns can be seen in a variety of electoral contexts—including downballot and judicial elections—and across historical periods, despite changing technology.20 A study of U.S. history from 1869 to 2004 found that a community’s civic participation rose when its first newspaper was created, and that this connection persisted even after the introduction of radio and television.21

Second, when local media disappears, lower-quality information sources can fill the gap as people look elsewhere for information. Social media has emerged as a primary alternative.22 Although social media platforms contain plenty of accurate and authoritative voices, they also create opportunities for low-quality and hyperpartisan personalities and outlets (some of which pose as local newspapers) that spread misleading, divisive content.23 Indeed, research shows a connection between the decline of local media and the rise of polarization. For example, one study found that communities that lost their local newspaper became more polarized as voters replaced information from local media with more partisan cues picked up elsewhere, such as national cable TV.24 To be sure, polarizing content should not be equated with disinformation. Nevertheless, most analysts believe the two are linked: as voters drift away from the “mainstream” of the political spectrum—often, but not always, toward the right—they may become more accepting of less credible alternative media sources and misleading claims that align with their partisan preferences and demonize political opponents.25

Given the evidence that local media declines breed susceptibility to disinformation, it is reasonable to predict that efforts to bolster local media could have the opposite effect. However, that prediction has not yet been empirically tested. It is possible, for example, that people who have drifted from traditional local journalism toward social media as an information source might have developed new habits that would be difficult to reverse. Likewise, communities that have suffered a general loss of civic engagement and trust due to the decline of local media might now have less interest or faith in a startup newsroom than they previously would have.

How Easily Does It Scale?

Reversing the decline of local journalism is an extremely costly proposition, at least in the United States, because the scale of downsizing has been so large. A Georgetown University study found that newspapers employed 150,000 fewer people in 2022 compared to the 1980s—a decline of 63 percent. Although web publishers have replaced about half of those jobs, replacing the rest would require tremendous investment. For example, the American Journalism Project raised over $100 million to partially fund thirty-three nonprofit newsrooms—a small fraction of the 2,100 newsrooms that closed in the United States in the past two decades.26 Washington Post columnist Perry Bacon Jr. estimated in 2022 that it would cost at least $10 billion per year to hire 87,000 new journalists—that is, to ensure that each U.S. congressional district had 200 journalists, plus operational support.27 More localized coverage could be even costlier. In 2022, Democracy Fund created a calculator to estimate the total cost of meeting the information needs of every community in the United States. Hiring several reporters to cover crucial issues in each township and municipality would cost $52 billion per year.28
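
As a rough back-of-envelope check on Bacon’s figures (assuming the standard count of 435 U.S. congressional districts), the arithmetic works out as follows:

\[
435 \times 200 = 87{,}000 \text{ journalists}, \qquad
\frac{\$10\,\text{billion per year}}{87{,}000} \approx \$115{,}000 \text{ per journalist per year, covering salary and operational support.}
\]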

Philanthropy can provide targeted investments in particularly needy areas—for example, communities too small or poor to sustain local media on their own—and offer seed money to run experiments. But given the sums required, a large-scale solution would demand some combination of long-term government support, new journalistic business models, or other structural changes in the marketplace. The Australian bargaining law provides one promising case study. While critics said the approach would be unlikely to generate much revenue and would mostly benefit large publishers, an Australian government review found that Google and Meta reached thirty agreements with publications of varying size, including some groups of outlets. In its first year, the law raised more than $140 million for these outlets, much of which was used to hire new journalists and purchase equipment.29 Similar schemes are now being implemented in Canada and under consideration in California—though these efforts, like the Australian law, have faced strong initial pushback from big tech companies.30

Notes

1 Consider David Salvo, Jamie Fly, and Laura Rosenberger, “The ASD Policy Blueprint for Countering Authoritarian Interference in Democracies,” German Marshall Fund, June 26, 2018, https://www.gmfus.org/news/asd-policy-blueprint-countering-authoritarian-interference-democracies; “A Multi-dimensional Approach to Disinformation,” European Commission, 2018, https://op.europa.eu/en/publication-detail/-/publication/6ef4df8b-4cea-11e8-be1d-01aa75ed71a1; and Edward Lucas and Peter Pomerantsev, “Winning the Information War: Techniques and Counter-strategies to Russian Propaganda in Central and Eastern Europe,” Center for European Policy Analysis, 2016, https://cepa.ecms.pl/files/?id_plik=2773.

2 Tom Stites, “A Quarter of All U.S. Newspapers Have Died in 15 Years, a New UNC News Deserts Study Found,” Poynter Institute, June 24, 2020, https://www.poynter.org/locally/2020/unc-news-deserts-report-2020/.

3 Penelope Muse Abernathy, “News Deserts and Ghost Newspapers: Will Local News Survive?,” University of North Carolina, 2020, https://www.usnewsdeserts.com/reports/news-deserts-and-ghost-newspapers-will-local-news-survive/; and “The Tow Center COVID-19 Newsroom Cutback Tracker,” Tow Center for Digital Journalism, September 9, 2020, https://www.cjr.org/widescreen/covid-cutback-tracker.php.

4 For example, see Dapo Olorunyomi, “Surviving the Pandemic: The Struggle for Media Sustainability in Africa,” National Endowment for Democracy, January 2021, https://www.ned.org/wp-content/uploads/2021/01/Pandemic-Struggle-Media-Sustainability-Africa-Olorunyomi.pdf.

5 “About the Consortium,” New Jersey Civic Information Consortium, accessed January 27, 2023, https://njcivicinfo.org/about/.

6 Anya Schiffrin, ed., In the Service of Power: Media Capture and the Threat to Democracy, Center for International Media Assistance, 2017, https://www.cima.ned.org/resource/service-power-media-capture-threat-democracy/.

7 Consider “INN Mission & History,” Institute for Nonprofit News, accessed January 27, 2023, https://inn.org/about/who-we-are/.

8 Jim Waterson, “VAT Ruling on Times Digital Edition Could Save News UK Millions,” Guardian, January 6, 2020, https://www.theguardian.com/media/2020/jan/06/vat-ruling-on-times-digital-edition-could-save-news-uk-millions.

9 “About the Digital News Subscription Tax Credit,” Government of Canada, accessed March 24, 2023, https://www.canada.ca/en/revenue-agency/services/tax/individuals/topics/about-your-tax-return/tax-return/completing-a-tax-return/deductions-credits-expenses/deductions-credits-expenses/digital-news-subscription.html.

10 “News Media Bargaining Code,” Australian Competition & Consumer Commission, accessed January 27, 2023, https://www.accc.gov.au/by-industry/digital-platforms-and-services/news-media-bargaining-code/news-media-bargaining-code.

11 Julia Angwin, “Can Taxing Big Tech Save Journalism?” Markup, July 16, 2022, https://themarkup.org/newsletter/hello-world/can-taxing-big-tech-save-journalism.

12 “INN Index 2022: Enduring in Crisis, Surging in Local Communities,” Institute for Nonprofit News, July 27, 2022, https://inn.org/research/inn-index/inn-index-2022/; and “Newsmatch,” Institute for Nonprofit News, accessed April 18, 2023, https://newsmatch.inn.org/.

13 “INN Index 2022,” Institute for Nonprofit News.

14 “Newsmatch,” Institute for Nonprofit News.

15 “About Us,” Report for America, accessed January 27, 2023, https://www.reportforamerica.org/about-us/.

16 “Supporting Report for America,” Report for America, accessed December 23, 2023, https://www.reportforamerica.org/supporters.

17 Danny Hayes and Jennifer L. Lawless, “The Decline of Local News and Its Effects: New Evidence from Longitudinal Data,” Journal of Politics 80, no. 1 (January 2018): https://www.dannyhayes.org/uploads/6/9/8/5/69858539/decline.pdf.

18 The relationship between disinformation and trust in media, government, and other institutions is complex. Exposure to false content online is associated with lower trust in media but higher trust in government for conservatives when their preferred party is in power. Lack of trust in institutions is associated with higher belief in conspiracy theories, for example in the context of COVID-19 vaccination. See Katherine Ognyanova, David Lazer, Ronald E. Robertson, and Christo Wilson, “Misinformation in Action: Fake News Exposure Is Linked to Lower Trust in Media, Higher Trust in Government When Your Side Is in Power,” Harvard Kennedy School Misinformation Review, June 2, 2020, https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power; and Will Jennings et al., “Lack of Trust, Conspiracy Beliefs, and Social Media Use Predict COVID-19 Vaccine Hesitancy,” Vaccines 9, no. 6 (June 2021): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8226842. See also Jay Jennings and Meghan Rubado, “Newspaper Decline and the Effect on Local Government Coverage,” University of Texas at Austin, November 2019, https://moody.utexas.edu/sites/default/files/Strauss_Research_Newspaper_Decline_2019-11-Jennings.pdf; Jackie Filla and Martin Johnson, “Local News Outlets and Political Participation,” Urban Affairs Review 45, no. 5 (2010): https://journals.sagepub.com/doi/abs/10.1177/1078087409351947?journalCode=uarb; “2021 Edelman Trust Barometer,” Edelman, 2021, https://www.edelman.com/trust/2021-trust-barometer; and Jeffrey Hiday, “Combating Disinformation by Bolstering Truth and Trust,” RAND Corporation, May 24, 2020, https://www.rand.org/pubs/articles/2022/combating-disinformation-by-bolstering-truth-and-trust.html.

19 Martin Baekgaard, Carsten Jensen, Peter B. Mortensen, and Søren Serritzlew, “Local News Media and Voter Turnout,” Local Government Studies 40 (2014): https://www.tandfonline.com/doi/abs/10.1080/03003930.2013.834253.

20 Christopher Chapp and Peter Aehl, “Newspapers and Political Participation: The Relationship Between Ballot Rolloff and Local Newspaper Circulation,” Newspaper Research Journal 42, no. 2 (2021): https://journals.sagepub.com/doi/10.1177/07395329211014968; and David Hughes, “Does Local Journalism Stimulate Voter Participation in State Supreme Court Elections?,” Journal of Law and Courts 8, no. 1 (2020): https://www.cambridge.org/core/journals/journal-of-law-and-courts/article/abs/does-local-journalism-stimulate-voter-participation-in-state-supreme-court-elections/CE8E2CBDF4CF9C58DF08A013AE8B05A3.

21 Matthew Gentzkow, Jesse M. Shapiro, and Michael Sinkinson, “The Effect of Newspaper Entry and Exit on Electoral Politics,” American Economic Review 101 (December 2011): https://web.stanford.edu/~gentzkow/research/voting.pdf. For a roundup of this research, see Josh Stearns and Christine Schmidt, “How We Know Journalism Is Good for Democracy,” Democracy Fund, September 15, 2022, https://democracyfund.org/idea/how-we-know-journalism-is-good-for-democracy/.

22 David S. Ardia, Evan Ringel, Victoria Ekstrand, and Ashley Fox, “Addressing the Decline of Local News, Rise of Platforms, and Spread of Mis- and Disinformation Online: A Summary of Current Research and Policy Proposals,” University of North Carolina, December 22, 2020, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3765576.

23 Jessica Mahone and Philip Napoli, “Hundreds of Hyperpartisan Sites Are Masquerading as Local News. This Map Shows If There’s One Near You,” Nieman Lab, July 13, 2020, https://www.niemanlab.org/2020/07/hundreds-of-hyperpartisan-sites-are-masquerading-as-local-news-this-map-shows-if-theres-one-near-you/.

24 Joshua P. Darr, Matthew P. Hitt, and Johanna L. Dunaway, “Newspaper Closures Polarize Voting Behavior,” Journal of Communication 68, no. 6 (December 2018): https://academic.oup.com/joc/article-abstract/68/6/1007/5160090.

25 See Imelda Deinla, Gabrielle Ann S. Mendoza, Kier Jesse Ballar, and Jurel Yap, “The Link Between Fake News Susceptibility and Political Polarization of the Youth in the Philippines,” Ateneo School of Government, Working Paper no. 21-029, November 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964492; and Mathias Osmundsen, Michael Bang Petersen, and Alexander Bor, “How Partisan Polarization Drives the Spread of Fake News,” Brookings Institution, May 13, 2021, https://www.brookings.edu/articles/how-partisan-polarization-drives-the-spread-of-fake-news/. In the United States, Yochai Benkler and others have argued that asymmetric polarization—with the right drifting from the center faster and farther than the left—has been driven at least in part by media dynamics. Specifically, right-leaning media across cable television, radio, and the internet are less connected to mainstream media than their left-leaning counterparts. See Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman, “Study: Breitbart-Led Right-Wing Media Ecosystem Altered Broader Media Agenda,” Columbia Journalism Review, March 3, 2017, https://www.cjr.org/analysis/breitbart-media-trump-harvard-study.php. Not all analysts believe polarization is a bad thing; for example, some argue that polarization provides voters more distinct choices and has led to increased political participation. Others have argued that polarization contributes to crisis flash points that disrupt a problematic status quo in ways that are ultimately healthy. Consider Jessica Rettig, “Why Political Polarization Might Be Good for America,” U.S. News, May 27, 2010, https://www.usnews.com/opinion/articles/2010/05/27/why-political-polarization-might-be-good-for-america; see also Peter T. Coleman, “The US Is Suffering From Toxic Polarization. That’s Arguably a Good Thing,” Scientific American, April 2, 2021, https://www.scientificamerican.com/article/the-u-s-is-suffering-from-toxic-polarization-thats-arguably-a-good-thing. The United States is an international outlier on polarization. A review by Jennifer McCoy and Benjamin Press found that the United States is “the only advanced Western democracy to have faced such intense polarization for such an extended period.” Their study suggests grim outcomes from high levels of polarization. McCoy and Press examine a sample of fifty-two democratic societies suffering “pernicious polarization,” defined “as the division of society into mutually distrustful political camps in which political identity becomes a social identity.” They find that half of the cases faced democratic erosion, and fewer than a fifth were able to sustain a decline in pernicious polarization. Jennifer McCoy and Benjamin Press, “What Happens When Democracies Become Perniciously Polarized?” Carnegie Endowment for International Peace, January 18, 2022, https://carnegieendowment.org/2022/01/18/what-happens-when-democracies-become-perniciously-polarized-pub-86190.

26 Stites, “Quarter of All U.S. Newspapers”; “Fiscal Year 2022 Operating Budget,” Corporation for Public Broadcasting, accessed January 27, 2023, https://www.cpb.org/aboutcpb/financials/budget; Anthony P. Carnevale and Emma Wenzinger, “Stop the Presses: Journalism Employment and Economic Value of 850 Journalism and Communication Programs,” Georgetown University Center on Education and Workforce, 2022, https://cewgeorgetown.wpenginepowered.com/wp-content/uploads/cew-journalism-fr.pdf.

27 Perry Bacon, Jr., “America Should Spend Billions to Revive Local News,” Washington Post, October 17, 2022, https://www.washingtonpost.com/opinions/2022/10/17/local-news-crisis-plan-fix-perry-bacon/.

28 “National Ecosystem Calendar,” Democracy Fund, accessed April 18, 2023, https://oneroyalace.github.io/news-ecosystem-model/national_calculator.html.

29 See Brian Fung, “Meta Avoids Showdown Over News Content in US After Journalism Bargaining Bill Shelved,” CNN, December 7, 2022, https://www.cnn.com/2022/12/07/tech/meta-journalism-bargaining-bill/index.html; Joshua Benton, “Don’t Expect McConnell’s Paradox to Help News Publishers Get Real Money Out of Google and Facebook,” Nieman Lab, January 8, 2020, https://www.niemanlab.org/2020/01/dont-expect-mcconnells-paradox-to-help-news-publishers-get-real-money-out-of-google-and-facebook/; Jeff Jarvis, “As Rupert Murdoch Works to Dismantle the Internet, Why Are Other Media Outlets Helping Him?,” Crikey, February 15, 2021, https://www.crikey.com.au/2021/02/15/rupert-murdoch-news-media-bargaining-code/; Josh Frydenberg, “Review of the News Media and Digital Platforms Mandatory Bargaining Code,” Australian Department of the Treasury, February 2022, https://ministers.treasury.gov.au/ministers/josh-frydenberg-2018/media-releases/review-news-media-and-digital-platforms-mandatory; and Anya Schiffrin, “Australia’s News Media Bargaining Code Pries $140 Million From Google and Facebook,” Poynter Institute, August 16, 2022, https://www.poynter.org/business-work/2022/australias-news-media-bargaining-code-pries-140-million-from-google-and-facebook.

30 Max Matza, “Google and Canada Reach Deal to Avert News Ban Over Online News Act,” BBC, November 29, 2023, https://www.bbc.com/news/world-us-canada-67571027; and Jaimie Ding, “California Bill Requiring Big Tech to Pay for News Placed on Hold Until 2024,” Los Angeles Times, July 7, 2023, https://www.latimes.com/business/story/2023-07-07/california-journalism-bill-on-hold-until-2024.

Case Study 2: Media Literacy Education

Key takeaways:

- There is significant evidence that media literacy training can help people identify false stories and unreliable news sources. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another.
- The most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information—instilling confidence and a sense of responsibility alongside skills development.
- While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Reaching large numbers of people, including those most susceptible to disinformation, is expensive and takes many years.

Description and Use Cases

Increasing individuals’ media literacy through education and training is one of the most frequently recommended countermeasures against disinformation.1 Proponents argue that “media literacy and critical thinking are the first barrier to deception” and that teaching people these skills therefore enables them to better identify false claims.2 The National Association for Media Literacy Education defines media literacy as “the ability to access, analyze, evaluate, create, and act using all forms of communication.” However, scholars point to conceptual confusion around the term, and practitioners take many different approaches.3 Common goals include instilling knowledge of the media industry and journalistic practices, awareness of media manipulation and disinformation techniques, and familiarity with the internet and digital technologies.

Media literacy education initiatives target a range of different audiences, occur in multiple settings, and use a variety of methods—including intensive classroom-based coursework as well as short online videos and games. Many programs focus on children and adolescents,4 with research suggesting that young people are less familiar with the workings of the internet and digital media and more susceptible to online hoaxes and propaganda than commonly assumed.5 For example, a 2016 study of over 7,800 students found many failed to distinguish sponsored content and untrustworthy websites in search results.6 Public education is therefore one major vehicle to reach large numbers of people early in their lives, alongside other kinds of youth programs. Aspects of media literacy have long been embedded in general education and liberal arts curricula in advanced democracies, especially in subjects that emphasize critical reading and thinking, such as language arts, essay writing, civics, and rhetoric. Public libraries have also historically promoted media literacy.

Not all media literacy programs target young people. After all, people don’t necessarily age out of their susceptibility to disinformation; in fact, older individuals seem more likely to share false stories on Facebook.7 Media literacy training for adults may take place at libraries, senior citizen centers, and recreational events, or in professional settings. Civil society groups and government agencies have also run public awareness campaigns and released gamified education tools. For example, Sweden established a Psychological Defence Agency in 2022. Its responsibilities include leading “training, exercises and knowledge development” to help residents “identify and counter foreign malign information influence, disinformation and other dissemination of misleading information directed at Sweden.”8

One valuable case study is the Learn to Discern program run by the International Research and Exchanges Board (IREX), which has used a “train the trainers” approach in Ukraine and a number of other countries since 2015. This program equips volunteers to deliver a media literacy curriculum to members of their community.9 Reaching more vulnerable adults (for example, racial and ethnic minorities and those with fewer economic resources, less education, or less experience with the internet) is a policy priority for governments focused on media literacy.10

How Much Do We Know?

The body of scholarship on media literacy is large relative to most other disinformation countermeasures. For example, a 2022 literature review on digital literacy—one component of media literacy—found forty-three English-language studies since 2001, with thirty-three of these published since 2017, when interest in the topic swelled.11 The existence of dedicated journals and conferences is another indicator of growth in this subfield. For example, the National Association for Media Literacy Education published the first issue of the Journal of Media Literacy Education in 2009.12 Other major repositories of research on media literacy include a database maintained by the United Nations Alliance of Civilizations.13

A review of this literature shows that specific media literacy approaches have a strong theoretical basis and a large body of experimental evidence. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another.14 Moreover, the lack of robust mechanisms for collecting data on classroom activities is a recognized gap. In 2018, the Media Literacy Programme Fund in the United Kingdom (considered a leader in media literacy education) cited grants to support evaluation as a priority.15 Since then, several studies have conducted real-time evaluation and sought to measure lasting improvements in student performance. Additional studies could expand the menu of possible approaches to evaluation; it would also be useful to further examine the effectiveness of media literacy training for atypical individuals at the extremes, such as those who are especially motivated by partisanship, conspiracy theories, or radical ideologies.

How Effective Does It Seem?

There is significant evidence that media literacy training can help people identify false stories and unreliable news sources.16 Scholars sometimes refer to this as inoculation, because “preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation.”17 One experiment found that playing an online browser game designed to expose players to six different disinformation strategies reduced subjects’ susceptibility to false claims, especially among those users who were initially most vulnerable to being misled. Such laboratory findings are bolstered by studies of larger, real-world interventions. An evaluation of IREX’s Learn to Discern program found durable increases in good media consumption habits, such as checking multiple sources, lasting up to eighteen months after delivery of the training.18 Other studies support teaching students to read “laterally”—using additional, trusted sources to corroborate suspect information.19

Because media literacy comes in many forms, it is important to assess which variants are most effective at reducing belief in false stories so trainers and educators can prioritize them. Research suggests that the most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information. This has been described as “actionable skepticism,” or sometimes simply as “information literacy.”20 For example, a 2019 study in American Behavioral Scientist examined various factors that might enable someone to recognize false news stories. The researchers found that people’s “abilities to navigate and find information online that is verified and reliable”—for example, differentiating between an encyclopedia and a scientific journal—were an important predictor. In contrast, subjects’ understanding of the media industry and journalistic practices or their self-reported ability to “critically consume, question, and analyze information” were not predictive.21 Later research based on survey data also supported these findings.22

Importantly, multiple studies have shown that effective media literacy depends not only on people’s skills but also on their feelings and self-perceptions. Specifically, individuals who feel confident in their ability to find high-quality news sources, and who feel responsible for proactively doing so, are less likely to believe misleading claims. This factor is often called an individual’s “locus of control,” and it has been identified as important in studies of multiple nationally and demographically diverse populations.23 People who purposefully curate their information diet are less likely to be misled; passive consumers, on the other hand, are more vulnerable. However, this may be truer of typical news consumers than of outliers like extremists and very motivated partisans. The latter groups might self-report confidence in curating their media diet while nevertheless selecting for misleading, radical, or hyper-partisan sources.

A growing body of recent literature based on large-scale classroom studies shows how specific techniques can provide news consumers with greater agency and ability to seek out accurate information.24 Whereas past forms of online media literacy education often focused on identifying markers of suspicious websites—like typographical errors or other indicators of low quality—these signs are less useful in the modern information environment, where sources of misinformation can have the appearance of high production value for low cost.25 Recent studies have shown that lateral reading is more effective.26 In one study of students at a public college in the northeastern United States, only 12 percent of subjects used lateral reading before receiving training on how to do so; afterward, more than half did, and students showed an overall greater ability to discern true claims from fictional ones.27 A similar study of university students in California found these effects endured after five weeks.28 Another one-day exercise with American middle school students found that participants had a difficult time overcoming impressions formed from “superficial features” on websites; the authors concluded that students should be trained to recognize different types of information sources, question the motivation behind them, and—crucially—compare those sources with known trustworthy sites.29

Teaching people to recognize unreliable news sources and common media manipulation tactics becomes even more effective when participants are also able to improve their locus of control, according to academic research and program evaluations. In a study of media literacy among 500 teenagers, researchers found that students with higher locus of control were more resilient against false stories. In another study based on survey data, researchers found that individuals who exhibited high locus of control and the ability to identify false stories were more likely to take corrective action on social media, such as reporting to the platform or educating the poster.30 (The participatory nature of social media increases the importance of educating users not only on how to recognize untrustworthy content but also on how to respond to and avoid sharing it.31)

Evaluations of IREX’s Learn to Discern program in Ukraine and a similar program run by PEN America in the United States shed further light on locus of control. These curricula’s focus on identifying untrustworthy content led subjects to become overly skeptical of all media. While trainees’ ability to identify disinformation and their knowledge of the news media increased, their locus of control changed only slightly. Ultimately, trainees’ ability to identify accurate news stories did not improve, and they remained distrustful of the media as a whole.32 A major challenge, then, is news consumers who feel under threat from the information environment rather than empowered to inform themselves. One potential intervention point could be social media platforms, which can provide tools and make other design choices to help users compare on-platform information with credible external sources (see case study 4). This could reinforce users’ locus of control while assisting them in exercising it.

Educators should be mindful of media literacy expert Paul Mihailidis’s warning that “critical thought can quickly become cynical thought.”33 In a 2018 essay, media scholar danah boyd argued that individuals who are both cynical about institutions and equipped to critique them can become believers in, and advocates for, conspiracy theories and disinformation. To avoid this trap, media literacy education must be designed carefully. This means empowering people to engage with media critically, constructively, and discerningly rather than through the lenses of undifferentiated paranoia and distrust.34

How Easily Does It Scale?

While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Many approaches will take years to reach large numbers of people, including many vulnerable and hard-to-reach populations. Attempts to reach scale through faster, leaner approaches, like gamified online modules or community-based efforts to train the trainers, rely on voluntary participation and are most likely to reach already motivated individuals rather than large percentages of the public.

Many media literacy projects are not particularly expensive to deliver to small audiences. However, achieving wide impact requires high-scale delivery, such as integrating media literacy into major institutions like public education—a costly proposition. When a proposed 2010 bill in the U.S. Congress, the Healthy Media for Youth Act, called for $40 million for youth media literacy initiatives, leading scholars deemed the amount insufficient and advocated for larger financial commitments from the government, foundations, and the private sector.35

Once the resources and curricula are in place, it will still take time to develop the infrastructure needed to implement large-scale media literacy programs. For example, hiring skilled educators is a critical yet difficult task. Studies from the European Union (EU) and South Africa both identified major deficiencies in teachers’ own abilities to define core media literacy concepts or practice those concepts themselves.36

Notes

1 For examples, see Lucas and Pomerantsev, “Winning the Information War”; Katarína Klingová and Daniel Milo, “Countering Information War Lessons Learned from NATO and Partner Countries: Recommendations and Conclusions,” GLOBSEC, February 2017, https://www.globsec.org/what-we-do/publications/countering-information-war-lessons-learned-nato-and-partner-countries; Claire Wardle and Hossein Derakhshan, “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making,” Council of Europe, September 2017, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c; Daniel Fried and Alina Polyakova, “Democratic Defense Against Disinformation,” Atlantic Council, February 2018, https://www.atlanticcouncil.org/wp-content/uploads/2018/03/Democratic_Defense_Against_Disinformation_FINAL.pdf; “A Multi-dimensional Approach,” European Commission; Erik Brattberg and Tim Maurer, “Russian Election Interference: Europe’s Counter to Fake News and Cyber Attacks,” Carnegie Endowment for International Peace, May 2018, https://carnegieendowment.org/files/CP_333_BrattbergMaurer_Russia_Elections_Interference_FINAL.pdf; “Action Plan Against Disinformation,” European Commission, May 2018, https://www.eeas.europa.eu/node/54866_en; Salvo, Fly, and Rosenberger, “The ASD Policy Blueprint”; Jean-Baptiste Jeangène Vilmer, Alexandre Escorcia, Marine Guillaume, and Janaina Herrera, “Information Manipulation: A Challenge for Our Democracies,” French Ministry for Europe and Foreign Affairs and the Institute for Strategic Research, August 2018, https://www.diplomatie.gouv.fr/IMG/pdf/information_manipulation_rvb_cle838736.pdf; Todd C. Helmus et al., “Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe,” RAND Corporation, 2018, https://www.rand.org/pubs/research_reports/RR2237.html; and Paul Barrett, “Tackling Domestic Disinformation: What the Social Media Companies Need to Do,” New York University, March 2019, https://issuu.com/nyusterncenterforbusinessandhumanri/docs/nyu_domestic_disinformation_digital?e=31640827/68184927.

2 Klingová and Milo, “Countering Information War.”

3 “Media Literacy Defined,” National Association for Media Literacy Education, accessed February 13, 2023, https://namle.net/resources/media-literacy-defined/. See also Monica Bulger and Patrick Davison, “The Promises, Challenges, and Futures of Media Literacy,” Data & Society, February 21, 2018, https://datasociety.net/library/the-promises-challenges-and-futures-of-media-literacy/; and Géraldine Wuyckens, Normand Landry, and Pierre Fastrez, “Untangling Media Literacy, Information Literacy, and Digital Literacy: A Systematic Meta-review of Core Concepts in Media Education,” Journal of Media Literacy Education 14, no. 1 (2022): https://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1531&context=jmle.

4 Renee Hobbs, “Digital and Media Literacy: A Plan of Action,” Aspen Institute, 2010, https://www.aspeninstitute.org/wp-content/uploads/2010/11/Digital_and_Media_Literacy.pdf; and “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport, July 2021, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1004233/DCMS_Media_Literacy_Report_Roll_Out_Accessible_PDF.pdf.

5 Tiffany Hsu, “When Teens Find Misinformation, These Teachers Are Ready,” New York Times, September 8, 2022, https://www.nytimes.com/2022/09/08/technology/misinformation-students-media-literacy.html; “A Global Study on Information Literacy: Understanding Generational Behaviors and Concerns Around False and Misleading Information Online,” Poynter Institute, August 2022, https://www.poynter.org/wp-content/uploads/2022/08/A-Global-Study-on-Information-Literacy-1.pdf; and Elena-Alexandra Dumitru, “Testing Children and Adolescents’ Ability to Identify Fake News: A Combined Design of Quasi-Experiment and Group Discussions,” Societies 10, no. 3 (September 2020): https://www.mdpi.com/2075-4698/10/3/71/htm.

6 Sam Wineburg, Sarah McGrew, Joel Breakstone, and Teresa Ortega, “Evaluating Information: The Cornerstone of Civic Online Reasoning,” Stanford Digital Repository, November 22, 2016, https://purl.stanford.edu/fv751yt5934.

7 For evidence that older users are more likely to share false stories on Facebook, see Andrew Guess, Jonathan Nagler, and Joshua Tucker, “Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook,” Science Advances 5, no. 1 (2019): https://www.science.org/doi/10.1126/sciadv.aau4586.

8 Elisabeth Braw, “Create a Psychological Defence Agency to ‘Prebunk’ Fake News,” Prospect, December 8, 2022, https://www.prospectmagazine.co.uk/politics/60291/create-a-psychological-defence-agency-to-prebunk-fake-news; and Adela Suliman, “Sweden Sets Up Psychological Defense Agency to Fight Fake News, Foreign Interference,” Washington Post, January 6, 2022, https://www.washingtonpost.com/world/2022/01/06/sweden-fake-news-psychological-defence-agency.

9 Erin Murrock, Joy Amulya, Mehri Druckman, and Tetiana Liubyva, “Winning the War on State-Sponsored Propaganda: Gains in the Ability to Detect Disinformation a Year and a Half After Completing a Ukrainian News Media Literacy Program,” Journal of Media Literacy Education 10, no. 2 (2018): https://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1361&context=jmle.

10 “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport; and Kara Brisson-Boivin and Samantha McAleese, “From Access to Engagement: Building a Digital Media Literacy Strategy for Canada,” MediaSmarts, 2022, https://mediasmarts.ca/research-reports/access-engagement-building-digital-media-literacy-strategy-canada.

11 Hasan Tinmaz, Yoo-Taek Lee, Mina Fanea-Ivanovici, and Hasnan Baber, “A Systematic Review on Digital Literacy,” Smart Learning Environments 9 (2022), https://slejournal.springeropen.com/articles/10.1186/s40561-022-00204-y.

12 “History,” National Association for Media Literacy Education, accessed February 13, 2023, https://namle.net/about/history/.

13 “Media & Information Literacy,” UN Alliance of Civilizations, accessed March 26, 2023, https://milunesco.unaoc.org/mil-organizations/acma-digital-media-literacy-research-program.

14 “Media & Information Literacy,” UN Alliance of Civilizations.

15 “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport; “Media Literacy Programme Fund,” Government of the United Kingdom, accessed March 26, 2023, https://www.gov.uk/guidance/media-literacy-programme-fund; and Bulger and Davison, “Promises, Challenges, and Futures.”

16 Consider Bulger and Davison, “Promises, Challenges, and Futures,” as well as Theodora Dame Adjin-Tettey, “Combating Fake News, Disinformation, and Misinformation: Experimental Evidence for Media Literacy Education,” Cogent Arts & Humanities 9 (2022): https://www.tandfonline.com/doi/full/10.1080/23311983.2022.2037229.

17 Jon Roozenbeek and Sander van der Linden, “Fake News Game Confers Psychological Resistance Against Online Misinformation,” Palgrave Communications 5 (2019): https://www.nature.com/articles/s41599-019-0279-9.

18 Murrock, Amulya, Druckman, and Liubyva, “Winning the War.”

19 Carl-Anton Werner Axelsson, Mona Guath, and Thomas Nygren, “Learning How to Separate Fake From Real News: Scalable Digital Tutorials Promoting Students’ Civic Online Reasoning,” Future Internet 13, no. 3 (2021): https://www.mdpi.com/1999-5903/13/3/60.

20 Jennifer Fleming, “Media Literacy, News Literacy, or News Appreciation? A Case Study of the News Literacy Program at Stony Brook University,” Journalism & Mass Communication Educator 69, no. 2 (2013): https://journals.sagepub.com/doi/abs/10.1177/1077695813517885.

21 Because the measurement of media literacy was self-reported, the study posits this as an example of the “Dunning-Kruger effect”: an individual’s (over)confidence in their ability to critically consume media is related to their susceptibility to deception. See Mo Jones-Jang, Tara Mortensen, and Jingjing Liu, “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t,” American Behavioral Scientist (August 2019): https://www.researchgate.net/publication/335352499_Does_Media_Literacy_Help_Identification_of_Fake_News_Information_Literacy_Helps_but_Other_Literacies_Don't.

22 Brigitte Huber, Porismita Borah, and Homero Gil de Zúñiga, “Taking Corrective Action When Exposed to Fake News: The Role of Fake News Literacy,” Journal of Media Literacy Education 14 (July 2022): https://www.researchgate.net/publication/362513295_Taking_corrective_action_when_exposed_to_fake_news_The_role_of_fake_news_literacy.

23 Murrock, Amulya, Druckman, and Liubyva, “Winning the War”; and “Impact Report: Evaluating PEN America's Media Literacy Program,” PEN America & Stanford Social Media Lab, September 2022, https://pen.org/report/the-impact-of-community-based-digital-literacy-interventions-on-disinformation-resilience. See also Yan Su, Danielle Ka Lai Lee, and Xizhu Xiao, “‘I Enjoy Thinking Critically, and I’m in Control’: Examining the Influences of Media Literacy Factors on Misperceptions Amidst the COVID-19 Infodemic,” Computers in Human Behavior 128 (2022): https://www.sciencedirect.com/science/article/pii/S0747563221004349, a study based on subjects in China. The similarity of the findings across the United States, Ukraine, and China—despite significant differences in the three countries’ media systems and histories—is noteworthy.

24 See generally: Folco Panizza et al., “Lateral Reading and Monetary Incentives to Spot Disinformation About Science,” Scientific Reports 12 (2022): https://www.nature.com/articles/s41598-022-09168-y; Sam Wineburg et al., “Lateral Reading on the Open Internet: A District-Wide Field Study in High School Government Classes,” Journal of Educational Psychology 114, no. 5 (2022): https://www.studocu.com/id/document/universitas-kristen-satya-wacana/social-psychology/lateral-reading-on-the-open-internet-a-district-wide-field-study-in-high-school-government-classes/45457099; and Joel Breakstone et al., “Lateral Reading: College Students Learn to Critically Evaluate Internet Sources in an Online Course,” Harvard Kennedy School Misinformation Review 2 (2021), https://misinforeview.hks.harvard.edu/article/lateral-reading-college-students-learn-to-critically-evaluate-internet-sources-in-an-online-course.

25 For more on this method, its success in classroom trials, and its departure from previous forms of media literacy education, see D. Pavlounis, J. Johnston, J. Brodsky, and P. Brooks, “The Digital Media Literacy Gap: How to Build Widespread Resilience to False and Misleading Information Using Evidence-Based Classroom Tools,” CIVIX Canada, November 2021, https://ctrl-f.ca/en/wp-content/uploads/2021/11/The-Digital-Media-Literacy-Gap-Nov-7.pdf.

26 Axelsson, Guath, and Nygren, “Learning How to Separate.”

27 Jessica E. Brodsky et al., “Associations Between Online Instruction in Lateral Reading Strategies and Fact-Checking COVID-19 News Among College Students,” AERA Open (2021): https://journals.sagepub.com/doi/full/10.1177/23328584211038937.

28 Sarah McGrew, Mark Smith, Joel Breakstone, Teresa Ortega, and Sam Wineburg, “Improving University Students’ Web Savvy: An Intervention Study,” British Journal of Educational Psychology 89, no. 3 (September 2019): https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjep.12279.

29 Angela Kohnen, Gillian Mertens, and Shelby Boehm, “Can Middle Schoolers Learn to Read the Web Like Experts? Possibilities and Limits of a Strategy-Based Intervention,” Journal of Media Literacy Education 12, no. 2 (2020): https://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1457&context=jmle.

30 Adam Maksl, Seth Ashley, and Stephanie Craft, “Measuring News Media Literacy,” Journal of Media Literacy Education 6 (2015), https://digitalcommons.uri.edu/jmle/vol6/iss3/3/; and Huber, Borah, and Gil de Zúñiga, “Taking Corrective Action.”

31 Bulger and Davison, “Promises, Challenges, and Futures.”

32 Like Jones-Jang, Mortensen, and Liu, the authors of the IREX evaluation suggest that the “false sense of control” already felt by individuals who did not receive media literacy training may also partially explain the relatively small improvements in these subjects’ locus of control.

33 Paul Mihailidis, “Beyond Cynicism: Media Education and Civic Learning Outcomes in the University,” International Journal of Learning and Media 1, no. 3 (August 2009): https://www.researchgate.net/publication/250958225_Beyond_Cynicism_Media_Education_and_Civic_Learning_Outcomes_in_the_University.

34 danah boyd, “You Think You Want Media Literacy… Do You?,” apophenia, March 9, 2018, https://www.zephoria.org/thoughts/archives/2018/03/09/you-think-you-want-media-literacy-do-you.html.

35 Hobbs, “Digital and Media Literacy.”

36 Sandy Zinn, Christine Stilwell, and Ruth Hoskins, “Information Literacy Education in the South African Classroom: Reflections from Teachers’ Journals in the Western Cape Province,” Libri 66 (April 2016): https://www.degruyter.com/document/doi/10.1515/libri-2015-0102/html; and Maria Ranieri, Isabella Bruni, and Anne-Claire Orban de Xivry, “Teachers’ Professional Development on Digital and Media Literacy. Findings and Recommendations From a European Project,” Research on Education and Media 9, no. 2 (2017): https://sciendo.com/article/10.1515/rem-2017-0009.

Case Study 3: Fact-Checking

Key takeaways:

- A large body of research indicates that fact-checking can be an effective way to correct false beliefs about specific claims, especially for audiences that are not heavily invested in the partisan elements of the claims.
- However, influencing factual beliefs does not necessarily result in attitudinal or behavioral changes, such as reduced support for a deceitful politician or a baseless policy proposal.
- Moreover, the efficacy of fact-checking depends a great deal on contextual factors—such as wording, presentation, and source—that are not well understood. Even so, fact-checking seems unlikely to cause a backfire effect that leads people to double down on false beliefs.
- Fact-checkers face a structural disadvantage in that false claims can be created more cheaply and disseminated more quickly than corrective information; conceivably, technological innovations could help shift this balance.

Description and Use Cases

Fact-checking, in this report, refers broadly to the issuance of corrective information to debunk a false or misleading claim. A 2020 global survey by Carnegie identified 176 initiatives focused on fact-checking and journalism, while the Duke University Reporters’ Lab counted more than 400 active fact-checking efforts across more than 100 countries in 2023.1 These initiatives come in many different forms. They include dedicated, stand-alone organizations, such as Snopes, as well as fact-checkers integrated into newspapers and TV programs. Some prioritize political claims, like the Washington Post’s “Fact Checker” and the website PolitiFact. Others address health claims, like the CoronaVirusFacts/DatosCoronaVirus Alliance Database led by the International Fact-Checking Network at the Poynter Institute.2

Collaborative fact-checking models uniting the efforts of several organizations have also emerged, like Verificado 2018, an effort to collect rumors and disinformation circulating on WhatsApp during the 2018 Mexican elections and deliver corrections through private messaging.3 Projects like this attempt to quickly reach a large audience through a medium people already use. Other initiatives in multiple countries have attempted to crowdsource from citizen fact-checkers.

In recent years, some social media companies have highlighted fact-checks on their platforms and used the assessments of fact-checkers to inform other policy actions. For example, Meta’s third-party fact-checking program routes Facebook and Instagram posts that contain potential falsehoods to fact-checkers certified through the International Fact-Checking Network and applies a label if the posts are false or disputed.4 (For more on social media labeling, see case study 4.) Beyond social media, fact-checks can also be disseminated on dedicated websites or during televised political debates, among other possibilities.

How Much Do We Know?

Fact-checking is well-studied—markedly more so than other interventions. Nearly 200 articles related to fact-checking published since 2013 were reviewed for this case study. However, the strong empirical research base also reveals that fact-checking’s effectiveness depends on a complex interplay of multiple factors that remain poorly understood. Research has only begun to probe the specific parameters that apparently affect fact-checking’s impact, such as format, language, and source. Additionally, much of the academic literature on fact-checking comes from laboratory studies based on unrepresentative samples of university students, or from online quizzes conducted on crowdsourcing platforms like Amazon’s Mechanical Turk—raising questions about the findings’ generalizability. Among other problems, the subjects of such studies may be more interested in or engaged with fact-checking content presented to them by experimenters, as compared with members of the general public who encounter such content organically. More research evaluating the longitudinal impact of ongoing fact-checking efforts in a diverse set of real-time, real-world environments is still needed.

How Effective Does It Seem?

A number of studies suggest that correcting people’s beliefs in false claims is easier than changing the behaviors tied to those beliefs. For example, international studies have shown fact-checks to have some success at changing beliefs about viral diseases, but they do not always lead to increased intent to receive vaccines or improved public health behaviors.5 This disconnect may be especially large for politically charged topics in divided societies. Fact-checking the claims of political figures has limited impact on voters’ support for a candidate or policy position—even when the voters can correctly reject false claims.6

In general, studies find strong evidence of confirmation bias: subjects are more susceptible to false claims that align with preexisting beliefs or allegiances and are more resistant to fact-checks associated with an opposing political party or its positions.7 In fact, research suggests that accuracy is not always a top-of-mind issue for news consumers. For example, one 2013 study suggested that individuals put more stock in the perceived trustworthiness (or sincerity) of a corrective source than in the source’s actual expertise on the relevant topic.8 In another study, right-leaning, U.S.-based participants who were asked to judge the validity of articles tended to provide “expressive” assessments—aimed more at demonstrating their partisan allegiance than at seriously evaluating a source’s credibility.9 To be sure, many studies of fact-checking and confirmation bias focus on U.S. audiences, where political polarization is especially strong.10 Partisan barriers to fact-checking may be weaker in less polarized societies.11

Some research initially sparked concern that fact-checking might perversely cause audiences to double down on their false beliefs. The term “backfire effect” was coined to describe this behavior in a 2010 article by political scientists Brendan Nyhan and Jason Reifler and took root in American public consciousness after the 2016 U.S. presidential election.12 However, more recent research (including by Nyhan) suggests that backfiring may be a rare phenomenon.

The efficacy of fact-checks depends on many factors. The precise wording of fact-checks matters, with more straightforward refutations being more effective than nuanced explanations. Additionally, one 2015 study found that a fact-check that provides an alternative “causal explanation for an unexplained event is significantly more effective than a denial even when the denial is backed by unusually strong evidence.”13 In other words, replacing a false story with a true story works better than merely refuting the false story. However, many of these factors remain poorly understood; for example, research is inconclusive on whether fact-checks should repeat the false claim being debunked or avoid doing so.

The use of emotion and storytelling in fact-checks is another potentially important but under-researched area. One study found that “narrative correctives,” which embed fact-checks within an engaging story, can be effective—and stories that end on an emotional note, such as fear or anger, work better than those that do not.14 Another study suggested that anger and anxiety increase motivated reasoning and partisan reactions, although this did not seem to prevent fact-checks from influencing users.15

One of the most important outstanding research areas is the durability of fact-checks: how long is corrective information remembered and believed by the recipient? Studies have reached complicated or conflicting results. Some research, for example, has suggested that a recipient’s increase in knowledge of truthful information may last longer than any change in deeper beliefs or attitudes related to that knowledge.16 This finding highlights an important difference between informational knowledge and affective feeling—both of which influence people’s beliefs and behaviors. A 2015 study found evidence that misinformation affected the audience’s sentiment toward public figures even after false claims were immediately debunked.17

How Easily Does It Scale?

The large number of ongoing fact-checking efforts around the world indicates that this intervention can be undertaken at reasonable expense. Some efforts, such as those incorporated into for-profit journalistic enterprises, may even be self-sustaining—whether on their own or as part of a larger business model. Initiatives like the International Fact-Checking Network have received financial and other support from philanthropists, tech companies, and universities.

Fact-checking does face at least two scaling challenges. First, it often takes much more time and expertise to produce a fact-check than to generate the false content being debunked. So long as fact-checkers face this structural disadvantage, fact-checking cannot be a comprehensive solution to disinformation. Rather than scale up to match the full scope of false claims, fact-checkers must instead do triage. Second, fact-checks require distribution mechanisms capable of competing effectively with the spread of disinformation. This means finding ways to reach the audience segments most vulnerable to disinformation. The faster and the more frequent the fact-checks, the better. Ideally, fact-checking should occur before or at the same time as the false information is presented. But this is no easy task. Given the significant investments already being made to produce fact-checks, funders should ensure that distribution mechanisms are sufficient to fully leverage fact-checkers’ work.

Technological innovation may help to reduce the cost of producing high-quality fact-checks and enable their rapid dissemination. Crowdsourcing, as in Twitter’s Birdwatch (later renamed Community Notes on X), is one approach that merits further study.18 Others have begun to test whether generative AI can be used to perform fact-checks. While today’s generative AI tools are too unreliable to produce accurate fact-checks without human supervision, they may nevertheless assist human fact-checkers in certain research and verification tasks, lowering costs and increasing speed.19 Ultimately, both crowdsourcing and AI methods still depend on the availability of authoritative, discoverable facts against which claims can be assessed. Producing this factual baseline—whether through science, journalism, or other knowledge-seeking efforts—is an important part of the fact-checking cycle. This too requires funding.

Notes

1 Victoria Smith, “Mapping Worldwide Initiatives to Counter Influence Operations,” Carnegie Endowment for International Peace, December 14, 2020, https://carnegieendowment.org/2020/12/14/mapping-worldwide-initiatives-to-counter-influence-operations-pub-83435; and “Fact-Checking,” Duke Reporters’ Lab, accessed January 27, 2023, https://reporterslab.org/fact-checking.

2 “The CoronaVirusFacts/DatosCoronaVirus Alliance Database,” Poynter Institute, accessed December 10, 2023, https://www.poynter.org/ifcn-covid-19-misinformation.

3 “Verificado 2018,” Online Journalism Awards, accessed December 10, 2023, https://awards.journalists.org/entries/verificado-2018.

4 “About Fact-Checking on Facebook and Instagram,” Meta, accessed March 22, 2023, https://www.facebook.com/business/help/2593586717571940?id=673052479947730.

5 John M. Carey et al., “The Effects of Corrective Information About Disease Epidemics and Outbreaks: Evidence From Zika and Yellow Fever in Brazil,” Science Advances 6 (2020), https://www.science.org/doi/10.1126/sciadv.aaw7449; Jeremy Bowles, Horacio Larreguy, and Shelley Liu, “Countering Misinformation via WhatsApp: Preliminary Evidence From the COVID-19 Pandemic in Zimbabwe,” PLOS ONE 15 (2020), https://doi.org/10.1371/journal.pone.0240005; and Sara Pluviano, Sergio Della Sala, and Caroline Watt, “The Effects of Source Expertise and Trustworthiness on Recollection: The Case of Vaccine Misinformation,” Cognitive Processing 21 (2020), https://pubmed.ncbi.nlm.nih.gov/32333126/.

6 See Brendan Nyhan, Ethan Porter, Jason Reifler, and Thomas Wood, “Taking Fact-Checks Literally but Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability,” Political Behavior 42 (2019): https://link.springer.com/article/10.1007/s11109-019-09528-x; Briony Swire-Thompson, Ullrich K. H. Ecker, Stephan Lewandowsky, and Adam J. Berinsky, “They Might Be a Liar But They’re My Liar: Source Evaluation and the Prevalence of Misinformation,” Political Psychology 41 (2020), https://onlinelibrary.wiley.com/doi/abs/10.1111/pops.12586; Oscar Barrera, Sergei Guriev, Emeric Henry, and Ekaterina Zhuravskaya, “Facts, Alternative Facts, and Fact Checking in Times of Post-Truth Politics,” Journal of Public Economics 182 (2017), https://www.sciencedirect.com/science/article/pii/S0047272719301859; and Briony Swire, Adam J. Berinsky, Stephan Lewandowsky, and Ullrich K. H. Ecker, “Processing Political Misinformation: Comprehending the Trump Phenomenon,” Royal Society Open Science 4 (2017), https://royalsocietypublishing.org/doi/10.1098/rsos.160802.

7 Antino Kim and Alan R. Dennis, “Says Who? The Effects of Presentation Format and Source Rating on Fake News in Social Media,” MIS Quarterly 43, no. 3 (2019): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2987866; Ethan Porter, Thomas J. Wood, and David Kirby, “Sex Trafficking, Russian Infiltration, Birth Certificates, and Pedophilia: A Survey Experiment Correcting Fake News,” Journal of Experimental Political Science 5, no. 2 (2018): https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/sex-trafficking-russian-infiltration-birth-certificates-and-pedophilia-a-survey-experiment-correcting-fake-news/CFEB9AFD5F0AEB64DF32D5A7641805B6; and Jeong-woo Jang, Eun-Ju Lee, and Soo Yun Shin, “What Debunking of Misinformation Does and Doesn’t,” Cyberpsychology, Behavior, and Social Networking 22, no. 6 (2019): https://pubmed.ncbi.nlm.nih.gov/31135182.

8 Jimmeka J. Guillory and Lisa Geraci, “Correcting Erroneous Inferences in Memory: The Role of Source Credibility,” Journal of Applied Research in Memory and Cognition 2, no. 4 (2013): https://doi.org/10.1016/j.jarmac.2013.10.001; and Pluviano, Della Sala, and Watt, “Effects of Source Expertise.”

9 Maurice Jakesch, Moran Koren, Anna Evtushenko, and Mor Naaman, “The Role of Source, Headline and Expressive Responding in Political News Evaluation,” SSRN, January 31, 2019, https://dx.doi.org/10.2139/ssrn.3306403.

10 Consider Thomas Carothers and Andrew O’Donohue, “How Americans Were Driven to Extremes: In the United States, Polarization Runs Particularly Deep,” Foreign Affairs, September 25, 2019, https://www.foreignaffairs.com/articles/united-states/2019-09-25/how-americans-were-driven-extremes.

11 Consider Michael J. Aird, Ullrich K. H. Ecker, Briony Swire, Adam J. Berinsky, and Stephan Lewandowsky, “Does Truth Matter to Voters? The Effects of Correcting Political Misinformation in an Australian Sample,” Royal Society Open Science (2018), https://royalsocietypublishing.org/doi/10.1098/rsos.180593.

12 The backfire effect was captured in Brendan Nyhan and Jason Reifler, “When Corrections Fail: The Persistence of Political Misperceptions,” Political Behavior 32 (2010): https://link.springer.com/article/10.1007/s11109-010-9112-2. The popular online comic The Oatmeal featured commentary about the backfire effect, demonstrating its breakthrough into popular imagination; see “Believe,” The Oatmeal, accessed January 27, 2023, https://theoatmeal.com/comics/believe. However, other studies have since called the effect into question. See Thomas Wood and Ethan Porter, “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence,” Political Behavior 41 (2019): https://link.springer.com/article/10.1007/s11109-018-9443-y; see also Kathryn Haglin, “The Limitations of the Backfire Effect,” Research & Politics 4 (2017): https://journals.sagepub.com/doi/10.1177/2053168017716547; and Brendan Nyhan, “Why the Backfire Effect Does Not Explain the Durability of Political Misperceptions,” PNAS 118 (2020): https://www.pnas.org/doi/10.1073/pnas.1912440117.

13 Brendan Nyhan and Jason Reifler, “Displacing Misinformation About Events: An Experimental Test of Causal Corrections,” Journal of Experimental Political Science 2 (2015): https://www.cambridge.org/core/journals/journal-of-experimental-political-science/article/abs/displacing-misinformation-about-events-an-experimental-test-of-causal-corrections/69550AB61F4E3F7C2CD03532FC740D05.

14 Angeline Sangalang, Yotam Ophir, and Joseph N. Cappella, “The Potential for Narrative Correctives to Combat Misinformation,” Journal of Communication 69, no. 3 (2019): https://academic.oup.com/joc/article-abstract/69/3/298/5481803?redirectedFrom=fulltext.

15 Brian E. Weeks, “Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation,” Journal of Communication 65, no. 4 (2015): https://onlinelibrary.wiley.com/doi/abs/10.1111/jcom.12164.

16 Ethan Porter and Thomas Wood, “The Global Effectiveness of Fact-Checking: Evidence From Simultaneous Experiments in Argentina, Nigeria, South Africa, and the United Kingdom,” PNAS 118, no. 37 (2021): https://www.pnas.org/doi/10.1073/pnas.2104235118; see also John M. Carey et al., “The Ephemeral Effects of Fact-Checks on COVID-19 Misperceptions in the United States, Great Britain and Canada,” Nature Human Behaviour 6 (2022), https://www.nature.com/articles/s41562-021-01278-3; and Patrick R. Rich and Maria S. Zaragoza, “Correcting Misinformation in News Stories: An Investigation of Correction Timing and Correction Durability,” Journal of Applied Research in Memory and Cognition 9, no. 3 (2020): https://www.sciencedirect.com/science/article/abs/pii/S2211368120300280.

17 Emily Thorson, “Belief Echoes: The Persistent Effects of Corrected Misinformation,” Political Communication 33, no. 3 (2015): https://www.tandfonline.com/doi/full/10.1080/10584609.2015.1102187.

18 Consider Mevan Babakar, “Crowdsourced Factchecking: A Pie in The Sky?” European Journalism Observatory, June 1, 2018, https://en.ejo.ch/specialist-journalism/crowdsourced-factchecking-a-pie-in-the-sky. Studies suggest interventions from users can be as or more effective than interventions from experts: consider Leticia Bode and Emily K. Vraga, “See Something, Say Something: Correction of Global Health Misinformation on Social Media,” Health Communication 33, no. 9 (2018): https://www.tandfonline.com/doi/full/10.1080/10410236.2017.1331312; and Jonas Colliander, “This is Fake News: Investigating the Role of Conformity to Other Users’ Views When Commenting on and Spreading Disinformation in Social Media,” Computers in Human Behavior 97 (August 2019): https://linkinghub.elsevier.com/retrieve/pii/S074756321930130X.

19 Sam Guzik, “AI Will Start Fact-Checking. We May Not Like the Results,” Nieman Lab, December 2022, https://www.niemanlab.org/2022/12/ai-will-start-fact-checking-we-may-not-like-the-results; and Grace Abels, “Can ChatGPT Fact-Check? We Tested,” Poynter, May 31, 2023, https://www.poynter.org/fact-checking/2023/chatgpt-ai-replace-fact-checking.

Case Study 4: Labeling Social Media Content

Key takeaways:

There is a good body of evidence that labeling false or untrustworthy content with additional context can make users less likely to believe and share it. Large, assertive, and disruptive labels are the most effective, while cautious and generic labels often do not work. Reminders that nudge users to consider accuracy before resharing show promise, as do efforts to label news outlets with credibility scores. Different audiences may react differently to labels, and there are risks that remain poorly understood: labels can sometimes cause users to become either overly credulous or overly skeptical of unlabeled content, for example. Major social media platforms have embraced labels to a large degree, but further scale-up may require better information-sharing or new technologies that combine human judgment with algorithmic efficiency.


Description and Use Cases

Social media companies are increasingly applying labels to content on their platforms, some of which aim to help users assess whether information is trustworthy. In this report, “labeling” refers to the insertion of relevant context or advisories to inform or influence how content is viewed, though without directly fact-checking it. (For more on fact-checking, see case study 3.)

Labels can be applied to a social media account (for example, identifying it as state-sponsored media or satirical) or to individual posts. When a post links to another source, such as an external website, that source can be labeled (as with so-called nutrition labels that score news outlets by their adherence to journalistic practices). Alternatively, specific content or claims can be labeled—as disputed, potentially outdated, or fast-developing, for instance. Some labels are prominent, use firm language, and require a user to click before seeing or interacting with the content. Other labels are small, discreet, and neutrally worded.

Labels can be positive, like a digital signature that verifies video as authentic or a “verified” badge that purports to confirm an account’s identity. Other labels do not seek to inform users, per se, but rather admonish or “nudge” them to follow good information practices. For example, a user seeking to reshare an article may encounter a message that encourages them to first read the article and/or consider its accuracy; such “friction” in user interfaces seeks to promote more deliberate, reflective behavior. Additionally, many common platform design features can loosely be understood as labels. For example, platforms often display engagement data—such as the number of likes, shares, or views—alongside content. This data can influence users’ perceptions of the content’s accuracy and importance.1
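To make the taxonomy above concrete, the following is a minimal, purely illustrative sketch (in Python) of how a platform might represent the label types described here. All names, fields, and values are hypothetical and do not reflect any platform’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LabelTarget(Enum):
    ACCOUNT = "account"        # e.g., state-sponsored media or satire designation
    POST = "post"              # an individual piece of content
    LINKED_SOURCE = "source"   # an outlet-level "nutrition label"
    CLAIM = "claim"            # a specific claim flagged as disputed or fast-developing

class LabelTone(Enum):
    NEUTRAL = "neutral"        # small, discreet, neutrally worded
    WARNING = "warning"        # prominent, firm language
    NUDGE = "nudge"            # accuracy prompt or friction before resharing

@dataclass
class Label:
    target: LabelTarget
    tone: LabelTone
    text: str
    requires_click_through: bool = False   # interstitial that blocks interaction until acknowledged
    link_to_context: Optional[str] = None  # e.g., a page of authoritative information

# A hypothetical accuracy nudge shown before a reshare:
nudge = Label(LabelTarget.CLAIM, LabelTone.NUDGE,
              "Before sharing, consider whether this claim is accurate.",
              requires_click_through=True)
```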

Facebook was among the first platforms to label misleading content after public concern about so-called fake news and its influence on the 2016 U.S. presidential election.2 Other platforms, including Twitter (now X) and YouTube, have also implemented labels of various kinds—often spurred by major events such as the 2020 U.S. presidential election and the COVID-19 pandemic.

How Much Do We Know?

The academic literature on labeling is smaller than that on fact-checking but still large compared to other interventions. Social media companies began employing labels in earnest only in 2019, according to a Carnegie database.3 Independent studies of social media labels face methodological challenges due to researchers’ lack of access to private platform data on how users react to labels, though internal company research occasionally reaches the public domain through leaks, government inquiries, investigative journalism, or voluntary (if selective) disclosure. Laboratory experiments can be helpful, but they do not fully simulate key aspects of real-life social media usage.

How Effective Does It Seem?

Evidence suggests that large, prominent, and strongly worded labels can sometimes inhibit belief in and spread of false claims. However, other labels appear less effective. For example, studies show that labels which visually stand apart from the adjoining content are more effective than those that blend in. Similarly, labels that deliver a clear warning—for example, by pointing out that the content has previously appeared on an unreliable rumor site—are more effective than those that merely note a claim is “disputed.”4

Some internal research by platforms has also indicated that neutrally worded labels may be ineffective and can even lead users to gradually tune them out. During the COVID-19 pandemic, Facebook relied on independent fact-checkers to determine whether COVID-19 content was false or misleading; debunked content would then be labeled as such and algorithmically demoted. But “fact-checkers were unable to review an overwhelming majority of the content in their queue” because of resource limitations, so Facebook also applied neutral labels en masse to all other COVID-19 content. These labels provided context—“COVID-19 vaccines go through many tests for safety and effectiveness and are then monitored closely”—along with a link to authoritative information. According to Meta’s Oversight Board, however, “initial research showed that these labels may have [had] no effect on user knowledge and vaccine attitudes” and “no detectable effect on users’ likelihood to read, create or re-share” false claims.5 Facebook reduced and ultimately rolled back these labels after finding that users became less likely to click through to the information page after repeated label exposure.


Source ratings, whether provided by fact-checkers or by other users, have been shown to reduce engagement with articles that receive low scores. Specifically, labels that score a news source’s credibility can influence users’ willingness to like, comment on, or share posts containing links to news articles. This is a promising finding for projects like NewsGuard, which ranks news sites on a 100-point rubric based on best practices for credible and transparent journalism.6 However, empirical studies of NewsGuard have had mixed results. A 2022 study found, on the one hand, that exposure to labels did “not measurably improve news diet quality or reduce misperceptions, on average, among the general population.” On the other hand, there was also “suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation.”7 This split finding may be considered successful or unsuccessful depending on the specific problem such labels are intended to address.
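To illustrate the mechanics of an outlet-level credibility score, the sketch below sums weights over binary journalistic criteria. The criteria names and weights are invented for illustration only; they are not NewsGuard’s actual rubric, which is described at the source cited in note 6.

```python
# Illustrative only: hypothetical criteria and weights that sum to 100.
CRITERIA_WEIGHTS = {
    "does_not_repeatedly_publish_false_content": 30,
    "clearly_labels_advertising": 10,
    "discloses_ownership_and_financing": 15,
    "corrects_errors_transparently": 15,
    "distinguishes_news_from_opinion": 15,
    "names_authors_and_provides_contact_info": 15,
}

def credibility_score(outlet_assessment: dict) -> int:
    """Sum the weights of the criteria the outlet satisfies (0-100 scale)."""
    return sum(weight for criterion, weight in CRITERIA_WEIGHTS.items()
               if outlet_assessment.get(criterion, False))

example = {"clearly_labels_advertising": True,
           "distinguishes_news_from_opinion": True,
           "names_authors_and_provides_contact_info": True}
print(credibility_score(example))  # 40 under these hypothetical weights
```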

Recent research suggests that labels containing accuracy nudges, which simply encourage users to consider accuracy before sharing content, are particularly promising. This is perhaps surprising, as one might assume that social media users already seek to consume and share what they deem to be accurate information. Yet studies have highlighted a range of other motives—such as amusement and partisan signaling—that often influence user behavior.8 Despite these psychological tendencies, research suggests that most users nevertheless value accuracy and that labels reminding them to consider accuracy make them less likely to share misinformation.9 In fact, such labels can reduce—though not eliminate—subjects’ inclination to believe and share false stories that align with their political beliefs.10

Regardless of how labels are designed and implemented, the nature of the content or speaker being labeled can also influence user response. For example, New York University’s Center for Social Media and Politics found that during the 2020 election, tweets by then U.S. president Donald Trump which were labeled as disputed spread further than those without a label.11 This was not true for other politicians’ accounts in the sample, suggesting that labels on posts by extremely prominent individuals may perform differently from other labels. Additional research on this topic—for example, exploring figures other than Trump and metrics beyond spread of the post—would be valuable, because extremely prominent individuals are often responsible for a disproportionate amount of disinformation.

Like other interventions, labeling can sometimes have perverse effects. Several studies found evidence that labeling some articles as false or misleading led users to become more credulous toward the remaining unlabeled headlines.12 Researchers call this the “implied truth effect,” because users who become accustomed to seeing labels on some content may mistakenly assume that other content has also been vetted. Such a perverse effect, if prevalent, could have significant consequences: labeling efforts often have limited scope and therefore leave the vast majority of content unlabeled.

Paradoxically, there is also some evidence of an opposite dynamic: fact-checks or warning labels can sometimes increase overall audience skepticism, including distrust of articles that did not receive any rating.13 This might be called an “implied falsity effect.” Little is known about either effect and why one, or both, of them may be present under varying circumstances. It is possible that geographical, topical, or other unidentified factors may influence the effectiveness of labels and the risk of unintended consequences.14 Moreover, different audiences can respond differently to the same label.

Finally, it is worth remembering that labels explicitly focused on truth or reliability are not the only ways that platform interfaces actively shape how users perceive social media content. One study found that labeling posts with engagement metrics—such as number of “likes”—makes people more likely to share low-credibility, high-engagement posts.15 Researchers should continue to explore the influence of general user interface design on disinformation, including whether and how common design elements may alter the efficacy of interventions like labeling.

How Easily Does It Scale?

Major platforms’ embrace of labels has shown that they can be scaled to a significant degree. Labeling, in its various forms, has emerged as the dominant way that social media companies adjust their platforms’ design and functionality to counter disinformation and other kinds of influence operations. A Carnegie database of interventions announced by major platforms between 2014 and 2021 found a surge in labeling and redirection (a related measure) since 2019, with 77 of 104 total platform interventions falling into these two categories.16 Labels offer platforms a way of addressing disinformation without flatly banning or demoting content, actions that impinge more on users’ freedoms and tend to inspire stronger backlash. As a result of platforms’ experimentation with labels, technical barriers—such as load latency, user friction, and so forth—have been addressed or managed.

However, labeling still carries some of the scaling limitations of fact-checks. Meta’s experience with labeling COVID-19 information illustrates one of the choices facing platforms. They can rely on humans to apply more specific, opinionated, and ultimately effective labels to a smaller amount of content, or they can have algorithms automatically label more content with comparatively cautious, generic labels that tend to be less effective and are sometimes counterproductive. Technological innovations could help to further combine both techniques, as better algorithms do more labeling work under human supervision and/or empower humans to label more efficiently. Such innovations would test platforms’ willingness to apply strong labels to large amounts of content, potentially angering users who disagree with the labels. Future studies can continue to examine the specifics of labels and probe the platforms’ processes for applying them.17
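The trade-off described above can be pictured as a triage policy: an automated classifier applies generic labels only where it is highly confident, routes uncertain items to a limited human fact-checking queue, and leaves the rest untouched. The sketch below is a simplified, hypothetical illustration of that logic under assumed confidence thresholds, not any platform’s actual moderation pipeline.

```python
def triage(posts, classifier, human_queue_capacity):
    """Hypothetical triage combining algorithmic and human labeling.

    `classifier(post)` is assumed to return the model's estimated probability
    that the post contains a false or misleading claim.
    """
    auto_labeled, needs_review, untouched = [], [], []
    for post in sorted(posts, key=classifier, reverse=True):
        score = classifier(post)
        if score >= 0.95:
            auto_labeled.append(post)   # high confidence: apply a generic warning label
        elif score >= 0.60 and len(needs_review) < human_queue_capacity:
            needs_review.append(post)   # uncertain: send to fact-checkers for a specific label
        else:
            untouched.append(post)      # low confidence, or the human queue is full
    return auto_labeled, needs_review, untouched
```

In practice, the thresholds and queue size would encode exactly the policy choice discussed above: lowering the automatic-labeling threshold scales coverage but shifts the mix toward cautious, generic labels.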

The increasing number of platforms presents another scaling challenge. Content that is labeled on one platform may not be labeled on another. While some platforms shun labels based on an overall strategy of minimal content moderation, other platforms lack sufficient resources or simply haven’t faced the same public pressure as larger companies to confront disinformation. Outside organizations could explore whether prodding smaller platforms and offering them resources—such as technology, data, and best practices—might encourage more labeling.

Notes

1 Mihai Avram, Nicholas Micallef, Sameer Patil, and Filippo Menczer, “Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation,” Harvard Kennedy School Misinformation Review 1 (2020), https://misinforeview.hks.harvard.edu/article/exposure-to-social-engagement-metrics-increases-vulnerability-to-misinformation.

2 Brian Stelter, “Facebook to Start Putting Warning Labels on ‘Fake News’,” CNN, December 15, 2016, https://money.cnn.com/2016/12/15/media/facebook-fake-news-warning-labels.

3 For data on the rise of labeling and redirection, see Kamya Yadav, “Platform Interventions: How Social Media Counters Influence Operations,” Carnegie Endowment for International Peace, January 25, 2021, https://carnegieendowment.org/2021/01/25/platform-interventions-how-social-media-counters-influence-operations-pub-83698.

4 Björn Ross, Anna-Katharina Jung, Jennifer Heisel, and Stefan Stieglitz, “Fake News on Social Media: The (In)Effectiveness of Warning Messages” (paper presented at Thirty-Ninth International Conference on Information Systems, San Francisco, 2018), https://www.researchgate.net/publication/328784235_Fake_News_on_Social_Media_The_InEffectiveness_of_Warning_Messages.

5 “Policy Advisory Opinion 2022-01, Removal of COVID-19 Misinformation,” Oversight Board, April 2023, https://oversightboard.com/attachment/547865527461223.

6 “Rating Process and Criteria,” NewsGuard, accessed February 7, 2023, https://www.newsguardtech.com/ratings/rating-process-criteria.

7 Kevin Aslett et al., “News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions,” Science Advances 8, no. 18 (2022): https://www.science.org/doi/10.1126/sciadv.abl3844.

8 Alexander Bor et al., “‘Fact-Checking’ Videos Reduce Belief in, but Not the Sharing of Fake News on Twitter,” PsyArXiv, April 11, 2020, https://osf.io/preprints/psyarxiv/a7huq.

9 Gordon Pennycook et al., “Shifting Attention to Accuracy Can Reduce Misinformation Online,” Nature 592 (2021): https://www.nature.com/articles/s41586-021-03344-2; Gordon Pennycook et al., “Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention,” Psychological Science 31, no. 7 (2020): https://journals.sagepub.com/doi/full/10.1177/0956797620939054.

10 Timo K. Koch, Lena Frischlich, and Eva Lermer, “Effects of Fact-Checking Warning Labels and Social Endorsement Cues on Climate Change Fake News Credibility and Engagement on Social Media,” Journal of Applied Social Psychology 53, no. 3 (June 2023): https://onlinelibrary.wiley.com/doi/10.1111/jasp.12959?af=R; and Megan Duncan, “What’s in a Label? Negative Credibility Labels in Partisan News,” Journalism & Mass Communication Quarterly 99, no. 2 (2020): https://journals.sagepub.com/doi/10.1177/1077699020961856?icid=int.sj-full-text.citing-articles.17.

11 Megan A. Brown et al., “Twitter Put Warning Labels on Hundreds of Thousands of Tweets. Our Research Examined Which Worked Best,” Washington Post, December 9, 2020, https://www.washingtonpost.com/politics/2020/12/09/twitter-put-warning-labels-hundreds-thousands-tweets-our-research-examined-which-worked-best.

12 Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, “The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings,” Management Science 66, no. 11 (November 2020): https://pubsonline.informs.org/doi/10.1287/mnsc.2019.3478.

13 Antino Kim, Patricia L. Moravec, and Alan R. Dennis, “Combating Fake News on Social Media With Source Ratings: The Effects of User and Expert Reputation Ratings,” Journal of Management Information Systems 36, no. 3 (2019): https://www.tandfonline.com/doi/full/10.1080/07421222.2019.1628921.

14 Jan Kirchner and Christian Reuter, “Countering Fake News: A Comparison of Possible Solutions Regarding User Acceptance and Effectiveness,” Proceedings of the ACM on Human-Computer Interaction 4 (October 2020): https://www.peasec.de/paper/2020/2020_KirchnerReuter_CounteringFakeNews_CSCW.pdf; and Ciarra N. Smith and Holli H. Seitz, “Correcting Misinformation About Neuroscience via Social Media,” Science Communication 41, no. 6 (2019): https://journals.sagepub.com/doi/10.1177/1075547019890073.

15 Avram, Micallef, Patil, and Menczer, “Exposure to Social Engagement Metrics.”

16 Yadav, “Platform Interventions.”

17 For an example of research that can be conducted with this data, see Samantha Bradshaw and Shelby Grossman, “Were Facebook and Twitter Consistent in Labeling Misleading Posts During the 2020 Election?” Lawfare, August 7, 2022, https://www.lawfaremedia.org/article/were-facebook-and-twitter-consistent-labeling-misleading-posts-during-2020-election.

Case Study 5: Counter-messaging Strategies

Key takeaways:

There is strong evidence that truthful communications campaigns designed to engage people on a narrative and psychological level are more effective than facts alone. By targeting the deeper feelings and ideas that make false claims appealing, counter-messaging strategies have the potential to impact harder-to-reach audiences. Yet success depends on the complex interplay of many inscrutable factors. The best campaigns use careful audience analysis to select the most resonant messengers, mediums, themes, and styles—but this is a costly process whose success is hard to measure. Promising techniques include communicating respect and empathy, appealing to prosocial values, and giving the audience a sense of agency.


Description and Use Cases

Counter-messaging, in this report, refers to truthful communications campaigns designed to compete with disinformation at a narrative and psychological level instead of relying solely on the presentation of facts. Counter-messaging is premised on the notion that evidence and logic aren’t the only, or even the primary, bases of what people believe. Rather, research has shown that people more readily accept claims which jibe with their preexisting worldviews and accepted stories about how the world works, especially if framed in moral or emotional terms.1 Moreover, claims are more persuasive when the messenger is a trusted in-group member who appears to respect the audience members and have their best interests at heart. While such factors often facilitate the spread of disinformation, counter-messaging campaigns seek to leverage them in service of truthful ideas.

In a sense, counter-messaging is no different from ordinary political communication, which routinely uses narratives, emotion, and surrogate messengers to persuade. But counter-messaging is sometimes implemented with the specific goal of countering disinformation—often because purely rational appeals, like fact-checking, seem not to reach or have much impact on hard-core believers of false claims. By changing the narrative frame around an issue and speaking in ways designed to resonate, counter-messaging aims to make audiences more open to facts and less ready to accept sensational falsehoods.

One example comes from Poland, where xenophobia toward migrants from the Middle East during the Syrian civil war was fueled in part by false stories of disease and criminality.2 A Polish counter-messaging campaign called Our Daily Bread featured a video of refugees and other marginalized people baking bread, a cherished Polish activity. Rather than presenting facts and evidence about the impact of migration on Polish society or refuting false stories about migrants, the video instead used personal vignettes, evocative imagery, and unifying words. The video attracted significant media attention and was viewed more than 1 million times in the first day after its release.3 Similarly, many efforts to promote COVID-19 vaccines and counter disinformation about them employed themes of personal responsibility. Other such efforts focused on recruiting local doctors as messengers, based on the premise that many people trust their family doctors more than national authorities.4 Vaccine-related public messaging campaigns also partnered with Christian, Jewish, and Muslim faith leaders to reach religious communities in Israel, the United Kingdom, and the United States.5

As these examples indicate, counter-messaging is not always exclusively aimed at countering false claims; other common objectives include promoting desirable behaviors, bolstering social cohesion, and rallying support for government policies. Many initiatives have sought specifically to thwart terrorist recruitment under the banner of “countering violent extremism” and “deradicalization.” For example, the Redirect Method developed by Jigsaw and Moonshot used digital advertising to steer individuals searching for extremist content toward “constructive alternate messages.”6 Other approaches have used one-on-one online conversations or in-person mentorship relationships to dissuade those showing interest in extremism.7 While many of these efforts were designed to address Islamic extremists, they have also been applied to White supremacist and other hate groups.

How Much Do We Know?

For decades, disciplines such as social psychology, political science, communications, advertising, and media studies have researched issues relevant to counter-messaging. Fields that have themselves been subject to persistent disinformation—such as public health and climate science—have also devoted a great deal of attention to counter-messaging in recent years. Efforts to study and suppress hate and extremist groups are particularly relevant, because such groups often employ disinformation.8 Nevertheless, these bodies of knowledge, though replete with useful insights, have generally not used disinformation as their primary frame for evaluating the efficacy of counter-messaging. This leaves us to rely on analogies and parallels rather than direct evidence.

The relevant literature highlights how hard it is to assess the impact of any form of persuasion. For example, many studies of COVID-19-related counter-messages measured changes in subjects’ reported attitudes or beliefs but were unable to verify whether those shifts persisted or led to behavioral changes.9 Studies based on surveys or laboratory experiments are common, but these do not fully capture how audiences react in more natural settings. In the field of countering violent extremism, practitioners report lacking the expertise or resources to evaluate the impact of their work beyond using social media engagement metrics and their gut instinct.10 A review of online counter-extremism interventions similarly found “virtually all” of the evaluations included in the study measured processes, like social media engagement, not outcomes. The review offered several proposals for more impact-based assessments, such as the inclusion of calls to action like contacting a hotline, which can be quantified as a sign of behavior.11

How Effective Does It Seem?

The core insight of counter-messaging—that communications tailored to the narrative and psychological needs of a specific audience are more effective than generic, purely fact-based approaches—is well-established.12 Beyond this basic premise, however, it is difficult to generalize about counter-messaging because of the intervention’s breadth, diversity, and overlap with ordinary politics. Some forms seem capable of affecting individuals’ beliefs and, more rarely, influencing the behaviors informed by those beliefs. Yet success may often depend on the interplay of a large number of factors that can be difficult to discern or control. A granular understanding of the audience should, in theory, enable the selection of mediums, messengers, messages, styles, and tones most likely to resonate with them.13 In practice, developing this audience understanding is a difficult task and determining the best communication approaches is an evolving science at best.


One theme that emerges from many assessments of counter-messaging, including public health and counter-extremism interventions, is the importance of communicating respect and empathy. People are often put off by the sense that they are being debated or chastised.14 For example, counselors working with White supremacists had the most success in changing subjects’ views through sustained dialogue that avoided moral judgment.15 Encouraging empathy toward others, such as religious minorities or immigrants, can also be effective; one study found that such messages make individuals more likely to delete their previous hate speech and less likely to use hate speech again in the future.16 Similar efforts may be useful in reaching the so-called moveable middle, such as social media spectators who do not spread hateful content or false information themselves but are open to persuasion in either direction. For example, a study on anti-Roma hate speech in Slovakia found that more users left pro-Roma comments on anti-Roma posts after researchers intervened with counter-speech.17

Other studies have explored how moral and emotional framings affect audiences, including their perceptions of what is true. Studies of climate change skepticism found that the most effective messages for countering misinformation offer individuals the sense that they can take meaningful action, as opposed to messages that portray the world as doomed.18 A review of public health messaging found some audience segments were moved more by calls to protect themselves or loved ones than by appeals to social responsibility.19

The speaker of the counter-message seems to be quite important. Studies in the rural United States found that friends and family members, community organizations, religious leaders, and medical professionals were the most effective messengers in responding to COVID-19 rumors. In India, health professionals and peers were found to be the most trusted.20 Given the influence of informal messengers like social peers, analysts have considered the possibility of using them for official objectives.21 Volunteer groups countering disinformation, such as the Lithuanian Elves or the North Atlantic Fella Organization, can bring scale, authenticity, and creativity—traits that official efforts often lack.22 Likewise, organic content used to rebut extremist claims and narratives appears more persuasive than government-created content.

There is a risk that poorly designed counter-messaging campaigns can entrench or elevate the very views being rebutted.23 A U.S. Department of State campaign called Think Again, Turn Away illustrates this problem. The anti–Islamic State campaign, launched in 2013, engaged directly with extremists on Twitter but was ultimately deemed counterproductive. Its graphic content and combative tone increased the visibility of Islamic State accounts that replied to the campaign’s posts with anti-U.S. rhetoric, while forcing the State Department to engage on unflattering topics like the torture of Iraqi prisoners at the Abu Ghraib prison.24 Critics have claimed that Think Again, Turn Away was not focused on the drivers of online extremism and was too clearly affiliated with the U.S. government to serve as a credible messenger. These shortcomings point to the complexities of effective counter-messaging and the need to carefully think through message control, effective messengers, appropriate mediums, and characteristics of the target audience.

How Easily Does It Scale?

Counter-messaging faces implementation challenges due to its often reactive nature. Campaigns frequently arise in response to a belated recognition that disinformation narratives have already grown in strength and impact. Such narratives may have roots going back years, decades, or longer, and their adherents can build up psychological investments over a lifetime. The narratives underpinning disinformation also often evoke powerful emotions, like fear, which can be difficult to defuse once activated.25 To mitigate disinformation’s first-mover advantages, counter-messengers can try to anticipate such narratives before they spread—for example, predicting attacks on mail-in voting during the 2020 U.S. election—but this is not always feasible.

The need to tailor counter-messaging to a specific audience and context makes scaling more difficult. Reaching large audiences may require breaking them into identifiable subpopulations, each of which would then receive its own research, message development, and novel or even competing strategies. Opting instead for a more generic, large-scale campaign risks undercutting much of the specificity associated with effective counter-messaging. Moreover, broad campaigns increase the odds of misfires, such as the use of messages or messengers that persuade one audience while making another audience double down on its initial beliefs. Elevating rumors or extremist viewpoints is a particular concern. When a concerning narrative is not yet widespread, campaigners may want to pair strategic silence on the national stage with more discrete messaging that targets specific populations more likely to encounter the narrative.26 When the narrative at issue has already become popular, a broad counter-messaging strategy may be appropriate. New digital technologies have the potential to make counter-messaging cheaper and easier to scale, just as innovation can aid in spreading disinformation.

Given the costs of effective counter-messaging at scale, many campaigns seem only modestly funded. The State Department’s now-shuttered Center for Strategic Counterterrorism Communications spent only $6 million on digital outreach in 2012, the year before it launched Think Again, Turn Away.27 The center’s successor entity, the Global Engagement Center, had a budget of more than $74 million in 2020.28 Australia’s COVID-19 vaccine awareness campaign—which included multiple mediums and consultants for outreach to specific vulnerable communities—cost about $24 million.29 For comparison, major brands spend much, much more on advertising (about 10 percent of total revenue, according to one survey).30 Volunteer-driven efforts, like the North Atlantic Fella Organization, may be appealing partners for external funders due to their low cost and high authenticity. However, overt official support for such activities can diminish their credibility. Extremism scholar Benjamin Lee suggests that looser relationships involving “provision of tools and training” might mitigate this risk.31

Notes

1 See Laura Livingston, “Understanding the Context Around Content: Looking Behind Misinformation Narratives,” National Endowment for Democracy, December 2021, https://www.ned.org/wp-content/uploads/2021/12/Understanding-the-Context-Around-Content-Looking-Behind-Misinformation-Narratives-Laura-Livingston.pdf; and Rachel Brown and Laura Livingston, “Counteracting Hate and Dangerous Speech Online: Strategies and Considerations,” Toda Peace Institute, March 2019, https://toda.org/assets/files/resources/policy-briefs/t-pb-34_brown-and-livingston_counteracting-hate-and-dangerous-speech-online.pdf. Additionally, consider Claire Wardle, “6 Types of Misinformation Circulated This Election Season,” Columbia Journalism Review, November 18, 2016, https://www.cjr.org/tow_center/6_types_election_fake_news.php; see also Paul Goble, “Hot Issue – Lies, Damned Lies and Russian Disinformation,” Jamestown Foundation, August 13, 2014, https://jamestown.org/program/hot-issue-lies-damned-lies-and-russian-disinformation. As another example, Charleston mass murderer Dylann Roof claimed to have been radicalized after a Google search for “black on White crime.” See Rebecca Hersher, “What Happened When Dylann Roof Asked Google for Information About Race?” NPR, January 10, 2017, https://www.npr.org/sections/thetwo-way/2017/01/10/508363607/what-happened-when-dylann-roof-asked-google-for-information-about-race.

2 For analysis of anti-refugee and anti-migrant disinformation, see Judit Szakács and Éva Bognár, “The Impact of Disinformation Campaigns About Migrants and Minority Groups in the EU,” European Parliament, June 2021, https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/653641/EXPO_IDA(2021)653641_EN.pdf.

3 To be sure, view count alone does not imply effectiveness. For more about the Our Daily Bread campaign, see Olga Mecking, “Video Campaign Aims to Unify Poland Through the Power of Bread,” NPR, May 21, 2018, https://www.npr.org/sections/thesalt/2018/05/21/611345277/video-campaign-aims-to-unify-poland-through-the-power-of-bread.

4 Kevin B. O’Reilly, “Time for Doctors to Take Center Stage in COVID-19 Vaccine Push,” American Medical Association, May 21, 2021, https://www.ama-assn.org/delivering-care/public-health/time-doctors-take-center-stage-covid-19-vaccine-push; and Steven Ross Johnson, “Doctors Can Be Key to Higher COVID Vaccination Rates,” U.S. News & World Report, February 28, 2022, https://www.usnews.com/news/health-news/articles/2022-02-28/primary-care-doctors-can-be-key-to-higher-covid-vaccination-rates.

5 Filip Viskupič and David L. Wiltse, “The Messenger Matters: Religious Leaders and Overcoming COVID-19 Vaccine Hesitancy,” Political Science & Politics 55, no. 3 (2022): https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/abs/messenger-matters-religious-leaders-and-overcoming-covid19-vaccine-hesitancy/ED93D8BB6C73C8B384986D28B877E284; and Daniel Estrin and Frank Langfitt, “Religious Leaders Had to Fight Disinformation to Get Their Communities Vaccinated,” NPR, April 23, 2021, https://www.npr.org/2021/04/23/990281552/religious-leaders-had-to-fight-disinformation-to-get-their-communities-vaccinate.

6 “The Redirect Method,” Moonshot, accessed March 6, 2023, https://moonshotteam.com/the-redirect-method/; and “ADL & Moonshot Partnered to Reduce Extremist Violence During US Presidential Election, Redirected Thousands Towards Safer Content,” Anti-Defamation League, February 1, 2021, https://www.adl.org/resources/press-release/adl-moonshot-partnered-reduce-extremist-violence-during-us-presidential.

7 Jacob Davey, Jonathan Birdwell, and Rebecca Skellett, “Counter-conversations: A Model for Direct Engagement With Individuals Showing Signs of Radicalization Online,” Institute for Strategic Dialogue, February 2018, https://www.isdglobal.org/isd-publications/counter-conversations-a-model-for-direct-engagement-with-individuals-showing-signs-of-radicalisation-online; and Jacob Davey, Henry Tuck, and Amarnath Amarasingam, “An Imprecise Science: Assessing Interventions for the Prevention, Disengagement and De-radicalisation of Left and Right-Wing Extremists,” Institute for Strategic Dialogue, 2019, https://www.isdglobal.org/isd-publications/an-imprecise-science-assessing-interventions-for-the-prevention-disengagement-and-de-radicalisation-of-left-and-right-wing-extremists.

8 On the link between disinformation, hate speech, and hate crimes, consider Jonathan Corpus Ong, “Online Disinformation Against AAPI Communities During the COVID-19 Pandemic,” Carnegie Endowment for International Peace, October 19, 2021, https://carnegieendowment.org/2021/10/19/online-disinformation-against-aapi-communities-during-covid-19-pandemic-pub-85515.

9 Consider Viskupič and Wiltse, “Messenger Matters”; Scott Bokemper et al., “Testing Persuasive Messaging to Encourage COVID-19 Risk Reduction,” PLOS ONE 17 (2022): https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0264782; Rupali J. Limaye et al., “Message Testing in India for COVID-19 Vaccine Uptake: What Appeal and What Messenger Are Most Persuasive?,” Human Vaccines & Immunotherapeutics 18, no. 6 (2022): https://www.tandfonline.com/doi/full/10.1080/21645515.2022.2091864; and Lan Li, Caroline E. Wood, and Patty Kostkova, “Vaccine Hesitancy and Behavior Change Theory-Based Social Media Interventions: A Systematic Review,” Translational Behavioral Medicine 12, no. 2 (February 2022): https://academic.oup.com/tbm/article/12/2/243/6445967.

10 Davey, Tuck, and Amarasingam, “Imprecise Science.”

11 Todd C. Helmus and Kurt Klein, “Assessing Outcomes of Online Campaigns Countering Violent Extremism: A Case Study of the Redirect Method,” RAND Corporation, 2018, https://www.rand.org/pubs/research_reports/RR2813.html. The metrics used to evaluate counter-messaging efforts should align with the messenger’s desired outcome, which is not always a direct change in the original speaker’s belief or behavior. Other goals of counter-messaging include influencing passive bystanders to speak out or showing solidarity with a victimized community. See Catherine Buerger, “Why They Do It: Counterspeech Theories of Change,” Dangerous Speech Project, September 26, 2022, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4245211; and Bianca Cepollaro, Maxime Lepoutre, and Robert Mark Simpson, “Counterspeech,” Philosophy Compass 18, no. 1 (January 2023): https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12890.

12 Limaye et al., “Message Testing in India”; and Li, Wood, and Kostkova, “Vaccine Hesitancy.”

13 “Key Takeaways from Civil Society in the Visegrád Region: Fall 2019 Practitioner Convening,” Over Zero, 2019, https://www.projectoverzero.org/media-and-publications/key-takeaways-from-civil-society-in-the-visegrd-region-fall-2019-practitioner-convening.

14 Consider Viskupič and Wiltse, “Messenger Matters.”

15 Davey, Birdwell, and Skellett, “Counter-Conversations”; and Davey, Tuck, and Amarasingam, “Imprecise Science.”

16 Dominik Hangartner et al., “Empathy-based Counterspeech Can Reduce Racist Hate Speech in a Social Media Field Experiment,” PNAS 118 (2021), https://www.pnas.org/doi/full/10.1073/pnas.2116310118.

17 Buerger, “Why They Do It.”

18 Alysha Ulrich, “Communicating Climate Science in an Era of Misinformation,” Intersect: The Stanford Journal of Science, Technology, and Society 16 (2023), https://ojs.stanford.edu/ojs/index.php/intersect/article/view/2395.

19 Sachin Banker and Joowon Park, “Evaluating Prosocial COVID-19 Messaging Frames: Evidence from a Field Study on Facebook,” Judgment and Decision Making 15 (2023), https://www.cambridge.org/core/journals/judgment-and-decision-making/article/evaluating-prosocial-covid19-messaging-frames-evidence-from-a-field-study-on-facebook/9EADFB1C6F591AE1A8376C1622FB59D5.

20 Consider Viskupič and Wiltse, “Messenger Matters”; Angela K. Shen et al., “Trusted Messengers and Trusted Messages: The Role for Community-based Organizations in Promoting COVID-19 and Routine Immunizations,” Vaccine 41 (2023), https://www.sciencedirect.com/science/article/pii/S0264410X23001809; Angela K. Shen et al., “Persuading the ‘Movable Middle’: Characteristics of Effective Messages to Promote Routine and COVID-19 Vaccinations for Adults and Children – The impact of COVID-19 on Beliefs and Attitudes,” Vaccine 41 (2023), https://www.sciencedirect.com/science/article/pii/S0264410X2300141X; and Limaye et al., “Message Testing in India.”

21 Benjamin J. Lee, “Informal Countermessaging: The Potential and Perils of Informal Online Countermessaging,” Studies in Conflict & Terrorism 42 (2019): https://www.tandfonline.com/doi/full/10.1080/1057610X.2018.1513697.

22 Suzanne Smalley, “Collective of Anti-disinformation ‘Elves’ Offer a Bulwark Against Russian Propaganda,” CyberScoop, August 9, 2022, https://cyberscoop.com/collective-anti-disinformation-elves-russian-propaganda; and Adam Taylor, “With NAFO, Ukraine Turns the Trolls on Russia,” Washington Post, September 1, 2022, https://www.washingtonpost.com/world/2022/09/01/nafo-ukraine-russia.

23 Davey, Tuck, and Amarasingam, “Imprecise Science.”

24 “Digital Counterterrorism: Fighting Jihadists Online,” Task Force on Terrorism and Ideology, Bipartisan Policy Center, March 2018, https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2019/03/BPC-National-Security-Digital-Counterterrorism.pdf; Rita Katz, “The State Department’s Twitter War With ISIS Is Embarrassing,” Time, September 16, 2014, https://time.com/3387065/isis-twitter-war-state-department; Greg Miller and Scott Higham, “In a Propaganda War Against ISIS, the U.S. Tried to Play by the Enemy’s Rules,” Washington Post, May 8, 2015, https://www.washingtonpost.com/world/national-security/in-a-propaganda-war-us-tried-to-play-by-the-enemys-rules/2015/05/08/6eb6b732-e52f-11e4-81ea-0649268f729e_story.html.

25 Samuel Woolley and Katie Joseff, “Demand for Deceit: How the Way We Think Drives Disinformation,” National Endowment for Democracy, January 2020, https://www.ned.org/wp-content/uploads/2020/01/Demand-for-Deceit.pdf.

26 Consider Joan Donovan and danah boyd, “Stop the Presses? Moving From Strategic Silence to Strategic Amplification in a Networked Media Ecosystem,” American Behavioral Scientist 65, no. 2 (2019): https://journals.sagepub.com/doi/abs/10.1177/0002764219878229.

27 “State, Socom Partner to Counter Cyberterrorism,” Simons Center, June 6, 2012, https://thesimonscenter.org/ia-news/state-socom-partner-to-counter-cyberterrorism.

28 “Inspection of the Global Engagement Center,” Office of the Inspector General, U.S. Department of State, September 15, 2022, https://www.oversight.gov/report/DOS/Inspection-Global-Engagement-Center.

29 “Australia’s COVID-19 Vaccine Information Campaign Begins,” Australian Department of Health and Aged Care, January 27, 2021, https://www.health.gov.au/ministers/the-hon-greg-hunt-mp/media/australias-covid-19-vaccine-information-campaign-begins.

30 “Marketing in a Post-Covid Era: Highlights and Insights Report,” CMO Survey, September 2022, https://cmosurvey.org/wp-content/uploads/2022/09/The_CMO_Survey-Highlights_and_Insights_Report-September_2022.pdf.

31 Lee, “Informal Countermessaging.”

Case Study 6: Cybersecurity for Elections and Campaigns

Key takeaways:

There is good reason to think that campaign- and election-related cybersecurity can be significantly improved, which would prevent some hack-and-leak operations and fear-inducing breaches of election systems. The cybersecurity field has come to a strong consensus on certain basic practices, many of which remain unimplemented by campaigns and election administrators. Better cybersecurity would be particularly helpful in preventing hack-and-leaks, though candidates will struggle to prioritize cybersecurity given the practical imperatives of campaigning. Election systems themselves can be made substantially more secure at a reasonable cost. However, there is still no guarantee that the public would perceive such systems as secure in the face of rhetorical attacks by losing candidates.


Description and Use Cases

Cybersecurity improvements have been proposed as a way to mitigate two distinct kinds of election-related disinformation and influence threats. One threat is hack-and-leak operations, which involve the theft and public exposure of sensitive information about candidates, campaigns, and other political figures. Leaked data may be partially modified or fully authentic. Russian state actors carried out notable hack-and-leaks during the U.S. presidential election in 2016, the French presidential election in 2017, and the UK general election in 2019.1 To prevent hack-and-leaks, many experts have called for increased cybersecurity protection of candidates, campaigns, and political parties, as well as government offices involved in election processes. This can be done through improved adherence to cybersecurity best practices, donated or discounted cybersecurity services, and specialized training, among other options.2 Importantly, such efforts should extend to personal accounts and devices, not just official ones. In 2019, the U.S. Federal Election Commission issued an advisory opinion that some political campaigns could receive free cybersecurity assistance from private firms without violating rules on corporate campaign contributions.3

The second threat is that hackers may probe or compromise election systems, such as the networks that hold voter registration data or vote tallies. If these operations are discovered and publicized, they can heighten fear that election outcomes are subject to manipulation, thereby reducing confidence in the results—even if this fear is unwarranted. For example, a declassified report by the U.S. Senate Select Committee on Intelligence found that in 2016, Russian actors were in a position to delete or modify voter registration data but did not do so. Other U.S. election infrastructure was probed for vulnerabilities, but there was no evidence to suggest vote totals were modified.4

The cybersecurity of election systems can often be improved by implementing standard best practices applicable to any organization, such as proactively monitoring network activity, conducting penetration testing, and developing incident response plans. But election systems may also need security measures tailored to their unique context. Such actions can include regularly backing up voter registration databases, certifying voting machines, maintaining a paper trail for electronic ballots, and conducting post-election audits.5 The cybersecurity of election systems is intertwined with other aspects of election administration. For example, maintaining accurate electronic tallies of votes depends in part on ensuring that any paper ballots are physically secure and that election workers are properly supervised. (The role of electronic voting machines, a major policy question, is beyond the scope of this report.6)

Coordination, transparency, and communication are also areas of focus. The U.S. Department of Homeland Security has designated election systems as “critical infrastructure,” allowing it to create structures for better communication between stakeholders and to provide security assistance, such as free cybersecurity assessments for election administrators.7 Other U.S. examples include the Elections Infrastructure Information Sharing & Analysis Center, a voluntary coordination body created in 2018, and proposals for a national public database of voting system defects.8

How Much Do We Know?

The threat of cyber operations against campaigns and election infrastructure is well documented across several countries, but there are few detailed evaluations of the cybersecurity response. In general, the cybersecurity field has come to a strong consensus on certain basic practices to protect against threats. These include multifactor authentication, routine backups (kept segregated from originals), frequent patching, and vulnerability testing. Other actions or principles that have gained favor in recent years include cloud migration, zero trust architecture, and threat intelligence. However, there is very little quantitative evidence of these practices’ comparative cost-effectiveness. Additionally, it is hard to judge the efficacy of best practices in thwarting a highly capable, persistent state actor.
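As one small example of what “routine backups (kept segregated from originals)” can look like in practice, the sketch below copies a database export to a separate backup location and records a checksum so later corruption or tampering can be detected. The paths and filenames are placeholders, and a real election office would layer this with offline storage, access controls, and tested restore procedures.

```python
import hashlib
import shutil
from pathlib import Path

def backup_with_checksum(source: Path, backup_dir: Path) -> str:
    """Copy a file to a segregated backup location and record its SHA-256 hash."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    destination = backup_dir / source.name
    shutil.copy2(source, destination)
    digest = hashlib.sha256(destination.read_bytes()).hexdigest()
    (backup_dir / (source.name + ".sha256")).write_text(digest + "\n")
    return digest

# Hypothetical usage: back up a voter registration export to separate storage.
# backup_with_checksum(Path("/data/voter_registration_export.db"),
#                      Path("/mnt/offline_backups/2024-11-01"))
```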

There is a clear causal link between improving campaign cybersecurity and reducing the risk of hack-and-leak operations. With election systems, however, cybersecurity is only half the battle. To maintain public confidence in election integrity, administrators must also convince people that systems are truly secure. This critical second step has received less attention from researchers and analysts.

How Effective Does It Seem?

There is good reason to think that campaign- and election-related cybersecurity can be significantly improved. A 2018 assessment of election administration in all fifty U.S. states found that a distressing number of states had not taken basic precautions, such as minimum cybersecurity standards for voter registration systems.9 This state of affairs may not be uncommon across government bodies in many countries. A 2022 cybersecurity audit of the U.S. federal government found that eight of the twenty-three assessed agencies showed significant deficiencies in their ability to detect cyber incidents and protect themselves through basic policies like multifactor authentication and data encryption.

In other words, there are still simple ways to improve cybersecurity in many governmental and political institutions, including campaign and election infrastructure.10 Moreover, such investments would probably prevent a number of intrusions. A 2022 study by the research consultancy ThoughtLab found that organizations which performed well against the National Institute of Standards and Technology (NIST) Cybersecurity Framework, a common benchmark used in many public- and private-sector organizations in the United States and elsewhere, suffered somewhat fewer damaging cyber incidents than lower-performing organizations.11

Table 2. U.S. Government Recommendations for Securing Election Systems
| Best Practice | Summary |
| --- | --- |
| Software and patch management | Create an inventory of software in use by the organization. Deploy patches in a timely manner. |
| Log management | Maintain secure, centralized logs of devices on and off the network. Review logs to identify, triage, and assess incidents. |
| Network segmentation | Create separate virtual or physical networks for each part of the organization. Use dedicated systems for election-related tasks. |
| Block suspicious activity | Enable blocking, not just alerting, of suspicious activity by default. Scan emails and train employees on phishing attacks. |
| Credential management | Require strong passwords and multi-factor authentication. |
| Establish a baseline for host and network activity | Track the amount, timing, and destination of typical network traffic to identify anomalies. Create a “gold image” of hosts for comparison. |
| Organization-wide IT guidance and policies | Maintain incident response and communications plans, an approved software list, and other policies for cyber hygiene. |
| Notice and consent banners for computer systems | Require that users consent to monitoring, disclosing, and sharing of data for any purpose. |
Source: “Best Practices for Securing Election Systems,” U.S. Cybersecurity and Infrastructure Security Agency, November 11, 2022, https://www.cisa.gov/news-events/news/best-practices-securing-election-systems.
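To give a flavor of the “establish a baseline for host and network activity” practice in Table 2, the toy sketch below flags hours whose outbound traffic deviates sharply from a recorded baseline. The numbers and threshold are invented for illustration; real monitoring relies on dedicated tooling rather than a script like this.

```python
from statistics import mean, stdev

def flag_anomalous_hours(baseline_mb, observed_mb, threshold_sd=3.0):
    """Flag hours whose outbound traffic is far outside the recorded baseline.

    baseline_mb: historical per-hour outbound traffic samples (in MB).
    observed_mb: dict mapping an hour label to observed outbound traffic (in MB).
    Returns the hour labels whose traffic exceeds mean + threshold_sd * stdev.
    """
    mu, sigma = mean(baseline_mb), stdev(baseline_mb)
    return [hour for hour, mb in observed_mb.items()
            if mb > mu + threshold_sd * sigma]

# Toy example with invented numbers: a 2 a.m. spike stands out against the baseline.
baseline = [120, 135, 110, 128, 140, 125, 118, 132]
observed = {"01:00": 130, "02:00": 900, "03:00": 127}
print(flag_anomalous_hours(baseline, observed))  # ['02:00']
```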

In addition to prevention, the NIST framework also emphasizes preparedness to respond to and recover from an incident. The 2017 French presidential election provides a celebrated example: Emmanuel Macron’s campaign prepared for an anticipated Russian hack-and-leak operation by creating fake email addresses, messages, and documents so that stolen materials could not be verified and might discredit the leakers.12 Immediate disclosure of all hacking attempts, both to authorities and the public, also built awareness of the disinformation threat to the election. This could be seen as a form of inoculation, or “pre-bunking,” which refers to anticipating a specific disinformation narrative or technique and proactively confronting it before it spreads.13 However, it seems likely that other political, legal, and media factors also played a role in diminishing the influence of the Russian operation.

Unfortunately, public fears of election irregularities cannot always be allayed by truthful assurances that election systems are secure. In the United States (in 2020–2021) and Brazil (in 2022–2023), false rhetorical attacks on the integrity of the electoral process by losing candidates and their supporters led to organized postelection violence.14 A side-by-side comparison of the two examples is revealing, because the two countries have substantially different voting systems. In the United States, a complex set of rules and practices delayed the vote count in a number of states, which laid the groundwork for conspiracy theories of electoral manipulation despite the presence of extensive safeguards and paper-backed auditing mechanisms.15 Brazil, in contrast, has an all-electronic voting system that allows for rapid results—though it lacks a paper trail to enable physical audits.16 Despite these divergent approaches, both countries were destabilized by disinformation about election security.

How Easily Does It Scale?

Improving the cybersecurity of political campaigns faces significant cultural and leadership barriers. Campaigns are ephemeral, frenetic environments. They employ large numbers of temporary workers and volunteers who are minimally vetted and trained. Democratic politics also has an inherently open quality—candidates and surrogates must interact with wide swaths of the public, both in person and online—that runs at cross-purposes with physical and cyber security. Finally, a dollar spent on cybersecurity is a dollar not spent on winning votes.

Given these factors, campaigns and candidates often resist making cybersecurity a priority. In the EU, for example, political parties have chronically underfunded their own digital security.17 A dedicated EU fund could help, but politicians would still need to spend scarce time and attention on cybersecurity and accept the inconveniences that sometimes come with it. When the Netherlands offered cybersecurity training to politicians and government officials before the country’s 2017 elections, few expressed interest.18 One-off or annual trainings are also less effective than more frequent trainings—let alone cultural and organizational shifts in behavior mandated and enforced by leadership.19 While some cultural shifts have occurred in recent years across many countries, both within campaigns and more broadly, political campaigns will likely continue to lag behind other major organizations in their cybersecurity practices.


One advantage of cybersecurity, as compared to other disinformation countermeasures, is that a proven set of best practices already exists and has been widely (if inconsistently) adopted in other sectors. This makes scaling much easier. However, cybersecurity is not necessarily cheap. The size and complexity of national elections and the number of necessary improvements mean that—in the United States, at least—the sums required are significant.20 In 2018, for example, the U.S. Congress allocated $380 million for election security improvements—including cybersecurity—with millions more given by state governments.21 And in 2020, the COVID-19 relief bill allocated another $400 million for elections, with state officials often prioritizing cybersecurity in their grant requests.22 Experts tend to propose even larger and more sustained expenditures.23 The Brennan Center for Justice has called for five-year allocations of $833 million to help state and local governments with cybersecurity, $486 million to secure voter registration infrastructure, and $316 million to protect election agencies from “insider threats.”24

The cost of securing election infrastructure, while not trivial, seems modest given its foundational importance to democracy. Still, governments must find the political will to make such investments. Proposed measures to improve the security of the 2019 elections for the European Parliament faced resistance from member states that viewed the problem as overhyped or were themselves complicit in election disinformation.25

Notes

1 Hack-and-leak operations might be considered malinformation because the offending material is intended to harm yet is often authentic and factual. Alternatively, such operations could be seen as disinformation to the extent that the term encompasses true but highly misleading information (for example, when crucial context is omitted). For more on this taxonomy, and relevant cybersecurity recommendations, see Wardle and Derakhshan, “Information Disorder.” For further examples of cybersecurity recommendations from this period, see also Jean-Baptiste Jeangène Vilmer, “Successfully Countering Russian Electoral Influence: 15 Lessons Learned From the Macron Leaks,” Center for Strategic International Studies, June 2018, https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/180621_Vilmer_Countering_russiam_electoral_influence.pdf; Brattberg and Maurer, “Russian Election Interference”; and Fly, Rosenberger, and Salvo, “The ASD Policy Blueprint.”

2 Robby Mook, Matt Rhoades, and Eric Rosenbach, “Cybersecurity Campaign Playbook,” November 2017, https://www.belfercenter.org/publication/cybersecurity-campaign-playbook.

3 “Cloudflare for Campaigns: United States,” Cloudflare, accessed April 21, 2023, https://www.cloudflare.com/campaigns/usa.

4 “Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 1: Russian Efforts Against Election Infrastructure With Additional Views,” U.S. Senate Select Committee on Intelligence (116th Congress, Report 116-XX), https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume1.pdf.

5 Danielle Root, Liz Kennedy, and Michael Sozan, “Election Security in All 50 States,” Center for American Progress, February 12, 2018, https://www.americanprogress.org/article/election-security-50-states; Brattberg and Maurer, “Russian Election Interference”; “Statement by Secretary Jeh Johnson on the Designation of Election Infrastructure as a Critical Infrastructure Subsector,” U.S. Department of Homeland Security, January 6, 2017, https://www.dhs.gov/news/2017/01/06/statement-secretary-johnson-designation-election-infrastructure-critical; and “Recommendations to Defend America’s Election Infrastructure,” Brennan Center for Justice, October 23, 2019, https://www.brennancenter.org/our-work/research-reports/recommendations-defend-americas-election-infrastructure.

6 “Recommendations,” Brennan Center for Justice.

7 “Starting Point: U.S. Election Systems as Critical Infrastructure,” U.S. Election Assistance Commission, accessed March 28, 2023, https://www.eac.gov/sites/default/files/eac_assets/1/6/starting_point_us_election_systems_as_Critical_Infrastructure.pdf; and “DHS Cybersecurity Services Catalog for Election Infrastructure,” U.S. Department of Homeland Security, accessed March 28, 2023, https://www.eac.gov/sites/default/files/eac_assets/1/6/DHS_Cybersecurity_Services_Catalog_for_Election_Infrastructure.pdf.

8 Root, Kennedy, and Sozan, “Election Security”; “Recommendations,” Brennan Center for Justice; and “Elections Infrastructure Information Sharing & Analysis Center,” Center for Internet Security, accessed April 21, 2023, https://www.cisecurity.org/ei-isac.

9 Root, Kennedy, and Sozan, “Election Security.”

10 “Federal Cybersecurity Progress Report for Fiscal Year 2022,” U.S. General Services Administration, 2022, https://www.performance.gov/cyber. See also “Cross-Sector Cybersecurity Performance Goals,” U.S. Cybersecurity and Infrastructure Security Agency, 2022, https://www.cisa.gov/sites/default/files/publications/2022_00092_CISA_CPG_Report_508c.pdf.

11 “Framework for Improving Critical Infrastructure Cybersecurity,” National Institute of Standards and Technology, April 16, 2018, https://nvlpubs.nist.gov/nistpubs/cswp/nist.cswp.04162018.pdf; and “Cybersecurity Solutions for a Riskier World,” ThoughtLab, 2022, https://thoughtlabgroup.com/cyber-solutions-riskier-world.

12 See Jeangène Vilmer, “Successfully Countering”; and Brattberg and Maurer, “Russian Election Interference.”

13 For more on pre-bunking, see case studies 3 and 7.

14 Dean Jackson and João Guilherme Bastos dos Santos, “A Tale of Two Insurrections: Lessons for Disinformation Research From the Jan. 6 and 8 Attacks,” Lawfare, February 27, 2023, https://www.lawfaremedia.org/article/tale-two-insurrections-lessons-disinformation-research-jan-6-and-8-attacks-0.

15 See William T. Adler, “To Stop Election-Related Misinformation, Give Election Officials the Resources They Need,” Center for Democracy and Technology, November 13, 2020, https://cdt.org/insights/to-stop-election-related-misinformation-give-election-officials-the-resources-they-need.

16 William T. Adler and Dhanaraj Thakur, “A Lie Can Travel: Election Disinformation in the United States, Brazil, and France,” Center for Democracy and Technology, December 2021, https://cdt.org/wp-content/uploads/2021/12/2021-12-13-CDT-KAS-A-Lie-Can-Travel-Election-Disinformation-in-United-States-Brazil-France.pdf. See also Philip Bump, “The Uncomplicated Reason Brazil Can Count Its Ballots So Quickly,” Washington Post, October 31, 2022, https://www.washingtonpost.com/politics/2022/10/31/brazil-elections-vote-count-united-states.

17 Sam van der Staak, “The Weak Link in Election Security: Europe’s Political Parties,” Politico, June 8, 2021, https://www.politico.eu/article/european-election-security-political-parties-cybersecurity.

18 Brattberg and Maurer, “Russian Election Interference.”

19 Isabella Harford, “How Effective Is Security Awareness Training? Not Enough,” TechTarget, April 5, 2022, https://www.techtarget.com/searchsecurity/feature/How-effective-is-security-awareness-training-Not-enough.

20 Lawrence Norden and Edgardo Cortés, “What Does Election Security Cost?” Brennan Center for Justice, August 15, 2019, https://www.brennancenter.org/our-work/analysis-opinion/what-does-election-security-cost.

21 Elizabeth Howard et al., “Defending Elections: Federal Funding Needs for State Election Security,” Brennan Center for Justice, July 18, 2019, https://www.brennancenter.org/our-work/research-reports/defending-elections-federal-funding-needs-state-election-security.

22 Zach Montellaro, “Coronavirus Relief Bill Allocates $400M for Election Response,” Politico, March 26, 2020, https://www.politico.com/newsletters/morning-score/2020/03/26/coronavirus-relief-bill-allocates-400m-for-election-response-786407; and “Funding Safe and Secure Elections During the COVID-19 Pandemic,” e.Republic Center for Digital Government, 2020, https://papers.govtech.com/A-Better-Way-to-Find-New-Jobs-and-Careers-136728.html/Funding-Safe-and-Secure-Elections-During-COVID-19-129933.html. On the general state of election administration finances, see Tom Scheck, Geoff Hing, Sabby Robinson, and Gracie Stockton, “How Private Money From Facebook’s CEO Saved the 2020 Election,” NPR, December 8, 2020, https://www.npr.org/2020/12/08/943242106/how-private-money-from-facebooks-ceo-saved-the-2020-election.

23 Rachel Orey, “New Election Security Funding Positive but Misses the Mark,” Bipartisan Policy Center, February 28, 2023, https://bipartisanpolicy.org/blog/new-election-security-funding.

24 Norden and Cortés, “What Does Election Security Cost?”; and Lawrence Norden, Derek Tisler, and Turquoise Baker, “Estimated Costs for Protecting Election Infrastructure Against Insider Threats,” Brennan Center for Justice, March 7, 2022, https://www.brennancenter.org/our-work/research-reports/estimated-costs-protecting-election-infrastructure-against-insider. See also Derek Tisler and Lawrence Norden, “Estimated Costs for Protecting Election Workers From Threats of Physical Violence,” Brennan Center for Justice, May 3, 2022, https://www.brennancenter.org/our-work/research-reports/estimated-costs-protecting-election-workers-threats-physical-violence.

25 Erik Brattberg, “The EU’s Looming Test on Election Interference,” Carnegie Endowment for International Peace, April 18, 2019, https://carnegieendowment.org/2019/04/18/eu-s-looming-test-on-election-interference-pub-78938; and Sam van der Staak and Peter Wolf, “Cybersecurity in Elections: Models of Interagency Collaboration,” International IDEA, 2019, https://www.idea.int/sites/default/files/publications/cybersecurity-in-elections-models-of-interagency-collaboration.pdf.

Case Study 7: Statecraft, Deterrence, and Disruption

Key takeaways:

- Cyber operations targeting foreign influence actors can temporarily frustrate specific foreign operations during sensitive periods, such as elections, but any long-term effect is likely marginal.
- There is little evidence to show that cyber operations, sanctions, or indictments have achieved strategic deterrence, though some foreign individuals and contract firms may be partially deterrable.
- Bans on foreign platforms and state media outlets have strong first-order effects (reducing access to them); their second-order consequences include retaliation against democratic media by the targeted state.
- All in all, the most potent tool of statecraft may be national leaders’ preemptive efforts to educate the public. Yet in democracies around the world, domestic disinformation is far more prolific and influential than foreign influence operations.

Key sources:

Description and Use Cases

When disinformation and other influence operations stem from abroad, governments can use a range of foreign policy tools to respond. These include sanctions, indictments, media regulation or bans, public statements by government officials, and cyber operations.

The U.S. government has been particularly prolific in many of these areas. After the Russian effort to interfere in the 2016 election, Washington announced a number of sanctions on Russian individuals and organizations.1 It also announced criminal charges—in 2018 against five Russian organizations and nineteen Russian individuals, in 2021 against two Iranian men, and in 2022 against three Russian men, among others.2 Although indictments of foreign nationals for influence operation–related activities are unusual globally, sanctions are becoming more common.3 In 2022, the United Kingdom sanctioned several individuals and media outlets accused of serving as propagandists for Moscow.4 The same