Countering Disinformation Effectively: An Evidence-Based Policy Guide

A high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation.

Published on January 31, 2024


Disinformation is widely seen as a pressing challenge for democracies worldwide. Many policymakers are grasping for quick, effective ways to dissuade people from adopting and spreading false beliefs that degrade democratic discourse and can inspire violent or dangerous actions. Yet disinformation has proven difficult to define, understand, and measure, let alone address.

Even when leaders know what they want to achieve in countering disinformation, they struggle to make an impact and often don’t realize how little is known about the effectiveness of policies commonly recommended by experts. Policymakers also sometimes fixate on a few pieces of the disinformation puzzle—including novel technologies like social media and artificial intelligence (AI)—without considering the full range of possible responses in realms such as education, journalism, and political institutions.

This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. It distills core insights from empirical research and real-world data on ten diverse kinds of policy interventions, including fact-checking, foreign sanctions, algorithmic adjustments, and counter-messaging campaigns. For each case study, we aim to give policymakers an informed sense of the prospects for success—bridging the gap between the mostly meager scientific understanding and the perceived need to act. This means answering three core questions: How much is known about an intervention? How effective does the intervention seem, given current knowledge? And how easy is it to implement at scale?

Overall Findings

  • There is no silver bullet or “best” policy option. None of the interventions considered in this report were simultaneously well-studied, very effective, and easy to scale. Rather, the utility of most interventions seems quite uncertain and likely depends on myriad factors that researchers have barely begun to probe. For example, the precise wording and presentation of social media labels and fact-checks can matter a lot, while counter-messaging campaigns depend on a delicate match of receptive audiences with credible speakers. Bold claims that any one policy is the singular, urgent solution to disinformation should be treated with caution.
  • Policymakers should set realistic expectations. Disinformation is a chronic historical phenomenon with deep roots in complex social, political, and economic structures. It can be seen as jointly driven by forces of supply and demand. On the supply side, there are powerful political and commercial incentives for some actors to engage in, encourage, or tolerate deception, while on the demand side, psychological needs often draw people into believing false narratives. Credible options exist to curb both supply and demand, but technocratic solutionism still has serious limits against disinformation. Finite resources, knowledge, political will, legal authority, and civic trust constrain what is possible, at least in the near- to medium-term.
  • Democracies should adopt a portfolio approach to manage uncertainty. Policymakers should act like investors, pursuing a diversified mixture of counter-disinformation efforts while learning and rebalancing over time. A healthy policy portfolio would include tactical actions that appear well-researched or effective (like fact-checking and labeling social media content). But it would also involve costlier, longer-term bets on promising structural reforms (like supporting local journalism and media literacy). Each policy should come with a concrete plan for ongoing reassessment.
  • Long-term, structural reforms deserve more attention. Although many different counter-disinformation policies are being implemented in democracies, outsized attention goes to the most tangible, immediate, and visible actions. For example, platforms, governments, and researchers routinely make headlines for announcing the discovery or disruption of foreign and other inauthentic online networks. Yet such actions, while helpful, usually have narrow impacts. In comparison, more ambitious but slower-moving efforts to revive local journalism and improve media literacy (among other possibilities) receive less notice despite encouraging research on their prospects.
  • Platforms and tech cannot be the sole focus. Research suggests that social media platforms help to fuel disinformation in various ways—for example, through recommendation algorithms that encourage and amplify misleading content. Yet digital platforms exist alongside, and interact with, many other online and offline forces. The rhetoric of political elites, programming on traditional media sources like TV, and narratives circulating among trusted community members are all highly influential in shaping people’s speech, beliefs, and behaviors. At the same time, the growing number of digital platforms dilutes the effectiveness of actions by any single company to counter disinformation. Given this interplay of many voices and amplifiers, effective policy will involve complementary actions in multiple spheres.
  • Countering disinformation is not always apolitical. Those working to reduce the spread and impact of disinformation often see themselves as disinterested experts and technocrats—operating above the fray of political debate, neither seeking nor exercising political power. Indeed, activities like removing inauthentic social media assets are more or less politically neutral. But other efforts, such as counter-messaging campaigns that use storytelling or emotional appeals to compete with false ideas at a narrative and psychological level, can be hard to distinguish from traditional political advocacy. Ultimately, any institutional effort to declare what is true and what is false—and to back such declarations with power, resources, or prestige—implies some claim of authority and therefore can be seen as having political meaning (and consequences). Denying this reality risks encouraging overreach, or inviting blowback, which deepens distrust.
  • Research gaps are pervasive. The relatively robust study of fact-checking offers clues about the possibilities and the limits of future research on other countermeasures. On the one hand, dedicated effort has enabled researchers to validate fact-checking as a generally useful tool. Policymakers can have some confidence that fact-checking is worthy of investment. On the other hand, researchers have learned that fact-checking’s efficacy can vary a lot depending on a host of highly contextual, poorly understood factors. Moreover, numerous knowledge gaps and methodological biases remain even after hundreds of published studies on fact-checking. Because fact-checking represents the high-water mark of current knowledge about counter-disinformation measures, it can be expected that other measures will likewise require sustained research over long periods—from fundamental theory to highly applied studies.
  • Research is a generational task with uncertain outcomes. The knowledge gaps highlighted in this report can serve as a road map for future research. Filling these gaps will take more than commissioning individual studies; major investments in foundational research infrastructure, such as human capital, data access, and technology, are needed. That said, social science progresses slowly, and it rarely yields definite answers to the most vexing current questions. Take economics, for example: a hundred years of research has helped Western policymakers curb (though not eliminate) depressions, recessions, and panics—yet economists still debate great questions of taxes and trade and are reckoning only belatedly with catastrophic climate risks. The mixed record of economics offers a sobering benchmark for the study of disinformation, which is a far less mature and robust field.
  • Generative AI will have complex effects but might not be a game changer. Rapid AI advances could soon make it much easier and cheaper to create realistic and/or personalized false content. Even so, the net impact on society remains unclear. Studies suggest that people’s willingness to believe false (or true) information is often not primarily driven by the content’s level of realism. Rather, other factors such as repetition, narrative appeal, perceived authority, group identification, and the viewer’s state of mind can matter more. Meanwhile, studies of microtargeted ads—already highly data-driven and automated—cast doubt on the notion that personalized messages are uniquely compelling. Generative AI can also be used to counter disinformation, not just foment it. For example, well-designed and human-supervised AI systems may help fact-checkers work more quickly. While the long-term impact of generative AI remains unknown, it’s clear that disinformation is a complex psychosocial phenomenon and is rarely reducible to any one technology.

Case Study Summaries

  1. Supporting Local Journalism. There is strong evidence that the decline of local news outlets, particularly newspapers, has eroded civic engagement, knowledge, and trust—helping disinformation to proliferate. Bolstering local journalism could plausibly help to arrest or reverse such trends, but this has not been directly tested. Cost is a major challenge, given the expense of quality journalism and the depth of the industry’s financial decline. Philanthropy can provide targeted support, such as seed money for experimentation. But a long-term solution would probably require government intervention and/or alternate business models. This could include direct subsidies (channeled through nongovernmental intermediaries) or indirect measures, such as tax exemptions and bargaining rights.
  2. Media Literacy Education. There is significant evidence that media literacy training can help people identify false stories and unreliable news sources. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another. The most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information—instilling confidence and a sense of responsibility alongside skills development. While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Reaching large numbers of people, including those most susceptible to disinformation, is expensive and takes many years.
  3. Fact-Checking. A large body of research indicates that fact-checking can be an effective way to correct false beliefs about specific claims, especially for audiences that are not heavily invested in the partisan elements of the claims. However, influencing factual beliefs does not necessarily result in attitudinal or behavioral changes, such as reduced support for a deceitful politician or a baseless policy proposal. Moreover, the efficacy of fact-checking depends a great deal on contextual factors—such as wording, presentation, and source—that are not well understood. Even so, fact-checking seems unlikely to cause a backfire effect that leads people to double down on false beliefs. Fact-checkers face a structural disadvantage in that false claims can be created more cheaply and disseminated more quickly than corrective information; conceivably, technological innovations could help shift this balance.
  4. Labeling Social Media Content. There is a good body of evidence that labeling false or untrustworthy content with additional context can make users less likely to believe and share it. Large, assertive, and disruptive labels are the most effective, while cautious and generic labels often do not work. Reminders that nudge users to consider accuracy before resharing show promise, as do efforts to label news outlets with credibility scores. Different audiences may react differently to labels, and there are risks that remain poorly understood: labels can sometimes cause users to become either overly credulous or overly skeptical of unlabeled content, for example. Major social media platforms have embraced labels to a large degree, but further scale-up may require better information-sharing or new technologies that combine human judgment with algorithmic efficiency.
  5. Counter-messaging Strategies. There is strong evidence that truthful communications campaigns designed to engage people on a narrative and psychological level are more effective than facts alone. By targeting the deeper feelings and ideas that make false claims appealing, counter-messaging strategies have the potential to impact harder-to-reach audiences. Yet success depends on the complex interplay of many inscrutable factors. The best campaigns use careful audience analysis to select the most resonant messengers, mediums, themes, and styles—but this is a costly process whose success is hard to measure. Promising techniques include communicating respect and empathy, appealing to prosocial values, and giving the audience a sense of agency.
  6. Cybersecurity for Elections and Campaigns. There is good reason to think that campaign- and election-related cybersecurity can be significantly improved, which would prevent some hack-and-leak operations and fear-inducing breaches of election systems. The cybersecurity field has come to a strong consensus on certain basic practices, many of which remain unimplemented by campaigns and election administrators. Better cybersecurity would be particularly helpful in preventing hack-and-leaks, though candidates will struggle to prioritize cybersecurity given the practical imperatives of campaigning. Election systems themselves can be made substantially more secure at a reasonable cost. However, there is still no guarantee that the public would perceive such systems as secure in the face of rhetorical attacks by losing candidates.
  7. Statecraft, Deterrence, and Disruption. Cyber operations targeting foreign influence actors can temporarily frustrate specific foreign operations during sensitive periods, such as elections, but any long-term effect is likely marginal. There is little evidence to show that cyber operations, sanctions, or indictments have achieved strategic deterrence, though some foreign individuals and contract firms may be partially deterrable. Bans on foreign platforms and state media outlets have strong first-order effects (reducing access to them); their second-order consequences include retaliation against democratic media by the targeted state. All in all, the most potent tool of statecraft may be national leaders’ preemptive efforts to educate the public. Yet in democracies around the world, domestic disinformation is far more prolific and influential than foreign influence operations.
  8. Removing Inauthentic Asset Networks. The detection and removal from platforms of accounts or pages that misrepresent themselves has obvious merit, but its effectiveness is difficult to assess. Fragmentary data—such as unverified company statements, draft platform studies, and U.S. intelligence—suggest that continuous takedowns might be capable of reducing the influence of inauthentic networks and imposing some costs on perpetrators. However, few platforms even claim to have achieved this, and the investments required are considerable. Meanwhile, the threat posed by inauthentic asset networks remains unclear: a handful of empirical studies suggest that such networks, and social media influence operations more generally, may not be very effective at spreading disinformation. These early findings imply that platform takedowns may receive undue attention in public and policymaking discourse.
  9. Reducing Data Collection and Targeted Ads. Data privacy protections can be used to reduce the impact of microtargeting, or data-driven personalized messages, as a tool of disinformation. However, nascent scholarship suggests that microtargeting—while modestly effective in political persuasion—falls far short of the manipulative powers often ascribed to it. To the extent that microtargeting works, privacy protections seem to measurably undercut its effectiveness. But this carries high economic costs—not only for tech and ad companies, but also for small and medium businesses that rely on digital advertising. Additionally, efforts to blunt microtargeting can raise the costs of political activity in general, especially for activists and minority groups who lack access to other communication channels.
  10. Changing Recommendation Algorithms. Although platforms are neither the sole sources of disinformation nor the main causes of political polarization, there is strong evidence that social media algorithms intensify and entrench these off-platform dynamics. Algorithmic changes therefore have the potential to ameliorate the problem; however, this has not been directly studied by independent researchers, and the market viability of such changes is uncertain. Major platforms’ optimizing for something other than engagement would undercut the core business model that enabled them to reach their current size. Users could opt in to healthier algorithms via middleware or civically minded alternative platforms, but most people probably would not. Additionally, algorithms are blunt and opaque tools: using them to curb disinformation would also suppress some legitimate content.


The authors wish to thank William Adler, Dan Baer, Albin Birger, Kelly Born, Jessica Brandt, David Broniatowski, Monica Bulger, Ciaran Cartmell, Mike Caulfield, Tímea Červeňová, Rama Elluru, Steven Feldstein, Beth Goldberg, Stephanie Hankey, Justin Hendrix, Vishnu Kannan, Jennifer Kavanagh, Rachel Kleinfeld, Samantha Lai, Laura Livingston, Peter Mattis, Tamar Mitts, Brendan Nyhan, George Perkovich, Martin Riedl, Ronald Robertson, Emily Roseman, Jen Rosiere Reynolds, Zeve Sanderson, Bret Schafer, Leah Selig Chauhan, Laura Smillie, Rory Smith, Victoria Smith, Kate Starbird, Josh Stearns, Gerald Torres, Meaghan Waff, Alicia Wanless, Laura Waters, Gavin Wilde, Kamya Yadav, and others for their valuable feedback and insights. Additional thanks to Joshua Sullivan for research assistance and to Alie Brase, Lindsay Maizland, Anjuli Das, Jocelyn Soly, Amy Mellon, and Jessica Katz for publications support. The final report reflects the views of the authors only. This research was supported by a grant from the Special Competitive Studies Project.

About the Authors

Jon Bateman is a senior fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. His research areas include disinformation, cyber operations, artificial intelligence, and techno-nationalism. Bateman previously was special assistant to Chairman of the Joint Chiefs of Staff General Joseph F. Dunford, Jr., serving as a speechwriter and the lead strategic analyst in the chairman’s internal think tank. He has also helped craft policy for military cyber operations in the Office of the Secretary of Defense and was a senior intelligence analyst at the Defense Intelligence Agency, where he led teams responsible for assessing Iran’s internal stability, senior-level decisionmaking, and cyber activities. Bateman is a graduate of Harvard Law School and Johns Hopkins University.

Dean Jackson is principal of Public Circle Research & Consulting and a specialist in democracy, media, and technology. In 2023, he was named an inaugural Tech Policy Press reporting fellow and an affiliate fellow with the Propaganda Research Lab at the University of Texas at Austin. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol and project manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace. From 2013 to 2021, Jackson managed research and program coordination activities related to media and technology at the National Endowment for Democracy. He holds an MA in international relations from the University of Chicago and a BA in political science from Wright State University in Dayton, OH.


1 The cells of this table are color-coded: green suggests the most positive assessment for each factor, while red is the least positive and yellow is in between. These overall ratings are a combination of various subfactors, which may be in tension: for example, an intervention can be highly effective but only for a short time or with high risk of second-order consequences.

A green cell means an intervention is well studied, likely to be effective, or easy to implement. For the first column, this means there is a large body of literature on the topic. While it may not conclusively answer every relevant question, it provides strong indicators of effectiveness, cost, and related factors. For the second column, a green cell suggests that an intervention can be highly effective at addressing the problem in a lasting way at a relatively low level of risk. For the third column, a green cell means that the intervention can quickly make a large impact at relatively low cost and without major obstacles to successful implementation.

A yellow cell indicates an intervention is less well studied (there is relevant literature but major questions about efficacy are unanswered or significantly underexplored), less efficacious (its impact is noteworthy but limited in size or duration, or it carries some risk of blowback), or faces nonnegligible hurdles to implementation, such as cost, technical barriers, or political opposition.

A red cell indicates that an intervention is poorly understood, with little literature offering guidance on key questions; that it is low impact, has only narrow use cases, or has significant second-order consequences; or that it requires an especially high investment of resources or political capital to implement or scale.


This report offers high-level, evidence-informed assessments of ten commonly proposed ways to counter disinformation. It summarizes the quantity and quality of research, the evidence of efficacy, and the ease of scalable implementation. Building on other work that has compiled policy proposals or collected academic literature, this report seeks to synthesize social science and practitioner knowledge for an audience of policymakers, funders, journalists, and others in democratic countries.1 Rather than recommending a specific policy agenda, it aims to clarify key considerations that leaders should weigh based on their national and institutional contexts, available resources, priorities, and risk tolerance.

To conduct this research, we compiled a list of nearly two dozen counter-disinformation measures frequently proposed by experts, scholars, and policymakers.2 We then selected ten for inclusion based on several factors. First, we prioritized proposals that had a fairly direct connection to the problem of disinformation. For example, we excluded antitrust enforcement against tech companies because it affects disinformation in an indirect way, making it difficult to evaluate in this report. Second, we focused on countermeasures that could plausibly be subject to meaningful empirical study. We therefore did not consider diplomatic efforts to build international norms against disinformation, for example, or changes to platforms’ legal liability as intermediaries. Third, we sought to cover a diverse range of interventions. This meant including actions implementable by the government, the private sector, and civil society; tactical measures as well as structural reforms; and multiple theories of change such as resilience, disruption, and deterrence.

The ten selected interventions became the subjects of this report’s ten case studies. Each case study defines the intervention, gives concrete use cases, and highlights additional reading. The case studies focus on three questions: How much is known about an intervention? How effective does it seem, given current knowledge? And how easy is it to implement at scale? To develop these case studies, we reviewed hundreds of academic papers, previous meta-analyses, programmatic literature, and other relevant materials. We also conducted a series of workshops and consultations with scholars, practitioners, policymakers, and funders. We drew on experts with domain knowledge to vet individual case studies, as well as those with a broader view of the counter-disinformation field to provide feedback on the project as a whole. The resulting report expresses the views of the authors alone.

Although this report reviews a number of important, commonly proposed policy ideas, it is not comprehensive. In particular, we did not study the following significant categories of long-term, large-scale change. First, political institutions could try to perform stronger gatekeeping functions. This may involve reforms of party primaries, redistricting processes, and campaign finance systems. Second, tech platforms might need stronger incentives and capacity to curb disinformation. This could involve new regulation, diversification of revenue, and market power reductions that enable users, advertisers, activists, and others to provide checks on major platforms. Third, the public may need more encouragement to value truth and place trust in truthful institutions and figures. This might involve addressing the many root causes of popular alienation, fear, and anger, such as with local community-building efforts, a reversal of geographic sorting, improvements to economic prospects, and healing of racial grievances. Any of these ideas would be daunting to implement, and none are easy to assess. But they all have serious potential to help counter disinformation—perhaps even more so than the ten interventions studied in this report.


1 See “Final Report: Commission on Information Disorder,” Aspen Institute, November 2021; Daniel Arnaudo et al., “Combating Information Manipulation: A Playbook for Elections and Beyond,” National Democratic Institute, International Republican Institute, and Stanford Internet Observatory, September 2021; “Center of Excellence on Democracy, Human Rights, and Governance: Disinformation Primer,” U.S. Agency for International Development, February 2021; and Laura Courchesne, Julia Ilhardt, and Jacob N. Shapiro, “Review of Social Science Research on the Impact of Countermeasures Against Influence Operations,” Harvard Kennedy School Misinformation Review 2, no. 5 (September 2021).

2 This list was drawn from multiple sources, including Kamya Yadav, “Countering Influence Operations: A Review of Policy Proposals Since 2016,” Carnegie Endowment for International Peace, November 30, 2020; a more detailed, unpublished database of policy proposals compiled by Vishnu Kannan in 2022; Courchesne, Ilhardt, and Shapiro, “Review of Social Science Research”; and “The 2022 Code of Practice on Disinformation,” European Commission, accessed January 27, 2023. These sources were supplemented by further literature review and expert feedback.

Challenges and Cautions

Before seeking to counter disinformation, policymakers should carefully consider what this idea means. “Disinformation,” usually defined as information known by the speaker to be false, is a notoriously tricky concept that comes with numerous limitations, contradictions, and risks.1

Conceptual Challenges

Identifying disinformation presents several puzzles. For one thing, labeling any claim as false requires invoking an authoritative truth. Yet the institutions and professions most capable of discerning the truth—such as science, journalism, and courts—are sometimes wrong and often distrusted. Moreover, true facts can be selectively assembled to create an overall narrative that is arguably misleading but not necessarily false in an objective sense. This may be even more common and influential than outright lies, yet it’s unclear whether it counts as disinformation. In fact, “disinformation” is frequently conflated with a range of other political and societal maladies such as polarization, extremism, and hate. All of these are technically distinct issues, though they can be causally related to disinformation and to each other. Finally, it is difficult to know whether someone spreading false claims does so intentionally. Disinformation typically passes through a long chain of both witting and unwitting speakers.

The challenges of the term “disinformation” are not merely theoretical; they have influenced public debates. Despite the word’s scientific-sounding air, it is often invoked quite loosely to denigrate any viewpoint seen as wrong, baseless, disingenuous, or harmful. Such usage has the effect of pathologizing swaths of routine discourse: after all, disagreements about what is wrong, baseless, disingenuous, or harmful are what drive democratic politics and social change. Moreover, today’s talk of “disinformation” can sometimes imply a more novel, solvable problem than really exists. Although the word has been familiar in the West for decades, it attained new currency just a few years ago after a series of catalyzing episodes—such as Russian election interference in the United States—involving social media. This led many people to see social media as the defining cause of disinformation, rather than one driver or manifestation of it. The messy battle for truth is, of course, an eternal aspect of human society.

For policymaking, reliance on a loaded but vague idea like “disinformation” brings several risks. When the term is used to imply that normal and necessary public discourse is dangerously disordered, it encourages the empowerment of technocrats to manage speech and, in turn, potentially erodes legal and normative boundaries that sustain democracy. Moreover, the term’s vagaries and contradictions are already well understood by segments of the public and have been seized upon, including by disinformers themselves, to undermine counter-disinformation efforts. In some cases, those accused of spreading disinformation have successfully sought to reclaim the term by arguing that counter-disinformation efforts are the real sources of disinformation, thus reversing the roles of perpetrator and victim.

This risk is most obvious in authoritarian regimes and flawed democracies, where leaders may suppress dissent by labeling it disinformation. But the problem can manifest in other ways too. A prominent U.S. example was the 2020 public letter by former intelligence officials warning that the then-recent disclosure of Hunter Biden’s laptop data “has all the classic earmarks of a Russian information operation.”2 Later, when the data’s authenticity was largely confirmed, those promoting the laptop story said the letter itself was a form of disinformation.3 Similar boomerang patterns have previously been seen with “fake news,” a phrase that originally described unethical content farms but was quickly repurposed to delegitimize truthful journalism. To be sure, such boomerangs often rest on exaggerated or bad faith claims. Yet they exploit a core truth: “disinformation” is a flawed, malleable term whose implied assertion of authority can lead to overreach and blowback.

For these and other reasons, a growing number of experts reject the term “disinformation.” Some prefer to focus instead on “misinformation” (which elides intent) or “influence/information operations” (which de-emphasizes falsity). Others favor more self-consciously political terms such as “propaganda” or “information warfare,” which they see as clearer warnings of the problem. A range of alternative conceptions have been proposed, including “malinformation” and “information disorder.” Recently, some experts have advocated holistic concepts, like “information ecology” or “information and society,” that shift attention away from individual actors or claims and toward larger social systems. Meanwhile, platforms have developed their own quasi-legalistic argot—such as Meta’s “coordinated inauthentic behavior”—to facilitate governance and enforcement.

There is also a growing set of scholars and commentators who believe the field itself, not just its terminology, must be fundamentally rethought.4 Some point out that disinformation and its ilk are elastic notions that tend to reflect the biases of whoever invokes them. Others observe that disinformation isn’t pervasive or influential enough to explain the ills often attributed to it. Several critics have gone so far as to label the disinformation crisis a moral panic, one suffered most acutely by elite groups. On this telling, privileged and expert classes—such as the White liberals who for decades dominated academia and journalism in the United States—have seized upon a perceived surge of disinformation to explain their recent loss of control over the national discourse. This story, rooted in nostalgia for a mythical era of shared truth, offers a comforting, depoliticized morality play: right-thinking in-groups are under siege by ignorant out-groups in thrall to manipulative (often foreign) bogeymen. The narrative has troubling historical antecedents, such as baseless Cold War–era fears of communist “brainwashing” that led to curtailment of civil liberties in the West.

Despite all these complications and pitfalls, this report begrudgingly embraces the term “disinformation” for three primary reasons. First, it captures a specific, real, and damaging phenomenon: malicious falsehoods are undermining democratic stability and governance around the world. However difficult it may be to identify or define disinformation at the edges, a set of core cases clearly exists and deserves serious attention from policymakers. A paradigmatic example is the “Stop the Steal” movement in the United States. The claim that the 2020 presidential election was stolen is provably false, was put forward with demonstrated bad faith, and has deeply destabilized the country. Second, other phrases have their own problems, and no single term has yet emerged as a clearly better alternative. Third, “disinformation” remains among the most familiar terms for policymakers and other stakeholders who constitute the key audience for this report.

Evaluation Challenges

Beyond the conceptual issues, policymakers should also be aware of several foundational challenges in assessing the efficacy of disinformation countermeasures. Each of these challenges emerged time and again in the development of this report’s case studies.

  • The underlying problem is hard to measure. It is hard to know how well a countermeasure works if analysts don’t also know how much impact disinformation has, both before and after the countermeasure is implemented. In fact, countermeasures are only necessary insofar as disinformation is influential to begin with. Unfortunately, experts broadly agree that disinformation (like other forms of influence) is poorly understood and hard to quantify. A 2021 Princeton University meta-analysis commissioned by Carnegie found that “[e]mpirical research on how influence operations can affect people and societies—for example, by altering beliefs, changing voting behavior, or inspiring political violence—is limited and scattered.”5 It specifically noted that “empirical research does not yet adequately answer many of the most pressing questions facing policymakers” regarding the effectiveness of various influence tactics, the role of the medium used (such as specific online platforms), the duration of influence effects, and country-level differences. Until more is known about disinformation itself, the ability to assess countermeasures will remain limited.
  • Success can be defined in multiple ways. What makes an intervention successful in countering disinformation? An effective intervention might be one that stops someone from embracing a false belief, or discourages people from acting based on false claims, or slows the spread of false information, or protects the integrity of democratic decisionmaking, among other possibilities. All of these effects can be measured over varying time horizons. Additionally, effectiveness is tied to an intervention’s cost, scalability, and the willingness of key stakeholders to facilitate implementation. The risk of blowback is another factor: decisionmakers should consider potential second-, third-, and higher-order effects on the information environment. In short, there is no single way to understand success. Policymakers must decide this for themselves.
  • Policies can coincide, synergize, and conflict with each other. This report offers discrete evaluations of ten countermeasure types. In reality, multiple kinds of interventions should be implemented at the same time. Simultaneous, interconnected efforts are necessary to address the many complex drivers of disinformation. Policymakers and analysts must therefore avoid judging any one policy option as if it could or should provide a comprehensive solution. An ideal assessment would consider how several interventions can work together, including potential synergies, conflicts, and trade-offs. Such holistic analysis would be extremely difficult to do, however, and is beyond the scope of this report.
  • Subpopulations matter and may react differently. Many studies of disinformation countermeasures focus on their overall efficacy with respect to the general population, or the “average” person. However, atypical people—those at the tails of the statistical distribution—sometimes matter more. People who consume or share the largest amount of disinformation, hold the most extreme or conspiratorial views, have the biggest influence in their social network, or harbor the greatest propensity for violence often have disproportionate impact on society. Yet these tail groups are harder to study. Policymakers should take care not to assume that interventions which appear generally effective have the same level of impact on important tail groups. Conversely, interventions that look ineffective at a population level may still be able to influence key subpopulations.
  • Findings may not generalize across countries and regions. The feasibility and impact of an intervention can vary from place to place. For example, the United States is more polarized than most other advanced democracies, and it faces greater constitutional constraints and government gridlock. On the other hand, the United States has outsized influence over the world’s leading social media platforms and possesses relatively wealthy philanthropic institutions and, at the national level, a robust independent press. These kinds of distinctive characteristics will shape what works in the United States, while other countries must consider their own national contexts. Unfortunately, much of the available research focuses on the United States and a handful of other wealthy Western democracies. This report incorporates some examples from other countries, but geographic bias remains present.

These evaluation challenges have no easy solutions. Researchers are working to fill knowledge gaps and define clearer policy objectives, but doing so will take years or even decades. Meanwhile, policymakers must somehow forge ahead. Ideally, they will draw upon the best information available while remaining cognizant of the many unknowns. The following case studies are designed with those twin goals in mind.


1 Alicia Wanless and James Pamment, “How Do You Define a Problem Like Influence?,” Carnegie Endowment for International Peace, December 30, 2019. For more on the distinction between misinformation and disinformation, see Dean Jackson, “Issue Brief: Distinguishing Disinformation From Propaganda, Misinformation, and ‘Fake News,’” National Endowment for Democracy, October 17, 2017.

2 Jim Clapper et al., “Public Statement on the Hunter Biden Emails,” Politico, October 19, 2020,

3 Luke Broadwater, “Officials Who Cast Doubt on Hunter Biden Laptop Face Questions,” New York Times, May 16, 2023,

4 See, for example, Joseph Bernstein, “Bad News: Selling the Story of Disinformation,” Harper’s, 2021; Rachel Kuo and Alice Marwick, “Critical Disinformation Studies: History, Power, and Politics,” Harvard Kennedy School Misinformation Review, August 12, 2021; Alice Marwick, Rachel Kuo, Shanice Jones Cameron, and Moira Weigel, “Critical Disinformation Studies: A Syllabus,” Center for Information, Technology, and Public Life, 2021; Ben Smith, “Inside the ‘Misinformation’ Wars,” New York Times, November 28, 2021; Matthew Yglesias, “The Misinformation Cope,” Slow Boring, April 20, 2022; Théophile Lenoir, “Reconsidering the Fight Against Disinformation,” Tech Policy Press, August 1, 2022; Dan Williams, “Misinformation Researchers Are Wrong: There Can’t Be a Science of Misleading Content,” Conspicuous Cognition, January 10, 2024; and Gavin Wilde, “From Panic to Policy: The Limits of Propaganda and the Foundations of an Effective Response,” Texas National Security Review (forthcoming 2024).

5 Jon Bateman, Elonnai Hickok, Laura Courchesne, Isra Thange, and Jacob N. Shapiro, “Measuring the Effects of Influence Operations: Key Findings and Gaps From Empirical Research,” Carnegie Endowment for International Peace, June 28, 2021.

Case Study 1: Supporting Local Journalism

Key takeaways:

There is strong evidence that the decline of local news outlets, particularly newspapers, has eroded civic engagement, knowledge, and trust—helping disinformation to proliferate. Bolstering local journalism could plausibly help to arrest or reverse such trends, but this has not been directly tested. Cost is a major challenge, given the expense of quality journalism and the depth of the industry’s financial decline. Philanthropy can provide targeted support, such as seed money for experimentation. But a long-term solution would probably require government intervention and/or alternate business models. This could include direct subsidies (channeled through nongovernmental intermediaries) or indirect measures, such as tax exemptions and bargaining rights.

Key sources:

Description and Use Cases

Many analysts have called for investing in local journalism—especially print and digital media—as a way to counter disinformation. The hope is that high-quality local journalism can inform democratic deliberation, debunk false claims, and restore the feelings of trust and community that help to keep conspiracy theories at bay.1 More specifically, new financial investments would aim to halt or reverse the industry’s long-term financial deterioration. Local newspapers and other outlets have seen steady declines in ad revenue and readership for the last two decades, as the internet gave birth to more sophisticated forms of digital advertising and alternative sources of free information. According to one count, a fourth of the newspapers operating in the United States in 2004 had closed by the end of 2020.2 The COVID-19 pandemic accelerated this trend, causing widespread layoffs across print, broadcast, radio, and digital outlets.3 Such challenges have not been limited to the United States or Western countries: for example, COVID-19 “ravaged the revenue base” of Nigerian media organizations, according to one local publisher.4

New funding for local journalism could come from governments, philanthropists, commercial sources, or a combination of these. One model for government funding is the New Jersey Civic Information Consortium, a state-supported nonprofit. The consortium receives money from government and private sources, then disburses grants to projects that promote the “quantity and quality of civic information.”5 The use of a nonprofit intermediary aims to reduce the risk that government officials would leverage state funds to influence news coverage.6 Another model is for governments to use tax exemptions and other policy tools to financially boost the journalism industry without directly subsidizing it.7 In the United Kingdom, newspapers, books, and some news sites are exempt from the Value-Added Tax because of their public benefit.8 In Canada, people who purchase a digital news subscription can claim a tax exemption.9 Australia has taken another approach by passing legislation that empowers news publishers to jointly negotiate for compensation when platforms like Facebook and Google link to their content.10 Other advocates have proposed a tax on digital advertising that would be used to support journalism.11

Philanthropic support for local journalism can also come in various forms. Not-for-profit news outlets in North America currently get about half of their revenue from foundation grants, but national and global outlets receive more than two-thirds of these grant dollars.12 To bolster local outlets, a greater portion of grants could be redirected to them. The next largest source of funding for nonprofit newsrooms is individual gifts, which make up about 30 percent of revenue and primarily come from donations of $5,000 or more.13 However, small-dollar donations are growing; NewsMatch, a U.S. fundraising effort, encourages audiences to donate to local media organizations and matches individual donations with other sources of philanthropy. NewsMatch has raised more than $271 million since 2017.14

Multiple government, philanthropic, or commercial revenue streams can be combined in novel ways, as illustrated by Report for America. The initiative raised $8 million in 2022 to place reporting fellows in local newsrooms.15 A relatively small portion, about $650,000, was taxpayer money from the Corporation for Public Broadcasting.16 The remainder came from foundations and technology companies, matched dollar-for-dollar by contributions split between the local newsrooms themselves and other local funders.

How Much Do We Know?

Research is clear that the decline of local journalism is associated with the drivers of disinformation. However, the inverse proposition—that greater funding for local journalists will reduce disinformation—does not automatically follow and has not been empirically tested.

Going forward, decisionmakers and scholars could probe the link between disinformation and the health of local media more closely. One approach would be to monitor and evaluate the impact of local news startups on a variety of metrics related to disinformation, such as polarization, professed trust in institutions like the media and government, civic engagement and voter turnout, and susceptibility to online rumors.

How Effective Does It Seem?

Studies suggest at least two mechanisms whereby the decline of local media outlets can fuel the spread of disinformation.

First, the decline contributes to civic ignorance and apathy as voters become less informed about the issues, candidates, and stakes in local elections. Research indicates that reduced access to local news is linked to lower voter turnout and civic engagement as well as increased corruption and mismanagement. At least one longitudinal study also links the decline of local news to diminished civic awareness and engagement.17 These conditions ultimately erode public trust, which can increase belief in misinformation and conspiracy theories.18 Conversely, scholarship has shown that strong local media is linked to robust civic participation. Many studies correlate the existence of local newspapers with higher turnout in local elections. And, at the individual level, a person’s consumption of local political news is associated with a higher likelihood of voting.19 These patterns appear in a variety of electoral contexts—including downballot and judicial elections—and across historical periods, despite changing technology.20 A study of U.S. history from 1869 to 2004 found that a community’s civic participation rose when its first newspaper was created, and that this connection persisted even after the introduction of radio and television.21

Second, when local media disappears, lower-quality information sources can fill the gap as people look elsewhere for information. Social media has emerged as a primary alternative.22 Although social media platforms contain plenty of accurate and authoritative voices, they also create opportunities for low-quality and hyperpartisan personalities and outlets (some of which pose as local newspapers) that spread misleading, divisive content.23 Indeed, research shows a connection between the decline of local media and the rise of polarization. For example, one study found that communities that lost their local newspaper became more polarized as voters replaced information from local media with more partisan cues picked up elsewhere, such as national cable TV.24 To be sure, polarizing content should not be equated with disinformation. Nevertheless, most analysts believe the two are linked: as voters drift away from the “mainstream” of the political spectrum—often, but not always, toward the right—they may become more accepting of less credible alternative media sources and misleading claims that align with their partisan preferences and demonize political opponents.25

Given the evidence that local media declines breed susceptibility to disinformation, it is reasonable to predict that efforts to bolster local media could have the opposite effect. However, that prediction has not yet been empirically tested. It is possible, for example, that people who have drifted from traditional local journalism toward social media as an information source might have developed new habits that would be difficult to reverse. Likewise, communities that have suffered a general loss of civic engagement and trust due to the decline of local media might now have less interest or faith in a startup newsroom than they previously would have.

How Easily Does It Scale?

Reversing the decline of local journalism is an extremely costly proposition, at least in the United States, because the scale of downsizing has been so large. A Georgetown University study found that newspapers employed 150,000 fewer people in 2022 than in the 1980s—a decline of 63 percent. Although web publishers have replaced about half of those jobs, replacing the rest would require tremendous investment. For example, the American Journalism Project raised over $100 million to partially fund thirty-three nonprofit newsrooms—a small fraction of the 2,100 newsrooms that closed in the United States in the past two decades.26 Washington Post columnist Perry Bacon Jr. estimated in 2022 that it would cost at least $10 billion per year to hire 87,000 new journalists—that is, to ensure that each U.S. congressional district had 200 journalists, plus operational support.27 More localized coverage could be even costlier. In 2022, Democracy Fund created a calculator to estimate the total cost of meeting the information needs of every community in the United States. Hiring several reporters to cover crucial issues in each township and municipality would cost $52 billion per year.28
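Bacon’s $10 billion figure is a simple back-of-envelope calculation. The sketch below reproduces it using only the numbers cited above; the implied per-journalist cost is derived from those figures, not stated in the source.

```python
# Back-of-envelope sketch of Bacon's estimate. Inputs are the figures cited
# in the text; the per-journalist cost is a derived quantity, not a source figure.
US_CONGRESSIONAL_DISTRICTS = 435
JOURNALISTS_PER_DISTRICT = 200
ESTIMATED_ANNUAL_COST = 10_000_000_000  # Bacon's $10 billion/year floor

# 435 districts x 200 journalists each = 87,000 new journalists
total_journalists = US_CONGRESSIONAL_DISTRICTS * JOURNALISTS_PER_DISTRICT

# Implied all-in annual cost per journalist (salary plus operational support)
cost_per_journalist = ESTIMATED_ANNUAL_COST / total_journalists

print(f"{total_journalists:,} journalists")
print(f"~${cost_per_journalist:,.0f} per journalist per year, all-in")
```

The implied all-in cost of roughly $115,000 per journalist per year is a plausible fully loaded figure (salary, benefits, editing, and overhead), which is why the $10 billion estimate is best read as a floor rather than a precise budget.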

Philanthropy can provide targeted investments in particularly needy areas—for example, communities too small or poor to sustain local media on their own—and offer seed money to run experiments. But given the sums required, a large-scale solution would demand some combination of long-term government support, new journalistic business models, or other structural changes in the marketplace. The Australian bargaining law provides one promising case study. While critics said the approach would be unlikely to generate much revenue and would mostly benefit large publishers, an Australian government review found that Google and Meta reached thirty agreements with publications of varying size, including some groups of outlets. In its first year, the law raised more than $140 million for these outlets, much of which was used to hire new journalists and purchase equipment.29 Similar schemes are now being implemented in Canada and under consideration in California—though these efforts, like the Australian law, have faced strong initial pushback from big tech companies.30


1 Consider David Salvo, Jamie Fly, and Laura Rosenberger, “The ASD Policy Blueprint for Countering Authoritarian Interference in Democracies,” German Marshall Fund, June 26, 2018; “A Multi-dimensional Approach to Disinformation,” European Commission, 2018; and Edward Lucas and Peter Pomerantsev, “Winning the Information War: Techniques and Counter-strategies to Russian Propaganda in Central and Eastern Europe,” Center for European Policy Analysis, 2016.

2 Tom Stites, “A Quarter of All U.S. Newspapers Have Died in 15 Years, a New UNC News Deserts Study Found,” Poynter Institute, June 24, 2020,

3 Penelope Muse Abernathy, “News Deserts and Ghost Newspapers: Will Local News Survive?,” University of North Carolina, 2020,; and “The Tow Center COVID-19 Newsroom Cutback Tracker,” Tow Center for Digital Journalism, September 9, 2020,

4 For example, see Dapo Olorunyomi, “Surviving the Pandemic: The Struggle for Media Sustainability in Africa,” National Endowment for Democracy, January 2021,

5 “About the Consortium,” New Jersey Civic Information Consortium, accessed January 27, 2023,

6 Anya Schiffrin, ed., In the Service of Power: Media Capture and the Threat to Democracy, Center for International Media Assistance, 2017,

7 Consider “INN Mission & History,” Institute for Nonprofit News, accessed January 27, 2023,

8 Jim Waterson, “VAT Ruling on Times Digital Edition Could Save News UK Millions,” Guardian, January 6, 2020,

9 “About the Digital News Subscription Tax Credit,” Government of Canada, accessed March 24, 2023,

10 “News Media Bargaining Code,” Australian Competition & Consumer Commission, accessed January 27, 2023,

11 Julia Angwin, “Can Taxing Big Tech Save Journalism?” Markup, July 16, 2022,

12 “INN Index 2022: Enduring in Crisis, Surging in Local Communities,” Institute for Nonprofit News, July 27, 2022,; and “Newsmatch,” Institute for Nonprofit News, accessed April 18, 2023,

13 “INN Index 2022,” Institute for Nonprofit News.

14 “Newsmatch,” Institute for Nonprofit News.

15 “About Us,” Report for America, accessed January 27, 2023,

16 “Supporting Report for America,” Report for America, accessed December 23, 2023,

17 Danny Hayes and Jennifer L. Lawless, “The Decline of Local News and Its Effects: New Evidence from Longitudinal Data,” Journal of Politics 80, no. 1 (January 2018):

18 The relationship between disinformation and trust in media, government, and other institutions is complex. Exposure to false content online is associated with lower trust in media but higher trust in government among conservatives when their preferred party is in power. Lack of trust in institutions is associated with higher belief in conspiracy theories, for example in the context of COVID-19 vaccination. See Katherine Ognyanova, David Lazer, Ronald E. Robertson, and Christo Wilson, “Misinformation in Action: Fake News Exposure Is Linked to Lower Trust in Media, Higher Trust in Government When Your Side Is in Power,” Harvard Kennedy School Misinformation Review, June 2, 2020; and Will Jennings et al., “Lack of Trust, Conspiracy Beliefs, and Social Media Use Predict COVID-19 Vaccine Hesitancy,” Vaccines 9, no. 6 (June 2021). See also Jay Jennings and Meghan Rubado, “Newspaper Decline and the Effect on Local Government Coverage,” University of Texas at Austin, November 2019; Jackie Filla and Martin Johnson, “Local News Outlets and Political Participation,” Urban Affairs Review 45, no. 5 (2010); “2021 Edelman Trust Barometer,” Edelman, 2021; and Jeffrey Hiday, “Combating Disinformation by Bolstering Truth and Trust,” RAND Corporation, May 24, 2020.

19 Martin Baekgaard, Carsten Jensen, Peter B. Mortensen, and Søren Serritzlew, “Local News Media and Voter Turnout,” Local Government Studies 40 (2014):

20 Christopher Chapp and Peter Aehl, “Newspapers and Political Participation: The Relationship Between Ballot Rolloff and Local Newspaper Circulation,” Newspaper Research Journal 42, no. 2 (2021); and David Hughes, “Does Local Journalism Stimulate Voter Participation in State Supreme Court Elections?,” Journal of Law and Courts 8, no. 1 (2020).

21 Matthew Gentzkow, Jesse M. Shapiro, and Michael Sinkinson, “The Effect of Newspaper Entry and Exit on Electoral Politics,” American Economic Review 101 (December 2011). For a roundup of this research, see Josh Stearns and Christine Schmidt, “How We Know Journalism Is Good for Democracy,” Democracy Fund, September 15, 2022.

22 David S. Ardia, Evan Ringel, Victoria Ekstrand, and Ashley Fox, “Addressing the Decline of Local News, Rise of Platforms, and Spread of Mis- and Disinformation Online: A Summary of Current Research and Policy Proposals,” University of North Carolina, December 22, 2020,

23 Jessica Mahone and Philip Napoli, “Hundreds of Hyperpartisan Sites Are Masquerading as Local News. This Map Shows If There’s One Near You,” Nieman Lab, July 13, 2020,

24 Joshua P. Darr, Matthew P. Hitt, and Johanna L. Dunaway, “Newspaper Closures Polarize Voting Behavior,” Journal of Communication 68, no. 6 (December 2018):

25 See Imelda Deinla, Gabrielle Ann S. Mendoza, Kier Jesse Ballar, and Jurel Yap, “The Link Between Fake News Susceptibility and Political Polarization of the Youth in the Philippines,” Ateneo School of Government, Working Paper no. 21-029, November 2021; and Mathias Osmundsen, Michael Bang Petersen, and Alexander Bor, “How Partisan Polarization Drives the Spread of Fake News,” Brookings Institution, May 13, 2021. In the United States, Yochai Benkler and others have argued that asymmetric polarization—with the right drifting from the center faster and farther than the left—has been driven at least in part by media dynamics. Specifically, right-leaning media across cable television, radio, and the internet are less connected to mainstream media than their left-leaning counterparts. See Yochai Benkler, Robert Faris, Hal Roberts, and Ethan Zuckerman, “Study: Breitbart-Led Right-Wing Media Ecosystem Altered Broader Media Agenda,” Columbia Journalism Review, March 3, 2017. Not all analysts believe polarization is a bad thing; for example, some argue that polarization provides voters more distinct choices and has led to increased political participation. Others have argued that polarization contributes to crisis flash points that disrupt a problematic status quo in ways that are ultimately healthy. Consider Jessica Rettig, “Why Political Polarization Might Be Good for America,” U.S. News, May 27, 2010; see also Peter T. Coleman, “The US Is Suffering From Toxic Polarization. That’s Arguably a Good Thing,” Scientific American, April 2, 2021. The United States is an international outlier on polarization. A review by Jennifer McCoy and Benjamin Press found that the United States is “the only advanced Western democracy to have faced such intense polarization for such an extended period.” Their study suggests grim outcomes from high levels of polarization.
McCoy and Press examine a sample of fifty-two democratic societies suffering “pernicious polarization,” defined “as the division of society into mutually distrustful political camps in which political identity becomes a social identity.” They find that half of the cases faced democratic erosion, and fewer than a fifth were able to sustain a decline in pernicious polarization. Jennifer McCoy and Benjamin Press, “What Happens When Democracies Become Perniciously Polarized?,” Carnegie Endowment for International Peace, January 18, 2022.

26 Stites, “Quarter of All U.S. Newspapers”; “Fiscal Year 2022 Operating Budget,” Corporation for Public Broadcasting, accessed January 27, 2023; and Anthony P. Carnevale and Emma Wenzinger, “Stop the Presses: Journalism Employment and Economic Value of 850 Journalism and Communication Programs,” Georgetown University Center on Education and Workforce, 2022.

27 Perry Bacon, Jr., “America Should Spend Billions to Revive Local News,” Washington Post, October 17, 2022,

28 “National Ecosystem Calculator,” Democracy Fund, accessed April 18, 2023.

29 See Brian Fung, “Meta Avoids Showdown Over News Content in US After Journalism Bargaining Bill Shelved,” CNN, December 7, 2022; Joshua Benton, “Don’t Expect McConnell’s Paradox to Help News Publishers Get Real Money Out of Google and Facebook,” Nieman Lab, January 8, 2020; Jeff Jarvis, “As Rupert Murdoch Works to Dismantle the Internet, Why Are Other Media Outlets Helping Him?,” Crikey, February 15, 2021; Josh Frydenberg, “Review of the News Media and Digital Platforms Mandatory Bargaining Code,” Australian Department of the Treasury, February 2022; and Anya Schiffrin, “Australia’s News Media Bargaining Code Pries $140 Million From Google and Facebook,” Poynter Institute, August 16, 2022.

30 Max Matza, “Google and Canada Reach Deal to Avert News Ban Over Online News Act,” BBC, November 29, 2023,; and Jaimie Ding, “California Bill Requiring Big Tech to Pay for News Placed on Hold Until 2024,” Los Angeles Times, July 7, 2023,

Case Study 2: Media Literacy Education

Key takeaways:

There is significant evidence that media literacy training can help people identify false stories and unreliable news sources. However, variation in pedagogical approaches means the effectiveness of one program does not necessarily imply the effectiveness of another. The most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information—instilling confidence and a sense of responsibility alongside skills development. While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Reaching large numbers of people, including those most susceptible to disinformation, is expensive and takes many years.

Key sources:

Description and Use Cases

Increasing individuals’ media literacy through education and training is one of the most frequently recommended countermeasures against disinformation.1 Proponents argue that “media literacy and critical thinking are the first barrier to deception” and that teaching people these skills therefore enables them to better identify false claims.2 The National Association for Media Literacy Education defines media literacy as “the ability to access, analyze, evaluate, create, and act using all forms of communication.” However, scholars point to conceptual confusion around the term, and practitioners take many different approaches.3 Common goals include instilling knowledge of the media industry and journalistic practices, awareness of media manipulation and disinformation techniques, and familiarity with the internet and digital technologies.

Media literacy education initiatives target a range of different audiences, occur in multiple settings, and use a variety of methods—including intensive classroom-based coursework as well as short online videos and games. Many programs focus on children and adolescents,4 with research suggesting that young people are less familiar with the workings of the internet and digital media and more susceptible to online hoaxes and propaganda than commonly assumed.5 For example, a 2016 study of over 7,800 students found that many failed to recognize sponsored content and untrustworthy websites in search results.6 Public education is therefore one major vehicle to reach large numbers of people early in their lives, alongside other kinds of youth programs. Aspects of media literacy have long been embedded in general education and liberal arts curricula in advanced democracies, especially in subjects that emphasize critical reading and thinking, such as language arts, essay writing, civics, and rhetoric. Public libraries have also historically promoted media literacy.

Not all media literacy programs target young people. After all, people don’t necessarily age out of their susceptibility to disinformation; in fact, older individuals seem more likely to share false stories on Facebook.7 Media literacy training for adults may happen at libraries, senior citizen centers, recreational events, or professional settings. Civil society and government agencies have also run public awareness campaigns and released gamified education tools. For example, Sweden established a Psychological Defence Agency in 2022. Its responsibilities include leading “training, exercises and knowledge development” to help residents “identify and counter foreign malign information influence, disinformation and other dissemination of misleading information directed at Sweden.”8

One valuable case study is the International Research and Exchanges Board (IREX)’s Learn to Discern program, which has used a “train the trainers” approach in Ukraine and a number of other countries since 2015. This program equips volunteers to deliver a media literacy curriculum to members of their community.9 Reaching more vulnerable adults (for example, racial and ethnic minorities and those with fewer economic resources, less education, or less experience with the internet) is a policy priority for governments focused on media literacy.10

How Much Do We Know?

The body of scholarship on media literacy is large relative to most other disinformation countermeasures. For example, a 2022 literature review on digital literacy—one component of media literacy—found forty-three English-language studies since 2001, with thirty-three of these published since 2017, when interest in the topic swelled.11 The existence of dedicated journals and conferences is another indicator of growth in this subfield. For example, the National Association for Media Literacy Education published the first issue of the Journal of Media Literacy Education in 2009.12 Other major repositories of research on media literacy include a database maintained by the United Nations Alliance of Civilizations.13

Review of this literature shows that specific media literacy approaches have a strong theoretical basis and a large body of experimental evidence. However, variation in pedagogical approaches means that the effectiveness of one program does not necessarily imply the effectiveness of another.14 Moreover, the lack of robust mechanisms for collecting data on classroom activities is a recognized gap. In 2018, the Media Literacy Programme Fund in the United Kingdom (considered a leader in media literacy education) cited grants to support evaluation as a priority.15 Since then, several studies have conducted real-time evaluation and sought to measure lasting improvements in student performance. Additional studies could expand the menu of possible approaches to evaluation; it would also be useful to further examine the effectiveness of media literacy training for atypical individuals at the extremes, such as those who are especially motivated by partisanship, conspiracy theories, or radical ideologies.

How Effective Does It Seem?

There is significant evidence that media literacy training can help people identify false stories and unreliable news sources.16 Scholars sometimes refer to this as inoculation, because “preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation.”17 One experiment found that playing an online browser game designed to expose players to six different disinformation strategies reduced subjects’ susceptibility to false claims, especially among those who were initially most vulnerable to being misled. Such laboratory findings are bolstered by studies of larger, real-world interventions. An evaluation of IREX’s Learn to Discern program found durable increases in good media consumption habits, such as checking multiple sources, lasting up to eighteen months after delivery of the training.18 Other studies support teaching students to read “laterally”—using additional, trusted sources to corroborate suspect information.19

Because media literacy comes in many forms, it is important to assess which variants are most effective at reducing belief in false stories so that trainers and educators can prioritize them. Research suggests that the most successful variants empower motivated individuals to take control of their media consumption and seek out high-quality information. This has been described as “actionable skepticism,” or sometimes simply as “information literacy.”20 For example, a 2019 study in American Behavioral Scientist examined various factors that might enable someone to recognize false news stories. The authors found that people’s “abilities to navigate and find information online that is verified and reliable”—for example, differentiating between an encyclopedia and a scientific journal—were an important predictor. In contrast, subjects’ understanding of the media industry and journalistic practices and their self-reported ability to “critically consume, question, and analyze information” were not predictive.21 Later research based on survey data also supported these findings.22

Importantly, multiple studies have shown that effective media literacy depends not only on people’s skills but also on their feelings and self-perceptions. Specifically, individuals who feel confident in their ability to find high-quality news sources, and who feel responsible for proactively doing so, are less likely to believe misleading claims. This factor is often called an individual’s “locus of control,” and it has been identified as important in studies of multiple nationally and demographically diverse populations.23 People who purposefully curate their information diet are less likely to be misled; passive consumers, on the other hand, are more vulnerable. However, this may be truer of typical news consumers than of outliers like extremists and very motivated partisans. The latter groups might self-report confidence in curating their media diet while nevertheless selecting for misleading, radical, or hyper-partisan sources.

A growing body of recent literature based on large-scale classroom studies shows how specific techniques can provide news consumers with greater agency and ability to seek out accurate information.24 Whereas past forms of online media literacy education often focused on identifying markers of suspicious websites—like typographical errors or other indicators of low quality—these signs are less useful in the modern information environment, where sources of misinformation can achieve the appearance of high production value at low cost.25 Recent studies have shown that lateral reading is more effective.26 In one study of students at a public college in the northeastern United States, only 12 percent of subjects used lateral reading before receiving training on how to do so; afterward, more than half did, and students showed an overall greater ability to discern true claims from fictional ones.27 A similar study of university students in California found these effects endured after five weeks.28 Another one-day exercise with American middle school students found that participants had a difficult time overcoming impressions formed from “superficial features” on websites; the authors concluded that students should be trained to recognize different types of information sources, question the motivation behind them, and—crucially—compare those sources with known trustworthy sites.29

Teaching people to recognize unreliable news sources and common media manipulation tactics becomes even more effective when participants are also able to improve their locus of control, according to academic research and program evaluations. In a study of media literacy among 500 teenagers, researchers found that students with higher locus of control were more resilient against false stories. In another study based on survey data, researchers found that individuals who exhibited high locus of control and the ability to identify false stories were more likely to take corrective action on social media, such as reporting to the platform or educating the poster.30 (The participatory nature of social media increases the importance of educating users not only on how to recognize untrustworthy content but also on how to respond to and avoid sharing it.31)

Evaluations of IREX’s Learn to Discern program in Ukraine and a similar program run by PEN America in the United States shed further light on locus of control. These curricula’s focus on identifying untrustworthy content led subjects to become overly skeptical of all media. While trainees’ ability to identify disinformation and their knowledge of the news media increased, their locus of control changed only slightly. Ultimately, trainees’ ability to identify accurate news stories did not improve, and they remained distrustful of the media as a whole.32 A major challenge, then, is news consumers who feel under threat from the information environment rather than empowered to inform themselves. One potential intervention point could be social media platforms, which can provide tools and make other design choices to help users compare on-platform information with credible external sources (see case study 4). This could reinforce users’ locus of control while assisting them in exercising it.

Educators should be mindful of media literacy expert Paul Mihailidis’s warning that “critical thought can quickly become cynical thought.”33 In a 2018 essay, media scholar danah boyd argued that individuals who are both cynical about institutions and equipped to critique them can become believers in, and advocates for, conspiracy theories and disinformation. To avoid this trap, media literacy education must be designed carefully. This means empowering people to engage with media critically, constructively, and discerningly rather than through the lenses of undifferentiated paranoia and distrust.34

How Easily Does It Scale?

While media literacy training shows promise, it faces challenges of speed, scale, and targeting. Many approaches will take years to reach large numbers of people, including many vulnerable and hard-to-reach populations. Attempts to achieve scale through faster, leaner approaches, like gamified online modules or community-based efforts to train the trainers, are voluntary by nature and therefore most likely to reach already motivated individuals rather than large percentages of the public.

Many media literacy projects are not particularly expensive to deliver to small audiences. However, achieving wide impact requires high-scale delivery, such as integrating media literacy into major institutions like public education—a costly proposition. When a proposed 2010 bill in the U.S. Congress, the Healthy Media for Youth Act, called for $40 million for youth media literacy initiatives, leading scholars deemed the amount insufficient and advocated for larger financial commitments from the government, foundations, and the private sector.35

Once the resources and curricula are in place, it will still take time to develop necessary infrastructure to implement large-scale media literacy programs. For example, hiring skilled educators is a critical yet difficult task. Studies from the European Union (EU) and South Africa both identified major deficiencies in teachers’ own abilities to define core media literacy concepts or practice those concepts themselves.36


1 For examples, see Lucas and Pomerantsev, “Winning the Information War”; Katarína Klingová and Daniel Milo, “Countering Information War Lessons Learned from NATO and Partner Countries: Recommendations and Conclusions,” GLOBSEC, February 2017; Claire Wardle and Hossein Derakhshan, “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making,” Council of Europe, September 2017; Daniel Fried and Alina Polyakova, “Democratic Defense Against Disinformation,” Atlantic Council, February 2018; “A Multi-Dimensional Approach,” European Commission; Erik Brattberg and Tim Maurer, “Russian Election Interference: Europe’s Counter to Fake News and Cyber Attacks,” Carnegie Endowment for International Peace, May 2018; “Action Plan Against Disinformation,” European Commission, May 2018; Fly, Rosenberger, and Salvo, “The ASD Policy Blueprint”; Jean-Baptiste Jeangène Vilmer, Alexandre Escorcia, Marine Guillaume, and Janaina Herrera, “Information Manipulation: A Challenge for Our Democracies,” French Ministry for Europe and Foreign Affairs and the Institute for Strategic Research, August 2018; Todd C. Helmus et al., “Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe,” RAND Corporation, 2018; and Paul Barrett, “Tackling Domestic Disinformation: What the Social Media Companies Need to Do,” New York University, March 2019.

2 Klingová and Milo, “Countering Information War.”

3 “Media Literacy Defined,” National Association for Media Literacy Education, accessed February 13, 2023. See also Monica Bulger and Patrick Davison, “The Promises, Challenges, and Futures of Media Literacy,” Data & Society, February 21, 2018; and Géraldine Wuyckens, Normand Landry, and Pierre Fastrez, “Untangling Media Literacy, Information Literacy, and Digital Literacy: A Systematic Meta-review of Core Concepts in Media Education,” Journal of Media Literacy Education 14, no. 1 (2022).

4 Renee Hobbs, “Digital and Media Literacy: A Plan of Action,” Aspen Institute, 2010; and “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport, July 2021.

5 Tiffany Hsu, “When Teens Find Misinformation, These Teachers Are Ready,” New York Times, September 8, 2022; “A Global Study on Information Literacy: Understanding Generational Behaviors and Concerns Around False and Misleading Information Online,” Poynter Institute, August 2022; and Elena-Alexandra Dumitru, “Testing Children and Adolescents’ Ability to Identify Fake News: A Combined Design of Quasi-Experiment and Group Discussions,” Societies 10, no. 3 (September 2020).

6 Sam Wineburg, Sarah McGrew, Joel Breakstone, and Teresa Ortega, “Evaluating Information: The Cornerstone of Civic Online Reasoning,” Stanford Digital Repository, November 22, 2016.

7 For evidence that older users are more likely to share false stories on Facebook, see Andrew Guess, Jonathan Nagler, and Joshua Tucker, “Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook,” Science Advances 5, no. 1 (2019).

8 Elisabeth Braw, “Create a Psychological Defence Agency to ‘Prebunk’ Fake News,” Prospect, December 8, 2022; and Adela Suliman, “Sweden Sets Up Psychological Defense Agency to Fight Fake News, Foreign Interference,” Washington Post, January 6, 2022.

9 Erin Murrock, Joy Amulya, Mehri Druckman, and Tetiana Liubyva, “Winning the War on State-Sponsored Propaganda: Gains in the Ability to Detect Disinformation a Year and a Half After Completing a Ukrainian News Media Literacy Program,” Journal of Media Literacy Education 10, no. 2 (2018).

10 “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport; and Kara Brisson-Boivin and Samantha McAleese, “From Access to Engagement: Building a Digital Media Literacy Strategy for Canada,” MediaSmarts, 2022.

11 Hasan Tinmaz, Yoo-Taek Lee, Mina Fanea-Ivanovici, and Hasnan Baber, “A Systematic Review on Digital Literacy,” Smart Learning Environments 9 (2022).

12 “History,” National Association for Media Literacy Education, accessed February 13, 2023.

13 “Media & Information Literacy,” UN Alliance of Civilizations, accessed March 26, 2023.

14 “Media & Information Literacy,” UN Alliance of Civilizations.

15 “Online Media Literacy Strategy,” UK Department for Digital, Culture, Media, & Sport; “Media Literacy Programme Fund,” Government of the United Kingdom, accessed March 26, 2023; and Bulger and Davison, “Promises, Challenges, and Futures.”

16 Consider Bulger and Davison, “Promises, Challenges, and Futures,” as well as Theodora Dame Adjin-Tettey, “Combating Fake News, Disinformation, and Misinformation: Experimental Evidence for Media Literacy Education,” Cogent Arts & Humanities 9 (2022).

17 Jon Roozenbeek and Sander van der Linden, “Fake News Game Confers Psychological Resistance Against Online Misinformation,” Palgrave Communications 5 (2019).

18 Murrock, Amulya, Druckman, and Liubyva, “Winning the War.”

19 Carl-Anton Werner Axelsson, Mona Guath, and Thomas Nygren, “Learning How to Separate Fake From Real News: Scalable Digital Tutorials Promoting Students’ Civic Online Reasoning,” Future Internet 13, no. 3 (2021).

20 Jennifer Fleming, “Media Literacy, News Literacy, or News Appreciation? A Case Study of the News Literacy Program at Stony Brook University,” Journalism & Mass Communication Educator 69, no. 2 (2013).

21 Because the measurement of media literacy was self-reported, the study posits this as an example of the “Dunning-Kruger effect”: an individual’s (over)confidence in their ability to critically consume media is related to their susceptibility to deception. See Mo Jones-Jang, Tara Mortensen, and Jingjing Liu, “Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t,” American Behavioral Scientist (August 2019).

22 Brigitte Huber, Porismita Borah, and Homero Gil de Zúñiga, “Taking Corrective Action When Exposed to Fake News: The Role of Fake News Literacy,” Journal of Media Literacy Education 14 (July 2022).

23 Murrock, Amulya, Druckman, and Liubyva, “Winning the War”; and “Impact Report: Evaluating PEN America's Media Literacy Program,” PEN America & Stanford Social Media Lab, September 2022. See also Yan Su, Danielle Ka Lai Lee, and Xizhu Xiao, “‘I Enjoy Thinking Critically, and I’m in Control’: Examining the Influences of Media Literacy Factors on Misperceptions Amidst the COVID-19 Infodemic,” Computers in Human Behavior 128 (2022), a study based on subjects in China. The similar findings across the United States, Ukraine, and China—despite significant differences in the three countries’ media systems and histories—are noteworthy.

24 See generally Folco Panizza et al., “Lateral Reading and Monetary Incentives to Spot Disinformation About Science,” Scientific Reports 12 (2022); Sam Wineburg et al., “Lateral Reading on the Open Internet: A District-Wide Field Study in High School Government Classes,” Journal of Educational Psychology 114, no. 5 (2022); and Joel Breakstone et al., “Lateral Reading: College Students Learn to Critically Evaluate Internet Sources in an Online Course,” Harvard Kennedy School Misinformation Review 2 (2021).

25 For more on this method, its success in classroom trials, and its departure from previous forms of media literacy education, see D. Pavlounis, J. Johnston, J. Brodsky, and P. Brooks, “The Digital Media Literacy Gap: How to Build Widespread Resilience to False and Misleading Information Using Evidence-Based Classroom Tools,” CIVIX Canada, November 2021.

26 Axelsson, Guath, and Nygren, “Learning How to Separate.”

27 Jessica E. Brodsky et al., “Associations Between Online Instruction in Lateral Reading Strategies and Fact-Checking COVID-19 News Among College Students,” AERA Open (2021).

28 Sarah McGrew, Mark Smith, Joel Breakstone, Teresa Ortega, and Sam Wineburg, “Improving University Students’ Web Savvy: An Intervention Study,” British Journal of Educational Psychology 89, no. 3 (September 2019).

29 Angela Kohnen, Gillian Mertens, and Shelby Boehm, “Can Middle Schoolers Learn to Read the Web Like Experts? Possibilities and Limits of a Strategy-Based Intervention,” Journal of Media Literacy Education 12, no. 2 (2020).

30 Adam Maksl, Seth Ashley, and Stephanie Craft, “Measuring News Media Literacy,” Journal of Media Literacy Education 6 (2015); and Huber, Borah, and Gil de Zúñiga, “Taking Corrective Action.”

31 Bulger and Davison, “Promises, Challenges, and Futures.”

32 Like Jones-Jang, Mortensen, and Liu, the authors of the IREX evaluation suggest that the “false sense of control” already felt by individuals who did not receive media literacy training may also partially explain the relatively small improvements in these subjects’ locus of control.

33 Paul Mihailidis, “Beyond Cynicism: Media Education and Civic Learning Outcomes in the University,” International Journal of Learning and Media 1, no. 3 (August 2009).

34 danah boyd, “You Think You Want Media Literacy… Do You?,” apophenia (blog), March 9, 2018.

35 Hobbs, “Digital and Media Literacy.”

36 Sandy Zinn, Christine Stilwell, and Ruth Hoskins, “Information Literacy Education in the South African Classroom: Reflections from Teachers’ Journals in the Western Cape Province,” Libri 66 (April 2016); and Maria Ranieri, Isabella Bruni, and Anne-Claire Orban de Xivry, “Teachers’ Professional Development on Digital and Media Literacy: Findings and Recommendations From a European Project,” Research on Education and Media 9, no. 2 (2017).

Case Study 3: Fact-Checking

Key takeaways:

A large body of research indicates that fact-checking can be an effective way to correct false beliefs about specific claims, especially for audiences that are not heavily invested in the partisan elements of the claims. However, influencing factual beliefs does not necessarily result in attitudinal or behavioral changes, such as reduced support for a deceitful politician or a baseless policy proposal. Moreover, the efficacy of fact-checking depends a great deal on contextual factors—such as wording, presentation, and source—that are not well understood. Even so, fact-checking seems unlikely to cause a backfire effect that leads people to double down on false beliefs. Fact-checkers face a structural disadvantage in that false claims can be created more cheaply and disseminated more quickly than corrective information; conceivably, technological innovations could help shift this balance.

Description and Use Cases

Fact-checking, in this report, refers broadly to the issuance of corrective information to debunk a false or misleading claim. A 2020 global survey by Carnegie identified 176 initiatives focused on fact-checking and journalism, while the Duke University Reporters’ Lab counted more than 400 active fact-checking efforts across more than 100 countries in 2023.1 These initiatives come in many different forms. They include dedicated, stand-alone organizations, such as Snopes, as well as fact-checkers integrated into newspapers and TV programs. Some prioritize political claims, like the Washington Post’s “Fact Checker” and the website PolitiFact. Others address health claims, like the CoronaVirusFacts/DatosCoronaVirus Alliance Database led by the International Fact-Checking Network at the Poynter Institute.2

Collaborative fact-checking models uniting the efforts of several organizations have also emerged, like Verificado 2018, an effort to collect rumors and disinformation circulating on WhatsApp during the 2018 Mexican elections and deliver corrections through private messaging.3 Projects like this attempt to quickly reach a large audience through a medium people already use. Other initiatives in multiple countries have attempted to crowdsource from citizen fact-checkers.

In recent years, some social media companies have highlighted fact-checks on their platforms and used the assessments of fact-checkers to inform other policy actions. For example, Meta’s third-party fact-checking program routes Facebook and Instagram posts that contain potential falsehoods to fact-checkers certified through the International Fact-Checking Network and applies a label if the posts are false or disputed.4 (For more on social media labeling, see case study 4.) Beyond social media, fact-checks can also be disseminated on dedicated websites or during televised political debates, among other possibilities.

How Much Do We Know?

Fact-checking is well-studied—markedly more so than other interventions. Nearly 200 articles related to fact-checking published since 2013 were reviewed for this case study. However, the strong empirical research base also reveals that fact-checking’s effectiveness depends on a complex interplay of factors that remain poorly understood. Research has only begun to probe the specific parameters that apparently affect fact-checking’s impact, such as format, language, and source. Additionally, much of the academic literature on fact-checking comes from laboratory studies based on unrepresentative samples of university students, or from online quizzes run on crowdsourcing platforms like Amazon’s Mechanical Turk—raising questions about the findings’ generalizability. Among other problems, the subjects of such studies may be more interested in and engaged with fact-checking content presented to them by experimenters than are members of the general public who encounter such content organically. More research evaluating the longitudinal impact of ongoing fact-checking efforts in a diverse set of real-time, real-world environments is still needed.

How Effective Does It Seem?

A number of studies suggest that it is easier to correct people’s false beliefs than to change the behaviors related to those beliefs. For example, international studies have shown fact-checks to have some success at changing beliefs about viral diseases, but they do not always lead to increased intent to receive vaccines or improved public health behaviors.5 This disconnect may be especially large for politically charged topics in divided societies. Fact-checking the claims of political figures has limited impact on voters’ support for a candidate or policy position—even when the voters can correctly reject false claims.6

In general, studies find strong evidence of confirmation bias: subjects are more susceptible to false claims that align with preexisting beliefs or allegiances and are more resistant to fact-checks associated with an opposing political party or its positions.7 In fact, research suggests that accuracy is not always a top-of-mind issue for news consumers. For example, one 2013 study suggested that individuals put more stock in the perceived trustworthiness (or sincerity) of a corrective source than in the source’s actual expertise on the relevant topic.8 In another study, right-leaning, U.S.-based participants who were asked to judge the validity of articles tended to provide “expressive” assessments—aimed more at demonstrating their partisan allegiance than at seriously evaluating a source’s credibility.9 To be sure, many studies of fact-checking and confirmation bias focus on U.S. audiences, where political polarization is especially strong.10 It is possible that partisan barriers to fact-checking are less present in more unified societies.11

Some research initially sparked concern that fact-checking might perversely cause audiences to double down on their false beliefs. The term “backfire effect” was coined to describe this behavior in a 2010 article by political scientists Brendan Nyhan and Jason Reifler and took root in American public consciousness after the 2016 U.S. presidential election.12 However, more recent research (including by Nyhan) suggests that backfiring may be a rare phenomenon.

The efficacy of fact-checks depends on many factors. The precise wording of fact-checks matters, with more straightforward refutations being more effective than nuanced explanations. Additionally, one 2015 study found that a fact-check that provides an alternative “causal explanation for an unexplained event is significantly more effective than a denial even when the denial is backed by unusually strong evidence.”13 In other words, replacing a false story with a true story works better than merely refuting the false story. However, many of these factors remain poorly understood; for example, research is inconclusive on whether fact-checks should repeat the false claim being debunked or avoid doing so.

The use of emotion and storytelling in fact-checks is another potentially important but under-researched area. One study found that “narrative correctives,” which embed fact-checks within an engaging story, can be effective—and stories that end on an emotional note, such as fear or anger, work better than those that do not.14 Another study suggested that anger and anxiety increase motivated reasoning and partisan reactions, although this did not seem to prevent fact-checks from influencing users.15

One of the most important outstanding research areas is the durability of fact-checks: how long is corrective information remembered and believed by the recipient? Studies have reached complicated or conflicting results. Some research, for example, has suggested that a recipient’s increase in knowledge of truthful information may last longer than any change in deeper beliefs or attitudes related to that knowledge.16 This finding highlights an important difference between informational knowledge and affective feeling—both of which influence people’s beliefs and behaviors. A 2015 study found evidence that misinformation affected the audience’s sentiment toward public figures even after false claims were immediately debunked.17

How Easily Does It Scale?

The large number of ongoing fact-checking efforts around the world indicates that this intervention can be undertaken at reasonable expense. Some efforts, such as those incorporated into for-profit journalistic enterprises, may even be self-sustaining—whether on their own or as part of a larger business model. Initiatives like the International Fact-Checking Network have received financial and other support from philanthropists, tech companies, and universities.

Fact-checking does face at least two scaling challenges. First, it often takes much more time and expertise to produce a fact-check than to generate the false content being debunked. So long as fact-checkers face this structural disadvantage, fact-checking cannot be a comprehensive solution to disinformation. Rather than scale up to match the full scope of false claims, fact-checkers must instead do triage. Second, fact-checks require distribution mechanisms capable of competing effectively with the spread of disinformation. This means finding ways to reach the audience segments most vulnerable to disinformation. The faster and the more frequent the fact-checks, the better. Ideally, fact-checking should occur before or at the same time as the false information is presented. But this is no easy task. Given the significant investments already being made to produce fact-checks, funders should ensure that distribution mechanisms are sufficient to fully leverage fact-checkers’ work.

Technological innovation may help to reduce the cost of producing high-quality fact-checks and enable their rapid dissemination. Crowdsourcing methods, such as Twitter’s Birdwatch (later renamed Community Notes on X), are one approach that merits further study.18 Others have begun to test whether generative AI can be used to perform fact-checks. While today’s generative AI tools are too unreliable to produce accurate fact-checks without human supervision, they may nevertheless assist human fact-checkers in certain research and verification tasks, lowering costs and increasing speed.19 Ultimately, both crowdsourcing and AI methods still depend on the availability of authoritative, discoverable facts by which claims can be assessed. Producing this factual baseline—whether through science, journalism, or other knowledge-seeking efforts—is an important part of the fact-checking cycle. This too requires funding.


1 Victoria Smith, “Mapping Worldwide Initiatives to Counter Influence Operations,” Carnegie Endowment for International Peace, December 14, 2020; and “Fact-Checking,” Duke Reporters’ Lab, accessed January 27, 2023.

2 “The CoronaVirusFacts/DatosCoronaVirus Alliance Database,” Poynter Institute, accessed December 10, 2023.

3 “Verificado 2018,” Online Journalism Awards, accessed December 10, 2023.

4 “About Fact-Checking on Facebook and Instagram,” Meta, accessed March 22, 2023.

5 John M. Carey et al., “The Effects of Corrective Information About Disease Epidemics and Outbreaks: Evidence From Zika and Yellow Fever in Brazil,” Science Advances 6 (2020); Jeremy Bowles, Horacio Larreguy, and Shelley Liu, “Countering Misinformation via WhatsApp: Preliminary Evidence From the COVID-19 Pandemic in Zimbabwe,” PLOS ONE 15 (2020); and Sara Pluviano, Sergio Della Sala, and Caroline Watt, “The Effects of Source Expertise and Trustworthiness on Recollection: The Case of Vaccine Misinformation,” Cognitive Processing 21 (2020).

6 See Brendan Nyhan, Ethan Porter, Jason Reifler, and Thomas Wood, “Taking Fact-Checks Literally but Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability,” Political Behavior 42 (2019):; Briony Swire-Thompson, Ullrich K. H. Ecker, Stephan Lewandowsky, and Adam J. Berinsky, “They Might Be a Liar But They’re My Liar: Source Evaluation and the Prevalence of Misinformation,” Political Psychology 41 (2020),; Oscar Barrera, Sergei Guriev, Emeric Henry, and Ekaterina Zhuravskaya, “Facts, Alternative Facts, and Fact Checking in Times of Post-Truth Politics,” Journal of Public Economics 182 (2017),; and Briony Swire, Adam J. Berinsky, Stephan Lewandowsky, and Ullrich K. H. Ecker, “Processing Political Misinformation: Comprehending the Trump Phenomenon,” Royal Society Open Science 4 (2017),

7 Antino Kim and Alan R. Dennis, “Says Who? The Effects of Presentation Format and Source Rating on Fake News in Social Media,” MIS Quarterly 43, no. 3 (2019):; Ethan Porter, Thomas J. Wood, and David Kirby, “Sex Trafficking, Russian Infiltration, Birth Certificates, and Pedophilia: A Survey Experiment Correcting Fake News,” Journal of Experimental Political Science 5, no. 2 (2018):; and Jeong-woo Jang, Eun-Ju Lee, and Soo Yun Shin, “What Debunking of Misinformation Does and Doesn’t,” Cyberpsychology, Behavior, and Social Networking 22, no. 6 (2019):

8 Jimmeka J. Guillory and Lisa Geraci “Correcting Erroneous Inferences in Memory: The Role of Source Credibility,” Journal of Applied Research in Memory and Cognition 2, no. 4 (2013):; and Pluviano and Della Sala, “Effects of Source Expertise.”

9 Maurice Jakesch, Moran Koren, Anna Evtushenko, and Mor Naaman, “The Role of Source, Headline and Expressive Responding in Political News Evaluation,” SSRN, January 31, 2019,

10 Consider Thomas Carothers and Andrew O’Donohue, “How Americans Were Driven to Extremes: In the United States, Polarization Runs Particularly Deep,” Foreign Affairs, September 25, 2019,

11 Consider Michael J. Aird, Ullrich K. H. Ecker, Briony Swire, Adam J. Berinsky, and Stephan Lewandowsky, “Does Truth Matter to Voters? The Effects of Correcting Political Misinformation in an Australian Sample,” Royal Society Open Science (2018),

12 The backfire effect was captured in Brendan Nyhan and Jason Reifler, “When Corrections Fail: The Persistence of Political Misperceptions,” Political Behavior 32 (2010): The popular online comic The Oatmeal featured commentary about the backfire effect, demonstrating its breakthrough into popular imagination; see “Believe,” The Oatmeal, accessed January 27, 2023, However, other studies have since called the effect into question. See Thomas Wood and Ethan Porter, “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence,” Political Behavior 41 (2019):; see also Kathryn Haglin, “The Limitations of the Backfire Effect,” Research & Politics 4 (2017):; and Brendan Nyhan, “Why the Backfire Effect Does Not Explain the Durability of Political Misperceptions,” PNAS 118 (2020):

13 Brendan Nyhan and Jason Reifler, “Displacing Misinformation About Events: An Experimental Test of Causal Corrections,” Journal of Experimental Political Science 2 (2015):

14 Angeline Sangalang, Yotam Ophir, and Joseph N. Cappella, “The Potential for Narrative Correctives to Combat Misinformation,” Journal of Communication 69, no. 3 (2019):

15 Brian E. Weeks, “Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation,” Journal of Communication 65, no. 4 (2015):

16 Ethan Porter and Thomas Wood, “The Global Effectiveness of Fact-Checking: Evidence From Simultaneous Experiments in Argentina, Nigeria, South Africa, and the United Kingdom,” PNAS 118, no. 37 (2021):; see also John M. Carey et al., “The Ephemeral Effects of Fact-Checks on COVID-19 Misperceptions in the United States, Great Britain and Canada,” Nature Human Behaviour 6 (2022),; and Patrick R. Rich and Maria S. Zaragoza, “Correcting Misinformation in News Stories: An Investigation of Correction Timing and Correction Durability,” Journal of Applied Research in Memory and Cognition 9, no. 3 (2020):

17 Emily Thorson, “Belief Echoes: The Persistent Effects of Corrected Misinformation,” Political Communication 33, no. 3 (2015):

18 Consider Mevan Babakar, “Crowdsourced Factchecking: A Pie in The Sky?” European Journalism Observatory, June 1, 2018, Studies suggest interventions from users can be as or more effective than interventions from experts: consider Leticia Bode and Emily K. Vraga, “See Something, Say Something: Correction of Global Health Misinformation on Social Media,” Health Communication 33, no. 9 (2018):; and Jonas Colliander, “This is Fake News: Investigating the Role of Conformity to Other Users’ Views When Commenting on and Spreading Disinformation in Social Media,” Computers in Human Behavior 97 (August 2019):

19 Sam Guzik, “AI Will Start Fact-Checking. We May Not Like the Results,” Nieman Lab, December 2022,; and Grace Abels, “Can ChatGPT Fact-Check? We Tested,” Nieman Lab, May 31, 2023,

Case Study 4: Labeling Social Media Content

Key takeaways:

There is a good body of evidence that labeling false or untrustworthy content with additional context can make users less likely to believe and share it. Large, assertive, and disruptive labels are the most effective, while cautious and generic labels often do not work. Reminders that nudge users to consider accuracy before resharing show promise, as do efforts to label news outlets with credibility scores. Different audiences may react differently to labels, and there are risks that remain poorly understood: labels can sometimes cause users to become either overly credulous or overly skeptical of unlabeled content, for example. Major social media platforms have embraced labels to a large degree, but further scale-up may require better information-sharing or new technologies that combine human judgment with algorithmic efficiency.

Key sources:

Description and Use Cases

Social media companies are increasingly applying labels to content on their platforms, some of which aim to help users assess whether information is trustworthy. In this report, “labeling” refers to the insertion of relevant context or advisories to inform or influence how content is viewed, though without directly fact-checking it. (For more on fact-checking, see case study 3.)

Labels can be applied to a social media account (for example, identifying it as state-sponsored media or satirical) or to individual posts. When a post links to another source, such as an external website, that source can be labeled (as with so-called nutrition labels that score news outlets by their adherence to journalistic practices). Alternatively, specific content or claims can be labeled—as disputed, potentially outdated, or fast-developing, for instance. Some labels are prominent, use firm language, and require a user to click before seeing or interacting with the content. Other labels are small, discreet, and neutrally worded.

Labels can be positive, like a digital signature that verifies video as authentic or a “verified” badge that purports to confirm an account’s identity. Other labels do not seek to inform users, per se, but rather admonish or “nudge” them to follow good information practices. For example, a user seeking to reshare an article may encounter a message that encourages them to first read the article and/or consider its accuracy; such “friction” in user interfaces seeks to promote more deliberate, reflective behavior. Additionally, many common platform design features can loosely be understood as labels. For example, platforms often display engagement data—such as the number of likes, shares, or views—alongside content. This data can influence users’ perceptions of the content’s accuracy and importance.1

Facebook was among the first platforms to label misleading content after public concern about so-called fake news and its influence on the 2016 U.S. presidential election.2 Other platforms, including Twitter (now X) and YouTube, have also implemented labels of various kinds—often spurred by major events such as the 2020 U.S. presidential election and the COVID-19 pandemic.

How Much Do We Know?

The academic literature on labeling is smaller than that on fact-checking but still large compared to other interventions. Social media companies began employing labels in earnest only in 2019, according to a Carnegie database.3 Independent studies of social media labels face methodological challenges due to researchers’ lack of access to private platform data on how users react to labels, though internal company research occasionally reaches the public domain through leaks, government inquiries, investigative journalism, or voluntary (if selective) disclosure. Laboratory experiments can be helpful, but they do not fully simulate key aspects of real-life social media usage.

How Effective Does It Seem?

Evidence suggests that large, prominent, and strongly worded labels can sometimes inhibit belief in and spread of false claims. However, other labels appear less effective. For example, studies show that labels which visually stand apart from the adjoining content are more effective than those that blend in. Similarly, labels that deliver a clear warning—for example, by pointing out that the content has previously appeared on an unreliable rumor site—are more effective than those that merely note a claim is “disputed.”4

Some internal research by platforms has also indicated that neutrally worded labels may be ineffective and can even lead users to gradually tune them out. During the COVID-19 pandemic, Facebook relied on independent fact-checkers to determine whether COVID-19 content was false or misleading; debunked content would then be labeled as such and algorithmically demoted. But “fact-checkers were unable to review an overwhelming majority of the content in their queue” because of resource limitations, so Facebook also applied neutral labels en masse to all other COVID-19 content. These labels provided context—“COVID-19 vaccines go through many tests for safety and effectiveness and are then monitored closely”—along with a link to authoritative information. According to Meta’s Oversight Board, however, “initial research showed that these labels may have [had] no effect on user knowledge and vaccine attitudes” and “no detectable effect on users’ likelihood to read, create or re-share” false claims.5 Facebook reduced and ultimately rolled back these labels after finding that users became less likely to click through to the information page after repeated label exposure.


Source ratings, whether provided by fact-checkers or by other users, have been shown to reduce engagement with low-scoring articles. Specifically, labels that score a news source’s credibility can influence users’ willingness to like, comment on, or share posts containing links to news articles. This is a promising finding for projects like NewsGuard, which ranks news sites on a 100-point rubric based on best practices for credible and transparent journalism.6 However, empirical studies of NewsGuard have had mixed results. A 2022 study found, on the one hand, that exposure to labels did “not measurably improve news diet quality or reduce misperceptions, on average, among the general population.” On the other hand, there was also “suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation.”7 This split finding may be counted as a success or a failure depending on the specific problem such labels are intended to address.

Recent research suggests that labels containing accuracy nudges, which simply encourage users to consider accuracy before sharing content, are particularly promising. This is perhaps surprising, as one might assume that social media users already seek to consume and share what they deem to be accurate information. Yet studies have highlighted a range of other motives—such as amusement and partisan signaling—that often influence user behavior.8 Despite these psychological tendencies, research suggests that most users nevertheless value accuracy and that labels reminding them to consider accuracy make them less likely to share misinformation.9 In fact, such labels can reduce—though not eliminate—subjects’ inclination to believe and share false stories that align with their political beliefs.10

Regardless of how labels are designed and implemented, the nature of the content or speaker being labeled can also influence user response. For example, New York University’s Center for Social Media and Politics found that during the 2020 election, tweets by then U.S. president Donald Trump that were labeled as disputed spread further than those without a label.11 This was not true for other politicians’ accounts in the sample, suggesting that labels on posts by extremely prominent individuals may perform differently from other labels. Additional research on this topic—for example, exploring figures other than Trump and metrics beyond the spread of a post—would be valuable, because extremely prominent individuals are often responsible for a disproportionate amount of disinformation.

Like other interventions, labeling can sometimes have perverse effects. Several studies found evidence that labeling some articles as false or misleading led users to become more credulous toward the remaining unlabeled headlines.12 Researchers call this the “implied truth effect,” because users who become accustomed to seeing labels on some content may mistakenly assume that other content has also been vetted. Such a perverse effect, if prevalent, could have significant consequences: labeling efforts often have limited scope and therefore leave the vast majority of content unlabeled.

Paradoxically, there is also some evidence of an opposite dynamic: fact-checks or warning labels can sometimes increase overall audience skepticism, including distrust of articles that did not receive any rating.13 This might be called an “implied falsity effect.” Little is known about either effect and why one, or both, of them may be present under varying circumstances. It is possible that geographical, topical, or other unidentified factors may influence the effectiveness of labels and the risk of unintended consequences.14 Moreover, different audiences can respond differently to the same label.

Finally, it is worth remembering that labels explicitly focused on truth or reliability are not the only ways that platform interfaces actively shape how users perceive social media content. One study found that labeling posts with engagement metrics—such as number of “likes”—makes people more likely to share low-credibility, high-engagement posts.15 Researchers should continue to explore the influence of general user interface design on disinformation, including whether and how common design elements may alter the efficacy of interventions like labeling.

How Easily Does It Scale?

Major platforms’ embrace of labels has shown that they can be scaled to a significant degree. Labeling, in its various forms, has emerged as the dominant way that social media companies adjust their platforms’ design and functionality to counter disinformation and other kinds of influence operations. A Carnegie database of interventions announced by major platforms between 2014 and 2021 found a surge in labeling and redirection (a related measure) since 2019, with 77 of 104 total platform interventions falling into these two categories.16 Labels offer platforms a way of addressing disinformation without flatly banning or demoting content, actions that impinge more on users’ freedoms and tend to inspire stronger backlash. As a result of platforms’ experimentation with labels, technical barriers—such as load latency, user friction, and so forth—have been addressed or managed.

However, labeling still carries some of the scaling limitations of fact-checks. Meta’s experience with labeling COVID-19 information illustrates one of the choices facing platforms. They can rely on humans to apply more specific, opinionated, and ultimately effective labels to a smaller amount of content, or they can have algorithms automatically label more content with comparatively cautious, generic labels that tend to be less effective and are sometimes counterproductive. Technological innovations could help to further combine both techniques, as better algorithms do more labeling work under human supervision and/or empower humans to label more efficiently. Such innovations would test platforms’ willingness to apply strong labels to large amounts of content, potentially angering users who disagree with the labels. Future studies can continue to examine the specifics of labels and probe the platforms’ processes for applying them.17

The increasing number of platforms presents another scaling challenge. Content that is labeled on one platform may not be labeled on another. While some platforms shun labels based on an overall strategy of minimal content moderation, other platforms lack sufficient resources or simply haven’t faced the same public pressure as larger companies to confront disinformation. Outside organizations could explore whether prodding smaller platforms and offering them resources—such as technology, data, and best practices—might encourage more labeling.


1 Mihai Avram, Nicholas Micallef, Sameer Patil, and Filippo Menczer, “Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation,” Harvard Kennedy School Misinformation Review 1 (2020).

2 Brian Stelter, “Facebook to Start Putting Warning Labels on ‘Fake News’,” CNN, December 15, 2016.

3 For data on the rise of labeling and redirection, see Kamya Yadav, “Platform Interventions: How Social Media Counters Influence Operations,” Carnegie Endowment for International Peace, January 25, 2021.

4 Björn Ross, Anna-Katharina Jung, Jennifer Heisel, and Stefan Stieglitz, “Fake News on Social Media: The (In)Effectiveness of Warning Messages” (paper presented at the Thirty-Ninth International Conference on Information Systems, San Francisco, 2018).

5 “Policy Advisory Opinion 2022-01, Removal of COVID-19 Misinformation,” Oversight Board, April 2023.

6 “Rating Process and Criteria,” NewsGuard, accessed February 7, 2023.

7 Kevin Aslett et al., “News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions,” Science Advances 8, no. 18 (2022).

8 Alexander Bor et al., “‘Fact-Checking’ Videos Reduce Belief in, but Not the Sharing of Fake News on Twitter,” PsyArXiv, April 11, 2020.

9 Gordon Pennycook et al., “Shifting Attention to Accuracy Can Reduce Misinformation Online,” Nature 592 (2021); and Gordon Pennycook et al., “Fighting COVID-19 Misinformation on Social Media: Experimental Evidence for a Scalable Accuracy-Nudge Intervention,” Psychological Science 31, no. 7 (2020).

10 Timo K. Koch, Lena Frischlich, and Eva Lermer, “Effects of Fact-Checking Warning Labels and Social Endorsement Cues on Climate Change Fake News Credibility and Engagement on Social Media,” Journal of Applied Social Psychology 53, no. 3 (June 2023); and Megan Duncan, “What’s in a Label? Negative Credibility Labels in Partisan News,” Journalism & Mass Communication Quarterly 99, no. 2 (2020).

11 Megan A. Brown et al., “Twitter Put Warning Labels on Hundreds of Thousands of Tweets. Our Research Examined Which Worked Best,” Washington Post, December 9, 2020.

12 Gordon Pennycook, Adam Bear, Evan T. Collins, and David G. Rand, “The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings,” Management Science 66, no. 11 (November 2020).

13 Antino Kim, Patricia L. Moravec, and Alan R. Dennis, “Combating Fake News on Social Media With Source Ratings: The Effects of User and Expert Reputation Ratings,” Journal of Management Information Systems 36, no. 3 (2019).

14 Jan Kirchner and Christian Reuter, “Countering Fake News: A Comparison of Possible Solutions Regarding User Acceptance and Effectiveness,” Proceedings of the ACM on Human-Computer Interaction 4 (October 2020); and Ciarra N. Smith and Holli H. Seitz, “Correcting Misinformation About Neuroscience via Social Media,” Science Communication 41, no. 6 (2019).

15 Avram, Micallef, Patil, and Menczer, “Exposure to Social Engagement Metrics.”

16 Yadav, “Platform Interventions.”

17 For an example of research that can be conducted with this data, see Samantha Bradshaw and Shelby Grossman, “Were Facebook and Twitter Consistent in Labeling Misleading Posts During the 2020 Election?,” Lawfare, August 7, 2022.

Case Study 5: Counter-messaging Strategies

Key takeaways:

There is strong evidence that truthful communications campaigns designed to engage people on a narrative and psychological level are more effective than facts alone. By targeting the deeper feelings and ideas that make false claims appealing, counter-messaging strategies have the potential to impact harder-to-reach audiences. Yet success depends on the complex interplay of many inscrutable factors. The best campaigns use careful audience analysis to select the most resonant messengers, mediums, themes, and styles—but this is a costly process whose success is hard to measure. Promising techniques include communicating respect and empathy, appealing to prosocial values, and giving the audience a sense of agency.

Key sources:

Description and Use Cases

Counter-messaging, in this report, refers to truthful communications campaigns designed to compete with disinformation at a narrative and psychological level instead of relying solely on the presentation of facts. Counter-messaging is premised on the notion that evidence and logic aren’t the only, or even the primary, bases of what people believe. Rather, research has shown that people more readily accept claims which jibe with their preexisting worldviews and accepted stories about how the world works, especially if framed in moral or emotional terms.1 Moreover, claims are more persuasive when the messenger is a trusted in-group member who appears to respect the audience members and have their best interests at heart. While such factors often facilitate the spread of disinformation, counter-messaging campaigns seek to leverage them in service of truthful ideas.

In a sense, counter-messaging is no different from ordinary political communication, which routinely uses narratives, emotion, and surrogate messengers to persuade. But counter-messaging is sometimes implemented with the specific goal of countering disinformation—often because purely rational appeals, like fact-checking, seem not to reach or have much impact on hard-core believers of false claims. By changing the narrative frame around an issue and speaking in ways designed to resonate, counter-messaging aims to make audiences more open to facts and less ready to accept sensational falsehoods.

One example comes from Poland, where xenophobia toward migrants from the Middle East during the Syrian civil war was fueled in part by false stories of disease and criminality.2 A Polish counter-messaging campaign called Our Daily Bread featured a video of refugees and other marginalized people baking bread, a cherished Polish activity. Rather than presenting facts and evidence about the impact of migration on Polish society or refuting false stories about migrants, the video instead used personal vignettes, evocative imagery, and unifying words. The video attracted significant media attention and was viewed more than 1 million times in the first day after its release.3 Similarly, many efforts to promote COVID-19 vaccines and counter disinformation about them employed themes of personal responsibility. Other such efforts focused on recruiting local doctors as messengers, based on the premise that many people trust their family doctors more than national authorities.4 Vaccine-related public messaging campaigns also partnered with Christian, Jewish, and Muslim faith leaders to reach religious communities in Israel, the United Kingdom, and the United States.5

As these examples indicate, counter-messaging is not always exclusively aimed at countering false claims; other common objectives include promoting desirable behaviors, bolstering social cohesion, and rallying support for government policies. Many initiatives have sought specifically to thwart terrorist recruitment under the banner of “countering violent extremism” and “deradicalization.” For example, the Redirect Method developed by Jigsaw and Moonshot used digital advertising to steer individuals searching for extremist content toward “constructive alternate messages.”6 Other approaches have used one-on-one online conversations or in-person mentorship relationships to dissuade those showing interest in extremism.7 While many of these efforts were designed to address Islamic extremists, they have also been applied to White supremacist and other hate groups.

How Much Do We Know?

For decades, disciplines such as social psychology, political science, communications, advertising, and media studies have researched issues relevant to counter-messaging. Fields that have themselves been subject to persistent disinformation—such as public health and climate science—have also devoted a great deal of attention to counter-messaging in recent years. Efforts to study and suppress hate and extremist groups are particularly relevant, because such groups often employ disinformation.8 Nevertheless, these bodies of knowledge, though replete with useful insights, have generally not used disinformation as their primary frame for evaluating the efficacy of counter-messaging. This leaves us to rely on analogies and parallels rather than direct evidence.

The relevant literature highlights how hard it is to assess the impact of any form of persuasion. For example, many studies of COVID-19-related counter-messages measured changes in subjects’ reported attitudes or beliefs but were unable to verify whether those shifts persisted or led to behavioral changes.9 Studies based on surveys or laboratory experiments are common, but these do not fully capture how audiences react in more natural settings. In the field of countering violent extremism, practitioners report lacking the expertise or resources to evaluate the impact of their work beyond social media engagement metrics and gut instinct.10 A review of online counter-extremism interventions similarly found that “virtually all” of the evaluations it examined measured processes, such as social media engagement, rather than outcomes. The review offered several proposals for more impact-based assessment, such as including calls to action (for example, contacting a hotline) whose uptake can be measured as a behavioral outcome.11

How Effective Does It Seem?

The core insight of counter-messaging—that communications tailored to the narrative and psychological needs of a specific audience are more effective than generic, purely fact-based approaches—is well-established.12 Beyond this basic premise, however, it is difficult to generalize about counter-messaging because of the intervention’s breadth, diversity, and overlap with ordinary politics. Some forms seem capable of affecting individuals’ beliefs and, more rarely, influencing the behaviors informed by those beliefs. Yet success may often depend on the interplay of a large number of factors that can be difficult to discern or control. A granular understanding of the audience should, in theory, enable the selection of mediums, messengers, messages, styles, and tones most likely to resonate with them.13 In practice, developing this audience understanding is a difficult task and determining the best communication approaches is an evolving science at best.


One theme that emerges from many assessments of counter-messaging, including public health and counter-extremism interventions, is the importance of communicating respect and empathy. People are often put off by the sense that they are being debated or chastised.14 For example, counselors working with White supremacists had the most success in changing subjects’ views through sustained dialogue that avoided moral judgment.15 Encouraging empathy toward others, such as religious minorities or immigrants, can also be effective; one study found that such messages make individuals more likely to delete their previous hate speech and less likely to use hate speech again in the future.16 Similar efforts may be useful in reaching the so-called moveable middle, such as social media spectators who do not spread hateful content or false information themselves but are open to persuasion in either direction. For example, a study on anti-Roma hate speech in Slovakia found that more users left pro-Roma comments on anti-Roma posts after researchers intervened with counter-speech.17

Other studies have explored how moral and emotional framings affect audiences, including their perceptions of what is true. Studies of climate change skepticism found that the most effective messages for countering misinformation offer individuals the sense that they can take meaningful action, as opposed to messages that portray the world as doomed.18 A review of public health messaging found some audience segments were moved more by calls to protect themselves or loved ones than by appeals to social responsibility.19

The speaker of the counter-message seems to be quite important. Studies in the rural United States found that friends and family members, community organizations, religious leaders, and medical professionals were the most effective messengers in responding to COVID-19 rumors. In India, health professionals and peers were found to be the most trusted.20 Given the influence of informal messengers like social peers, analysts have considered the possibility of using them for official objectives.21 Volunteer groups countering disinformation, such as the Lithuanian Elves or the North Atlantic Fella Organization, can bring scale, authenticity, and creativity—traits that official efforts often lack.22 Likewise, organic content used to rebut extremist claims and narratives appears more persuasive than government-created content.

There is a risk that poorly designed counter-messaging campaigns can entrench or elevate the very views being rebutted.23 A U.S. Department of State campaign called Think Again, Turn Away illustrates this problem. The anti–Islamic State campaign, launched in 2013, engaged directly with extremists on Twitter but was ultimately deemed counterproductive. Its graphic content and combative tone increased the visibility of Islamic State accounts that replied to the campaign’s posts with anti-U.S. rhetoric, while forcing the State Department to engage on unflattering topics like the torture of Iraqi prisoners at the Abu Ghraib prison.24 Critics have claimed that Think Again, Turn Away was not focused on the drivers of online extremism and was too clearly affiliated with the U.S. government to serve as a credible messenger. These shortcomings point to the complexities of effective counter-messaging and the need to carefully think through message control, effective messengers, appropriate mediums, and characteristics of the target audience.

How Easily Does It Scale?

Counter-messaging faces implementation challenges due to its often reactive nature. Campaigns frequently arise in response to a belated recognition that disinformation narratives have already grown in strength and impact. Such narratives may have roots going back years, decades, or longer, and their adherents can build up psychological investments over a lifetime. The narratives underpinning disinformation also often evoke powerful emotions, like fear, which can be difficult to defuse once activated.25 To mitigate disinformation’s first-mover advantages, counter-messengers can try to anticipate such narratives before they spread—for example, predicting attacks on mail-in voting during the 2020 U.S. election—but this is not always feasible.

The need to tailor counter-messaging to a specific audience and context makes scaling more difficult. Reaching large audiences may require breaking them into identifiable subpopulations, each requiring its own research, message development, and distinct or even competing strategies. Opting instead for a more generic, large-scale campaign risks undercutting much of the specificity associated with effective counter-messaging. Moreover, broad campaigns increase the odds of misfires, such as the use of messages or messengers that persuade one audience while making another audience double down on its initial beliefs. Elevating rumors or extremist viewpoints is a particular concern. When a concerning narrative is not yet widespread, campaigners may want to pair strategic silence on the national stage with more discreet messaging that targets the specific populations most likely to encounter the narrative.26 When the narrative at issue has already become popular, a broad counter-messaging strategy may be appropriate. New digital technologies have the potential to make counter-messaging cheaper and easier to scale, just as innovation can aid in spreading disinformation.

Given the costs of effective counter-messaging at scale, many campaigns seem only modestly funded. The State Department’s now-shuttered Center for Strategic Counterterrorism Communications spent only $6 million on digital outreach in 2012, the year before it launched Think Again, Turn Away.27 The center’s successor entity, the Global Engagement Center, had a budget of more than $74 million in 2020.28 Australia’s COVID-19 vaccine awareness campaign—which included multiple mediums and consultants for outreach to specific vulnerable communities—cost about $24 million.29 For comparison, major brands spend far more on advertising (about 10 percent of total revenue, according to one survey).30 Volunteer-driven efforts, like the North Atlantic Fella Organization, may be appealing partners for external funders due to their low cost and high authenticity. However, overt official support for such activities can diminish their credibility. Extremism scholar Benjamin Lee suggests that looser relationships involving “provision of tools and training” might mitigate this risk.31
