Measuring the Efficacy of Influence Operations Countermeasures: Key Findings and Gaps From Empirical Research
Research shows that fact-checking can reduce the harmful impacts of false information. But beyond that, we know relatively little about the efficacy of counter-influence measures being implemented or considered by platforms, governments, and civil society.

By Jon Bateman, Elonnai Hickok, Jacob N. Shapiro, Laura Courchesne, Julia Ilhardt
Published on Sep 21, 2021
Over the past few years, institutions around the world have been scrambling to do more against malign influence operations. Major social media platforms have announced expansions of content moderation efforts, more takedowns of harmful influence campaigns, and many new product features designed to counter influence operations.1 Governments have proposed, amended, or implemented dozens of new laws addressing misinformation.2 And hundreds of civil society organizations are now dedicated to addressing influence operations.3 Yet the efficacy of these efforts remains unclear. We know relatively little about what kinds of countermeasures actually work to prevent influence operations, limit their spread, or curb their harmful impacts.

There is a dearth of rigorous, independent empirical research on influence operations countermeasures. Social media platforms rarely reveal the results of their internal studies, while academic research remains sparse, scattered across disciplines, and not synthesized for policymakers.4 Policymakers are therefore left to make most decisions based on anecdote or intuition, and the resulting policies are more likely to be ineffective, costly, or counterproductive than those grounded in evidence.

To assess what is known about countermeasure efficacy and to identify remaining gaps, the Partnership for Countering Influence Operations commissioned Princeton University’s Empirical Studies of Conflict Project to carry out a systematic review of studies. Laura Courchesne, Julia Ilhardt, and Jacob N. Shapiro sought out academic studies that (1) examined a specific group of people who viewed real or experimental influence operations, (2) compared measurable outcomes (behaviors or beliefs) of subjects exposed to a countermeasure versus those who were not, (3) met minimum standards of statistical credibility, and (4) had relevance for real-world policy. They identified 223 studies published since 1972 that met all four criteria. The research is presented in the article “Review of Social Science Research on the Impact of Countermeasures Against Influence Operations,” published in the September 2021 issue of Misinformation Review.5
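The four screening criteria above can be sketched as a simple filter. This is an illustrative sketch only: the field names below are hypothetical placeholders, not the coding scheme actually used by the Princeton reviewers.

```python
# Illustrative sketch of the review's four inclusion criteria as a filter.
# Field names are hypothetical, not the reviewers' actual coding scheme.

def meets_inclusion_criteria(study: dict) -> bool:
    """A study enters the dataset only if it satisfies all four criteria."""
    return (
        study["examined_exposed_group"]      # (1) a specific group viewed real or experimental influence operations
        and study["compared_outcomes"]       # (2) measurable outcomes compared for exposed vs. unexposed subjects
        and study["statistically_credible"]  # (3) minimum standards of statistical credibility
        and study["policy_relevant"]         # (4) relevance for real-world policy
    )

candidates = [
    {"examined_exposed_group": True, "compared_outcomes": True,
     "statistically_credible": True, "policy_relevant": True},
    {"examined_exposed_group": True, "compared_outcomes": False,
     "statistically_credible": True, "policy_relevant": True},
]
included = [s for s in candidates if meets_inclusion_criteria(s)]
print(len(included))  # 1 — only the study meeting all four criteria survives
```

In the actual review, this kind of screening narrowed the literature to 223 studies published since 1972.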

The review confirmed the value of fact-checking. But it also highlighted enormous gaps in empirical knowledge about the most widely used and frequently proposed kinds of countermeasures.

Key Insights

Fact-Checking

The vast majority of studies in this dataset focused on various forms of fact-checking that occur in close proximity to the influence operation itself. Examples include a news article that reports but then refutes a politician’s false claim or a social media platform that adds source labels to help users better evaluate misleading content. Overall, the literature suggests that fact-checking can reduce the impact of false information on individuals’ beliefs as well as their propensity to share dis/misinformation with others.

There is also promising, though less conclusive, evidence on the efficacy of countermeasures that are similar to fact-checking. These include “prebunking” (in other words, preemptively refuting weakened versions of misinformation narratives), media literacy prompts (for example, encouraging people to think about accuracy), and crowdsourcing the identification of misinformation.

Design Factors

That said, fact-checking and related countermeasures are not all equal. Specific design choices appear to play a significant role in their efficacy. For example, one study found that video fact-checks were more effective than long-form article fact-checks.6 Another found that providing information on the trustworthiness of sources made refutations of misinformation more effective.7

Fact-checks appear to work best when they are both prominent and precise. YouTube’s state media label became more effective when the color was changed to better stand out from the background.8 Meanwhile, several studies suggest a “tainted truth” effect: if warnings about misinformation are themselves overstated, then people may reject even accurate warnings in the future and become more distrusting in general.

While these studies seem to offer several clear policy lessons, it is difficult to generalize from a small body of research. To fully assess efficacy, specific fact-checking efforts and similar countermeasures should be tested in their unique contexts whenever possible.

Key Gaps

Despite the encouraging data on fact-checking, this review indicated that we know little about countermeasures overall. Most key countermeasures have yet to be studied in a rigorous way. The high-quality studies that do exist have significant methodological limitations that reduce their relevance to current policy debates.

Substantive Gaps

There is virtually no highly credible research on many of the policies most frequently proposed by experts or implemented by platforms, governments, or civil society. Understudied policy areas include the following:

  • Deterring or disrupting bad actors—for example, deplatforming, takedowns, sanctions, indictments, or public attributions.
  • Enhancing content moderation—for example, widening the scope of community standards prohibitions or adding more human or artificial intelligence enforcement capability.
  • Adjusting recommendation algorithms—for example, suppressing sensational content or giving users more choice over their algorithm.
  • Limiting microtargeting—for example, improving users’ data privacy or restricting advertisers’ microtargeting options.
  • Building societal trust—for example, strengthening journalistic institutions, incorporating media literacy into educational curricula, or bolstering confidence in election processes.
  • Altering incentives—for example, taking antitrust enforcement actions against platforms or demonetizing bad actors on platforms.
  • Informing policymakers—for example, expanding data and information sharing, improving research, or creating more international coordination.

Additionally, no study in our dataset examined the efficacy of redirection, an important countermeasure similar to fact-checking.9 Redirection occurs when platforms invite users to access authoritative content in another location, either on or off the platform. (Unlike labeling, redirection requires the user to click through to the corrected content.) Redirection has become by far the most common product feature that platforms use to combat influence operations.10

Methodological Gaps

Several aspects of these studies raise questions about their applicability to real-world situations. Most studies we reviewed (194 out of 223, or 87 percent) involved survey experiments, lab experiments, or simulated social media environments, coded in our dataset as "experimental" or "simulated social media," rather than observation of real platforms. Additionally, only a small fraction of studies examined how countermeasures mitigated the impact of misinformation on actual off-line behavior (6 percent) and/or online behavior (2 percent). The vast majority of studies looked instead at how countermeasures affect people's beliefs, knowledge, or stated behavioral intentions.
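The 87 percent figure follows directly from the study counts given above; a quick arithmetic check:

```python
# Quick check of the methodological breakdown reported in the review,
# using the counts cited in the text.
total_studies = 223
experimental_or_simulated = 194  # survey, lab, or simulated-social-media designs

share = round(100 * experimental_or_simulated / total_studies)
print(f"{share} percent")  # 87 percent, matching the figure cited above
```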

The studies in our dataset overwhelmingly involved U.S. subjects, who were usually recruited from universities or Amazon’s Mechanical Turk, a crowdsourcing marketplace. This makes it harder to generalize the research findings to other countries or even to the U.S. population as a whole, because different populations sometimes react differently to the same countermeasures. For example, one study found significantly different efficacy in American versus Australian voters.11

Finally, platform- or venue-specific studies tend to focus on Facebook, Twitter, and traditional journalistic outlets. While these are all important channels in the spread of influence operations and highly salient to policymakers, there has been very little focus on other major platforms such as YouTube or Instagram. Research has also neglected newer, smaller, and/or non-U.S.-based platforms, as well as multiplatform countermeasures (like those associated with the Global Internet Forum to Counter Terrorism).

Looking Ahead

Empirical research on influence operations countermeasures is still nascent. Only 10 percent of the studies (22 out of 223 meeting our selection criteria) predated 2010. Thankfully, research activity is rapidly growing as democracies have become more concerned about influence operations and funders have dedicated more resources to studying the problem. Sixty-two percent of the studies in our dataset have been published since 2019.

Nevertheless, the research gaps identified in this review will not be remedied easily. They stem from multiple structural factors including lack of data access, inadequate funding, misaligned professional incentives, disciplinary silos, and nonstandard terms and methodologies.12 Addressing these gaps will require new models of collaboration that bring together academic, platform, and government capabilities.13 Only then will we be able to develop a firm foundation for evidence-based policy decisions and to systematically track the efficacy of countermeasures over time and in different contexts.


About the Authors

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.

Elonnai Hickok is a nonresident scholar and an independent expert examining how technology and policy can impact and benefit society.

Jacob N. Shapiro is a professor of politics and international affairs at Princeton University. His research covers conflict, economic development, and security policy.

Laura Courchesne is a PhD candidate in international relations at the University of Oxford and a research fellow at the Empirical Studies of Conflict Project at Princeton University.

Julia Ilhardt graduated from Princeton University’s School of Public and International Affairs, where she did research with the Empirical Studies of Conflict Project. She is now a High Meadows Fellow at the Environmental Defense Fund.

Notes

1 Kamya Yadav, “Platform Interventions: How Social Media Counters Influence Operations,” Carnegie Endowment for International Peace, January 25, 2021, https://carnegieendowment.org/2021/01/25/platform-interventions-how-social-media-counters-influence-operations-pub-83698; Jon Bateman, Natalie Thompson, and Victoria Smith, “How Social Media Platforms’ Community Standards Address Influence Operations,” Carnegie Endowment for International Peace, April 1, 2021, https://carnegieendowment.org/2021/04/01/how-social-media-platforms-community-standards-address-influence-operations-pub-84201. 

2 Kamya Yadav, Ulaş Erdoğdu, Samikshya Siwakoti, Jacob N. Shapiro, and Alicia Wanless, “Countries Have More Than 100 Laws on the Books to Combat Misinformation. How Well Do They Work?,” Bulletin of the Atomic Scientists, May 13, 2021, https://thebulletin.org/premium/2021-05/countries-have-more-than-100-laws-on-the-books-to-combat-misinformation-how-well-do-they-work.  

3 Victoria Smith, “Mapping Worldwide Initiatives to Counter Influence Operations,” Carnegie Endowment for International Peace, December 14, 2020, https://carnegieendowment.org/2020/12/14/mapping-worldwide-initiatives-to-counter-influence-operations-pub-83435.

4 Yadav, “Platform Interventions.”

5 Laura Courchesne, Julia Ilhardt, and Jacob Shapiro, “Review of Social Science Research on the Impact of Countermeasures Against Influence Operations,” Harvard Kennedy School Misinformation Review, September 13, 2021, https://doi.org/10.37016/mr-2020-79.

6 Dannagal G. Young, Kathleen Hall Jamieson, Shannon Poulsen, and Abigail Goldring, “Fact-Checking Effectiveness as a Function of Format and Tone: Evaluating FactCheck.org and FlackCheck.org,” Journalism & Mass Communication Quarterly 95, no. 1 (2017): 49–75.

7 Ullrich K. H. Ecker and Luke M. Antonio, “Can You Believe It? An Investigation Into the Impact of Retraction Source Credibility on the Continued Influence Effect,” Memory & Cognition, 49 (2021): 631–644, https://doi.org/10.3758/s13421-020-01129-y.

8 Jack Nassetta and Kimberly Gross, “State Media Warning Labels Can Counteract the Effects of Foreign Misinformation,” Harvard Kennedy School Misinformation Review, October 30, 2020, https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation.

9 The body of research on social media interventions has continued to expand from the time this literature review was undertaken (in the second half of 2020).

10 Yadav, “Platform Interventions.”

11 Briony Swire-Thompson, Ullrich K. H. Ecker, Stephan Lewandowsky, and Adam J. Berinsky, “They Might Be a Liar but They’re My Liar: Source Evaluation and the Prevalence of Misinformation,” Political Psychology 41, no. 1 (2019): 21–34.

12 Victoria Smith and Natalie Thompson, “Survey on Countering Influence Operations Highlights Steep Challenges, Great Opportunities,” Carnegie Endowment for International Peace, December 7, 2020, https://carnegieendowment.org/2020/12/07/survey-on-countering-influence-operations-highlights-steep-challenges-great-opportunities-pub-83370.

13 Jacob N. Shapiro, “Research Collaboration on Influence Operations Between Industry and Academia: A Way Forward,” Carnegie Endowment for International Peace, December 3, 2020, https://carnegieendowment.org/2020/12/03/research-collaboration-on-influence-operations-between-industry-and-academia-way-forward-pub-83332.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
