


Using Criminology to Counter Influence Operations: Disrupt, Displace, and Deter

Lessons learned from other disciplines show that sometimes playing the long game is the best approach.

By Martin Innes
Published on Oct 28, 2020

Project

Partnership for Countering Influence Operations

The goal of the Partnership for Countering Influence Operations (PCIO) is to foster evidence-based policymaking to counter threats in the information environment. Key roadblocks identified in our work include the lack of transparency reporting to inform what data is available for research purposes; rules guiding how data can be shared with researchers and for what purposes; and an international mechanism for fostering research collaboration at scale.


Contemporary high-profile crises and political events seem to be magnets for misinformation and disinformation. Messages constructed and communicated as part of influence operations by a range of actors with a variety of motives seek to hack public perceptions and political agendas by exploiting technological, political, and sociopsychological vulnerabilities. Because of this complexity, the development of countermeasures must integrate policy approaches and concepts from different disciplines.

The study of policing and crime management offers insights into how to combat influence operations.1 There are many parallels between these operations and complex criminal problems like terrorism, drug trafficking, and organized crime. Each exploits underlying social vulnerabilities, is perpetrated by organized groups, and requires sustained long-term effort to combat. In many countries, law enforcement agencies—like the U.S. Federal Bureau of Investigation—are already involved in countering influence operations, yet key lessons from criminology have not been fully absorbed or applied across the field, including by social media platforms. Policymakers more frequently view influence operations through intelligence, military, or political prisms, while ignoring valuable concepts from criminology.

Three key concepts in particular can be distilled and usefully applied to counter influence operations: disruption, displacement, and deterrence. Criminologists often apply these concepts separately, but they can also be blended together into a “3-D” policy framework.

Disruption

Disruption—used to impede the continuation of harmful behavior—has become an increasingly important tactical option to counter organized crime and terrorism. It involves increasing bad actors’ costs or level of effort, thereby reducing the frequency, intensity, or scale of their activity. Disruption can be a cost-efficient intervention when the scope of criminal behavior outpaces police resources—a problem obviously relevant to the fight against influence operations. By taking steps to disrupt criminal activity, law enforcement can address a problem without launching a full investigation or gathering the evidence needed to support a prosecution in court.

Tech platforms already engage in a form of disruption when they take down accounts being used in known influence operations. But criminologists would argue that social media accounts should not necessarily be taken down or suspended immediately after an influence operation is discovered. The timing of such actions should instead be calculated to maximize disruption of the adversary, especially in advance of significant democratic events.

In taking this approach, tech platforms and researchers should monitor suspicious accounts of interest to build up an intelligence picture of their internal and cross-platform connections, with subsequent interventions reflecting the resulting insights. This intelligence could also be used to construct a second wave of disruptions to further destabilize the operation if attempts to reestablish it are detected.
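The intelligence picture described above can be thought of as a simple cross-platform graph of accounts and their observed connections. The sketch below is only an illustration of that idea, not any platform's actual tooling; the account and platform names, and the notion of what counts as a "link" between accounts, are all invented for the example.

```python
from collections import defaultdict

class AccountGraph:
    """Minimal sketch of a cross-platform account graph: nodes are
    (platform, account) pairs, and edges record observed interactions
    such as shared links, coordinated posting, or mutual amplification."""

    def __init__(self):
        self.edges = defaultdict(set)

    def link(self, a, b):
        """Record an observed connection between two accounts."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def cluster(self, seed):
        """Return all accounts reachable from a seed account -- a rough
        proxy for an operation's footprint across platforms."""
        seen, stack = set(), [seed]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(self.edges[node])
        return seen

# Invented example data, for illustration only.
g = AccountGraph()
g.link(("twitter", "acct1"), ("twitter", "acct2"))
g.link(("twitter", "acct2"), ("facebook", "page_x"))
assert len(g.cluster(("twitter", "acct1"))) == 3
```

A structure like this is what would let a second wave of disruptions target the wider cluster rather than only the accounts first identified.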

Tech platforms are not always up front about their enforcement practices, so it’s hard to determine how often they take this kind of approach. It is clear, however, that effective disruption strategies are not easy to implement. The strategic decision to temporarily allow bad behavior to continue requires making difficult practical and ethical judgments. Law enforcement has a wealth of expertise that platforms should draw on in making these important decisions.

Displacement

The second “D” in the 3-D framework is displacement, which is used to shift criminal activity into more manageable forms. Displacement works by applying pressure at different points to produce different results. For example, displacements targeting the methods criminals use can cause them to develop new ones, temporal displacements can shift the time a crime occurs, and spatial displacements can shift the location where a crime takes place.

Policies designed to displace may seem ineffective at first glance, but they can actually be quite valuable. For example, in 2018, U.S. Cyber Command (CYBERCOM) reportedly conducted cyber operations against Russia’s Internet Research Agency to thwart influence operations in the lead-up to the U.S. midterm elections. Some analysts promptly criticized this action as “more annoying than deterring in the long run” to Russia.2 Yet displacement theory suggests that a temporary effect during an especially sensitive time is an important victory, even if influence operations later resume.

The two above concepts, disruption and displacement, can be blended to deliver reinforcing effects targeting different components of an influence operation. Thus, some social media accounts identified as so-called lead influencers might be subject to targeted disruptions, while others performing a more enabling role could be the focus for displacement. Responses might be delivered in waves to increase both effects.

Deterrence

The third transferable concept from criminology is deterrence. While disruption and displacement seek to mitigate bad actor behavior and its effects, deterrence focuses on prevention. Governments and tech platforms have both sought to deter influence operations, but policymakers—especially in national security circles—often reason through outdated and inappropriate analogies to nuclear deterrence.3 The study of criminology offers an alternative view of deterrence that might point in other directions.

Criminological theory differentiates between specific and general deterrence. Specific deterrence aims to dissuade perpetrators from repeating their own crimes in the future, whereas general deterrence discourages others from engaging in similar crimes.4 Unfortunately, current approaches by social media companies do not seem to have a sufficient general deterrence effect on digital influence operation capabilities and capacities. For example, influence operations appear to be emanating from more and more countries despite continued account takedowns by platforms like Twitter.

Any strategy to deter influence operations will likely face practical limits, but some impact might be possible through creative approaches. So far, tech platform interventions have focused much more on the originators of disinformation or harmful content than on the accounts that wittingly or unwittingly amplify such content. This gap presents an underexplored opportunity for achieving specific deterrence. For example, tech platforms might begin temporarily suspending accounts that repost known disinformation or harmful content multiple times within a certain period. This would raise the costs of carelessly disseminating harmful material and thereby potentially reduce its spread.
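The threshold rule suggested above can be sketched in a few lines of code. This is a hypothetical illustration of the idea, not any platform's actual policy: the look-back window, strike limit, and account names are all invented for the example.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical policy parameters -- not any platform's actual thresholds.
WINDOW = timedelta(days=7)   # look-back period for repeat offenses
MAX_STRIKES = 3              # reposts of flagged content before suspension

class RepostTracker:
    """Tracks reposts of known-flagged content per account and signals
    when an account exceeds the strike limit within the window."""

    def __init__(self):
        self.strikes = {}  # account id -> deque of repost timestamps

    def record_repost(self, account: str, when: datetime) -> bool:
        """Record a repost of flagged content; return True if the
        account should be temporarily suspended."""
        history = self.strikes.setdefault(account, deque())
        history.append(when)
        # Drop strikes that fall outside the look-back window.
        while history and when - history[0] > WINDOW:
            history.popleft()
        return len(history) >= MAX_STRIKES

tracker = RepostTracker()
now = datetime(2020, 10, 28)
assert not tracker.record_repost("account_a", now)
assert not tracker.record_repost("account_a", now + timedelta(days=1))
assert tracker.record_repost("account_a", now + timedelta(days=2))
```

The design choice here mirrors the deterrence logic in the text: the penalty targets repeated, careless amplification within a short period rather than a single post, raising costs for habitual spreaders without punishing one-off mistakes.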

Applying the Framework

As policymakers apply the 3-D framework, they should look for ways to maximize its effectiveness. For instance, creating a new organization to coordinate implementation of these interventions across platforms could significantly enhance their impact. Decisions about how, when, and why to implement disruptive, displacing, or deterring interventions should not be in the hands of individual platforms. As private companies, tech platforms have perverse incentives to protect their reputations at the expense of the public good and are ill-equipped to face the complex ethical challenges of combating influence operations (especially when they span multiple platforms). The fact that tech platforms control access to basic information about these harmful activities and existing countermeasures only underscores the need for new and more inclusive institutional models.

Thus, the organization should serve as a joint multidisciplinary fusion center that includes representatives from all key tech platforms and government agencies, as well as the operational research community, to enable full-spectrum information and intelligence sharing. This center would become the decisionmaking venue for implementing 3-D interventions and other countermeasures that focus on protecting the public, not corporate reputations. Leadership and coordination of the fusion center would need to be independent, but funding could be based on a contribution model from tech platforms. This type of organization would be justified on the grounds that a more comprehensive cross-platform collaborative approach would deliver enhanced impact, as well as increase public transparency and accountability. Facebook’s recent creation of an independent Oversight Board suggests that platforms see merit in outsourcing controversial decisions to external bodies with greater legitimacy. 

Organizational innovations of this type are increasingly important because influence operations are becoming increasingly sophisticated, and countermeasures therefore pose complex ethical challenges for society. Committing to managing public harm will sometimes require balancing near-term and long-term risks and impacts. Where there is a clear and present danger, for instance, a rapid response is required, but immediate action is not always preferable. Depending on the situation, it may be better to develop an intelligence picture over time, through careful monitoring, to enhance the breadth and intensity of future interventions and their effects.

A joint fusion center with a 3-D perspective would bring much-needed structure and vision to the fight against influence operations. For now, tech platforms still seem reactive, improvising new interventions for each crisis rather than pursuing a comprehensive, long-term approach. Facebook’s wide-ranging takedowns of QAnon pages and accounts in early October 2020 and Twitter’s repeated flagging of misleading messages from President Donald Trump are good examples of precedent-setting policies based on rapidly unfolding events. A more calculated, strategic perspective is needed, and criminology can help.

There have been important recent developments in countering influence operations, including better coordination between platforms and with outside stakeholders. But there is still room for new approaches. The 3-D framework—operationalized through a fusion center bringing together the main players—could help combat the complex threat of influence operations, just as disruption, displacement, and deterrence have done with other complex criminological challenges.

Notes

1 In the United States and many other countries, there is an ongoing societal reckoning over institutional failures by police—including racism, inappropriate use of force, and lack of accountability and transparency. Despite these real-world shortcomings of police departments, the academic discipline of criminology still offers valuable insights that those working to counter influence operations can learn from.

2 Ellen Nakashima, “U.S. Cyber Command Operation Disrupted Internet Access of Russian Troll Factory on Day of 2018 Midterms,” Washington Post, February 27, 2019, https://www.washingtonpost.com/world/national-security/us-cyber-command-operation-disrupted-internet-access-of-russian-troll-factory-on-day-of-2018-midterms/2019/02/26/1827fc9e-36d6-11e9-af5b-b51b7ff322e9_story.html.

3 This issue is further explored by James Pamment and Henrik Agardh-Twetman; see “Can There Be a Deterrence Strategy for Influence Operations?” Journal of Information Warfare 18, no. 3 (2019): 123–235.

4 David M. Kennedy, Deterrence and Crime Prevention: Reconsidering the Prospect of Sanction (London: Routledge, 2008).

About the Author

Martin Innes

Martin Innes is director of the Crime and Security Research Institute, director of the Universities’ Police Science Institute, and a professor in the School of Social Sciences at Cardiff University.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
