

Commentary

Campaigns Must Prepare for Deepfakes: This Is What Their Plan Should Look Like

It is only a matter of time before maliciously manipulated or fabricated content surfaces of a major presidential candidate in 2020. Here is what every campaign needs to do in advance of a deepfake emergency.

By Katherine Charlet and Danielle Citron
Published on Sep 5, 2019

It is only a matter of time before maliciously manipulated or fabricated content surfaces of a major presidential candidate in 2020. The video manipulation of House Speaker Nancy Pelosi in May demonstrates the speed with which even a “cheap fake” can spread. But the technology is quickly getting more sophisticated, and we must prepare for “deepfakes”—fully synthesized audio or video of someone saying or doing something they did not say or do. Soon (meaning months, not years), it may be impossible to tell real videos from fake ones. The truth will have a tough time emerging in a deepfake-ridden marketplace of ideas.

Doctored media, typically in the form of short videos or audio clips, could be used to embarrass, defame, or otherwise damage candidates for office. Recent advances in artificial intelligence have increased the realism of deepfakes and substantially cut the resources necessary to create them. On August 9, a deepfake of Democratic National Committee Chair Tom Perez was presented to a conference room of hackers, who largely failed to realize that anything was amiss.

The key is in the timing. Imagine that the night before an election, a deepfake is posted showing a candidate making controversial remarks. The fake could tip the outcome and undermine people’s faith in elections. This is not hypothetical. In the past six months, manipulated media has targeted a senior Malaysian minister, Donald Trump, and others.

It does not matter that digital fakery can, for the moment, be detected fairly easily. People have a visceral reaction to video and audio. They believe what their eyes and ears are telling them, even when all signs suggest that the content is fake. If the video or audio is provocative, it will likely go viral. Studies show that people are ten times more likely to spread fake news than accurate stories because fakery evokes a stronger emotional reaction. So no matter how unbelievable deepfakes are, the damage will still be real.

Even if a deepfake appears weeks before an election, it can spread far and wide. Thus, campaigns have to act immediately to combat the spread and influence of deepfakes.

Here is what every campaign needs to do in advance of a deepfake emergency:

  1. Issue a statement that the candidate will not knowingly disseminate fake or manipulated media of opponents and urge campaign supporters to abide by the same commitment. This is not a post-truth world. Every candidate has more to lose than to gain if this kind of media becomes commonly deployed in political competition. Candidates can show their values and leadership by denouncing manipulated media and pushing other candidates to do the same. They could issue a statement or sign the Election Pledge, which promises that their campaign will avoid “doctored audios/videos or images that impersonate other candidates, including deepfake videos.”
  2. Get familiar with the terms of service and community guidelines that social media platforms apply to this issue, as well as the processes to report inappropriate content. As the Pelosi video demonstrated, platforms have different user speech policies on fake or inappropriately manipulated content. Facebook did not remove the Pelosi video from its platform; instead, the company displayed a warning that the video was misleading. Twitter’s response protocol is less clear, though the company appears to be working on the issue internally. It is crucial to identify contacts at the dominant platforms to whom the campaign can rapidly report a problem.
  3. Designate a team ready to manage an incident. This would include, for example, a campaign manager, legal counsel, operations director, communications director, and the social media team. It would also include policy leads, since fakes could arise on policy matters or a nation-state might be involved in their distribution.
  4. Obtain a briefing on key trends and threats from knowledgeable experts. Campaigns need to stay abreast of the latest threats and technology trends; who is involved in creating and distributing fakes; and ways and reasons why a candidate might be faked. The Carnegie Endowment for International Peace, WITNESS, First Draft News, the Partnership on AI, and several top university scholars are leading important related efforts.
  5. Conduct an internal red teaming exercise to prepare for the range of ways a fake could target the candidate or campaign. What topics might be used to drive divisions among candidates in a field? Does the candidate have an advantage that a foreign government might want to undermine?
  6. Establish relationships with company officials who will be helpful during an incident. Campaigns should establish points of contact with policy and “trust and safety” offices at major online platforms, as well as with certified third-party fact-checkers such as the Associated Press, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, and Science Feedback. They should also talk to contacts at large media outlets about their procedures for verifying media related to campaigns.
  7. Establish procedures to quickly access original video and/or audio footage. Rapid access to the original media clip will help the campaign analyze the falsehood and counter its spread by providing the truthful version.
  8. Prepare contingency web content or templates that could be swiftly used to counter false claims. The campaign should know where and how it would post a statement about an incident; the original audio/video (as relevant and available); and links to third-party fact-checkers or analyses by digital forensic experts.

And what should a campaign do once a deepfake has been released? Though it’s impossible to predict exactly what steps are necessary, the campaign will need to first assess the situation, next counter the falsehood, and finally repair and prevent future damage.

First, campaigns have to assess the potential damage. How harmful is the digital impersonation and how fast is it spreading? A fake video of a candidate saying she prefers Coke to Pepsi is no big deal, but one where the candidate falsely appears saying or doing something despicable could endanger the candidacy and the democratic process. Digital impersonations undermine people’s ability to make informed choices about candidates for office. Voters would be misled.

Countering the video will require quick action. Social media platforms should remove, block, demonetize, or decrease the visibility of digital impersonations and shut down any bots spreading them. Campaigns should be ready to issue statements, post the true content, and share other evidence to rebut the false narrative.

Repairing and preventing future damage means tackling the political impact of the video, especially if it lingers in key voter groups or demographics. Campaigns should go to those groups to conduct dedicated outreach dispelling the falsehood. They should take stock of—and share—the lessons learned for, sadly, the next attack.

Disruptive digital impersonations are coming, whether via hostile state actors or individuals. Every campaign should start preparing now.

Special thanks to Miles R. McCain for his contributions to this article.

About the Authors

Katherine Charlet

Former Director, Technology and International Affairs Program

Katherine Charlet was the inaugural director of Carnegie’s Technology and International Affairs Program.

Danielle Citron

Danielle Citron is vice president of the Cyber Civil Rights Initiative and a professor of law at Boston University School of Law where she teaches and writes about privacy, free speech, and civil procedure.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

