

Q&A

How Should Countries Tackle Deepfakes?

The technology for creating sophisticated fake videos, known as deepfakes, is growing more advanced, with serious implications for governments and businesses.

By Charlotte Stanton
Published on Jan 28, 2019

What are deepfakes?

Deepfakes are hyperrealistic video or audio recordings, created with artificial intelligence (AI), of someone appearing to do or say things they actually didn’t. The term deepfake is a mash-up of deep learning, which is a type of AI algorithm, and fake.

How do they work?

The algorithm underpinning a deepfake superimposes the movements and words of one person onto another person. Given example videos of two people, an impersonator and a target, the algorithm generates a new synthetic video that shows the targeted person moving and talking in the same way as the impersonator. The more video and audio examples the algorithm can learn from, the more realistic its digital impersonations are.
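The article does not name a specific architecture, but a common design in early face-swap deepfake tools is an autoencoder with one shared encoder and a separate decoder per identity. The NumPy sketch below is purely illustrative: the weights are random and untrained, and the image and latent dimensions are assumptions. It shows only the data flow of the swap, encoding an impersonator's frame and decoding it with the target's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Random weight matrix standing in for a trained layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# A shared encoder compresses any face into a common latent code.
W_enc = linear(64 * 64, 128)
# One decoder per identity reconstructs a face from that code.
W_dec_target = linear(128, 64 * 64)
W_dec_impersonator = linear(128, 64 * 64)

def encode(face):
    """Map a 64x64 grayscale frame to a 128-dim latent code."""
    return np.tanh(face.reshape(-1) @ W_enc)

def decode(latent, W_dec):
    """Reconstruct a 64x64 frame from a latent code with a given decoder."""
    return (latent @ W_dec).reshape(64, 64)

# The swap: encode a frame of the impersonator, but decode it with the
# TARGET's decoder, so the output shows the target's face following the
# impersonator's pose and expression.
impersonator_frame = rng.random((64, 64))
latent = encode(impersonator_frame)
fake_target_frame = decode(latent, W_dec_target)
```

In a trained system, each decoder would be fitted on many frames of its own identity, which is why more example footage yields more convincing impersonations, as noted above.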

How easy are they to make?

Until recently, only special effects experts could make realistic-looking and -sounding fake videos. But today, AI allows nonexperts to make fakes that many people would deem real. And although the deep-learning algorithms they rely on are complex, there are user-friendly platforms that people with little to no technical expertise can use to create deepfakes. The easiest among these platforms allow anyone with access to the internet and pictures of a person’s face to make a deepfake. Tutorials are even available for people who want step-by-step instructions.

How can you spot a deepfake?

Deepfakes can be difficult to detect. They don’t have any obvious or consistent signatures, so media forensic experts must sometimes rely on subtle cues that are hard for deepfakes to mimic. Telltale signs of a deepfake include abnormalities in the subject’s breathing, pulse, or blinking. A normal person, for instance, typically blinks more often when they are talking than when they are not. The subjects in authentic videos follow these patterns, whereas in deepfakes they don’t.
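The blinking cue described above can be operationalized with the eye-aspect-ratio (EAR) heuristic from the media-forensics literature. The sketch below assumes six eye landmarks per frame (as produced by standard facial-landmark detectors); the landmark coordinates and the 0.2 threshold are illustrative assumptions, not the article's stated method.

```python
import numpy as np

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks: the two vertical eye distances
    divided by twice the horizontal distance. The ratio drops sharply
    when the eye closes, so a low-EAR frame marks a blink."""
    p1, p2, p3, p4, p5, p6 = np.asarray(landmarks, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blink_rate(ear_per_frame, fps, threshold=0.2):
    """Blinks per second: count open-to-closed transitions of the eye."""
    closed = np.asarray(ear_per_frame) < threshold
    starts = np.flatnonzero(closed[1:] & ~closed[:-1]).size + int(closed[0])
    return starts * fps / len(closed)

# An open eye (tall) vs. a nearly closed eye (flat), illustrative points.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

A forensic analyst would compare a subject's blink rate over a clip against normal human ranges; an anomalously low or irregular rate is one of the subtle cues mentioned above, though no single cue is conclusive.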

What kinds of damage could deepfakes cause in global markets or international affairs?

Deepfakes could incite political violence, sabotage elections, and unsettle diplomatic relations. In 2018, for instance, a Belgian political party published a deepfake on Facebook that appeared to show U.S. President Donald Trump criticizing Belgium’s stance on climate change. The unsophisticated video was relatively easy to dismiss, but it still provoked hundreds of online comments expressing outrage that the U.S. president would interfere in Belgium’s internal affairs.

Image caption: A woman views a manipulated video that changes what was said by U.S. President Donald Trump and former president Barack Obama.

Deepfakes created by German researchers of Russian President Vladimir Putin, former U.S. president George W. Bush, and President Trump further show the potential for political misuse. In a video released alongside the study, a split screen shows a researcher and a politician. As the researcher speaks and changes his facial expressions, the on-screen politician appears to follow suit.

Deepfakes also could be used to humiliate and blackmail people or attack organizations by presenting false evidence that their leaders have behaved badly, perhaps to the point of resulting in a plunge in stock prices. Because people are wired to believe what they see, deepfakes are an especially insidious form of deception—a problem made worse by how quickly and easily social media platforms can spread unverified information.

A proliferation of deepfakes could even cast doubt on videos that are real by making it easier for someone caught behaving badly in a real video to claim that the video was a deepfake. Two U.S. law professors, Robert Chesney and Danielle Citron, have called this effect the liar’s dividend: as the public becomes more aware of deepfakes, they will become more skeptical of videos in general, and it will become more plausible to dismiss authentic video as fake.

Do deepfakes have any positive applications?

Yes, they do. One of the best examples is by the ALS Association. The association has teamed up with a company called Lyrebird to use voice-cloning technology, the same technology underpinning deepfakes, to help people with ALS (also known as Lou Gehrig’s disease). The project records the voices of people with ALS so they can be digitally recreated in the future—a very useful application of the technology.

What are governments doing to defend against the harm that deepfakes could cause?

So far, the European Union (EU) has taken the most forward-looking steps to defend against all forms of deliberate disinformation, including deepfakes.

In 2018, Brussels published a strategy for tackling disinformation, which includes relevant guidelines for defending against deepfakes. Across all forms of disinformation, the guidelines emphasize the need for public engagement that would make it easier for people to tell where a given piece of information has come from, how it was produced, and whether it is trustworthy. The EU strategy also calls for the creation of an independent European network of fact-checkers to help analyze the sources and processes of content creation.

In the United States, lawmakers from both parties and both chambers of Congress have voiced concerns about deepfakes. Most recently, Representatives Adam Schiff and Stephanie Murphy as well as former representative Carlos Curbelo wrote a letter asking the director of national intelligence to find out how foreign governments, intelligence agencies, and individuals could use deepfakes to harm U.S. interests and how they might be stopped.

China is an interesting case to watch. I have not seen any government statements or actions expressing concern about deepfakes. However, China’s state-run news agency, Xinhua, recently experimented with using digitally generated anchors to deliver the news.

What more could countries do?

One thing countries could do is define inappropriate uses of deepfakes. Because deepfakes are used in different contexts and for different purposes—good and bad—it’s critical for society to decide which uses are acceptable and which are not. Doing so would help social media companies police their platforms for harmful content.

Governments, in particular, could make it easier for social media platforms to share information about deepfakes with each other, news agencies, and nongovernmental watchdogs. A deepfakes information sharing act, akin to the U.S. Cybersecurity Information Sharing Act of 2015, for example, could allow platforms to alert each other to a malicious deepfake before it spreads to other platforms and alert news agencies before the deepfake makes it into the mainstream news cycle.

At a minimum, governments need to fund the development of media forensic techniques for detecting deepfakes. There is currently an arms race between automated techniques that create deepfakes and forensic techniques that can detect them. In the United States, the Defense Advanced Research Projects Agency (DARPA) is investing in forensic detection techniques. It’s critical that such investments continue, if not increase, to keep up with the pace of new deepfake algorithms.

How urgent is the problem?

So far, deepfakes have not been deployed to incite violence or disrupt an election. But the technology needed to do so is available. That means that there is a shrinking window of opportunity for countries to safeguard against the potential threats from deepfakes before they spark a catastrophe.

About the Author

Charlotte Stanton

Former Director, Silicon Valley Office

Charlotte Stanton was the inaugural director of the Silicon Valley office of the Carnegie Endowment for International Peace as well as a fellow in Carnegie’s Technology and International Affairs Program.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
