Q&A

How Should Countries Tackle Deepfakes?

The technology to create sophisticated fake videos—deepfakes—is getting more advanced, with serious implications for governments and businesses.

Published on January 28, 2019

What are deepfakes?

Deepfakes are hyperrealistic video or audio recordings, created with artificial intelligence (AI), of someone appearing to do or say things they actually didn’t. The term deepfake is a mash-up of deep learning, which is a type of AI algorithm, and fake.

How do they work?

The algorithm underpinning a deepfake superimposes the movements and words of one person onto another person. Given example videos of two people, an impersonator and a target, the algorithm generates a new synthetic video that shows the targeted person moving and talking in the same way as the impersonator. The more video and audio examples the algorithm can learn from, the more realistic its digital impersonations are.
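
To make the mechanics concrete, here is a minimal sketch of the shared-encoder, per-identity-decoder design behind classic face-swap deepfakes, written in PyTorch. The network sizes, training data, and variable names are illustrative assumptions, not the architecture of any particular tool: two decoders learn to reconstruct two different faces from one shared representation of pose and expression, and the swap happens when a face encoded from the impersonator is decoded as the target.

```python
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (illustrative size)

def make_encoder() -> nn.Module:
    # Shared encoder: learns a compact code for pose and expression.
    return nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))

def make_decoder() -> nn.Module:
    # Per-identity decoder: renders one specific person's face from the code.
    return nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                         nn.Linear(512, IMG), nn.Sigmoid())

encoder = make_encoder()
decoder_impersonator = make_decoder()  # person A
decoder_target = make_decoder()        # person B

params = (list(encoder.parameters())
          + list(decoder_impersonator.parameters())
          + list(decoder_target.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in training data: random tensors in place of real face crops.
faces_a = torch.rand(32, IMG)  # frames of the impersonator
faces_b = torch.rand(32, IMG)  # frames of the target

for step in range(100):
    optimizer.zero_grad()
    # Each decoder learns to reconstruct its own person from the SHARED code,
    # which pushes the code to capture pose and expression rather than identity.
    loss = (loss_fn(decoder_impersonator(encoder(faces_a)), faces_a)
            + loss_fn(decoder_target(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode the impersonator's expression, render it as the target.
with torch.no_grad():
    fake_frame = decoder_target(encoder(faces_a[:1]))  # target's face, A's motion
```

Real systems use convolutional networks, face alignment, and far more data, but the swap step works the same way: the shared encoder captures how a face is moving, and the target's decoder determines whose face is shown.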

How easy are they to make?

Until recently, only special effects experts could make realistic-looking and -sounding fake videos. But today, AI allows nonexperts to make fakes that many people would deem real. And although the deep-learning algorithms they rely on are complex, there are user-friendly platforms that people with little to no technical expertise can use to create deepfakes. The easiest among these platforms allow anyone with access to the internet and pictures of a person’s face to make a deepfake. Tutorials are even available for people who want step-by-step instructions.

How can you spot a deepfake?

Deepfakes can be difficult to detect. They don’t have any obvious or consistent signatures, so media forensic experts must sometimes rely on subtle cues that are hard for deepfakes to mimic. Telltale signs include abnormalities in the subject’s breathing, pulse, or blinking. A person, for instance, typically blinks more often when talking than when silent. Subjects in authentic videos follow these physiological patterns, whereas subjects in deepfakes often don’t.
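
One of these cues can be checked with surprisingly little code. The sketch below uses the widely cited eye-aspect-ratio (EAR) blink heuristic; it assumes per-eye landmark coordinates coming from some face-landmark detector, and the 0.2 threshold and example numbers are illustrative choices, not validated forensic values.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmarks around one eye, ordered as in the common
    68-point scheme. The ratio drops sharply when the eye closes."""
    vertical = (np.linalg.norm(eye[1] - eye[5])
                + np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count transitions from open to closed; the threshold is illustrative."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks

# A plausible open eye yields an EAR of roughly 0.33.
open_eye = np.array([[0, 1], [2, 2], [4, 2], [6, 1], [4, 0], [2, 0]], float)
print(round(eye_aspect_ratio(open_eye), 2))

# A 10-second clip at 30 fps in which the subject never blinks is suspicious;
# most people blink several times in that span, especially while talking.
ears = [0.3] * 300
if count_blinks(ears) == 0:
    print("No blinks detected; flag the clip for closer forensic review.")
```

A clip whose subject never blinks is not proof of fakery, only a signal to look closer; robust forensic tools combine many such cues.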

What kinds of damage could deepfakes cause in global markets or international affairs?

Deepfakes could incite political violence, sabotage elections, and unsettle diplomatic relations. In 2018, for instance, a Belgian political party published a deepfake on Facebook that appeared to show U.S. President Donald Trump criticizing Belgium’s stance on climate change. The unsophisticated video was relatively easy to dismiss, but it still provoked hundreds of online comments expressing outrage that the U.S. president would interfere in Belgium’s internal affairs.


Deepfakes of Russian President Vladimir Putin, former U.S. president George W. Bush, and President Trump, created by German researchers, further show the potential for political misuse. In a video released alongside the researchers’ study, a split screen shows a researcher and a politician: as the researcher speaks and changes his facial expressions, the on-screen politician appears to follow suit.

Deepfakes could also be used to humiliate and blackmail people, or to attack organizations by presenting false evidence that their leaders have behaved badly, perhaps even triggering a plunge in stock prices. Because people are wired to believe what they see, deepfakes are an especially insidious form of deception—a problem made worse by how quickly and easily social media platforms can spread unverified information.

A proliferation of deepfakes could even cast doubt on videos that are real by making it easier for someone caught behaving badly in a real video to claim that the video was a deepfake. Two U.S. law professors, Robert Chesney and Danielle Citron, have called this effect the liar’s dividend: as the public becomes more aware of deepfakes, they will become more skeptical of videos in general, and it will become more plausible to dismiss authentic video as fake.

Do deepfakes have any positive applications?

Yes, they do. One of the best examples comes from the ALS Association, which has teamed up with a company called Lyrebird to use voice-cloning technology, the same technology that underpins deepfakes, to help people with ALS (also known as Lou Gehrig’s disease). The project records the voices of people with ALS so that they can be digitally recreated in the future, a genuinely beneficial application of the technology.

What are governments doing to defend against the harm that deepfakes could cause?

So far, the European Union (EU) has taken the most forward-looking steps to defend against all forms of deliberate disinformation, including deepfakes.

Earlier this year, Brussels published a strategy for tackling disinformation, which includes relevant guidelines for defending against deepfakes. Across all forms of disinformation, the guidelines emphasize the need for public engagement that would make it easier for people to tell where a given piece of information has come from, how it was produced, and whether it is trustworthy. The EU strategy also calls for the creation of an independent European network of fact-checkers to help analyze the sources and processes of content creation.

In the United States, lawmakers from both parties and both chambers of Congress have voiced concerns about deepfakes. Most recently, Representatives Adam Schiff and Stephanie Murphy as well as former representative Carlos Curbelo wrote a letter asking the director of national intelligence to find out how foreign governments, intelligence agencies, and individuals could use deepfakes to harm U.S. interests and how they might be stopped.

China is an interesting case to watch. I have not seen any government statements or actions expressing concern about deepfakes. However, China’s state-run news agency, Xinhua, recently experimented with using digitally generated anchors to deliver the news.

What more could countries do?

One thing countries could do is define inappropriate uses of deepfakes. Because deepfakes are used in different contexts and for different purposes—good and bad—it’s critical for society to decide which uses are acceptable and which are not. Doing so would help social media companies police their platforms for harmful content.

Governments, in particular, could make it easier for social media platforms to share information about deepfakes with one another, with news agencies, and with nongovernmental watchdogs. For example, a deepfakes information sharing act, akin to the U.S. Cybersecurity Information Sharing Act of 2015, could allow platforms to alert each other to a malicious deepfake before it spreads to other platforms and to alert news agencies before the deepfake makes it into the mainstream news cycle.
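
To illustrate what such sharing could look like in practice, here is a purely hypothetical sketch, loosely modeled on existing industry hash-sharing programs for other kinds of harmful content. Every name and interface in it is invented for illustration and does not describe any existing system or the act proposed above.

```python
import hashlib

shared_registry: set[str] = set()  # stands in for a cross-platform service

def fingerprint(video_bytes: bytes) -> str:
    # Exact-match fingerprint; a real system would also need perceptual
    # hashes that survive re-encoding, cropping, and re-uploading.
    return hashlib.sha256(video_bytes).hexdigest()

def flag_as_deepfake(video_bytes: bytes) -> None:
    # Called by the platform that first identifies a malicious deepfake.
    shared_registry.add(fingerprint(video_bytes))

def check_upload(video_bytes: bytes) -> bool:
    # Called by other platforms (or news agencies) before distribution.
    return fingerprint(video_bytes) in shared_registry

clip = b"...stand-in for video bytes..."
flag_as_deepfake(clip)       # platform A flags the clip
print(check_upload(clip))    # platform B sees the alert before it spreads
```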

At a minimum, governments need to fund the development of media forensic techniques for detecting deepfakes. There is currently an arms race between automated techniques that create deepfakes and forensic techniques that can detect them. In the United States, the Defense Advanced Research Projects Agency (DARPA) is investing in forensic detection techniques. It’s critical that such investments continue, if not increase, to keep up with the pace of new deepfake algorithms.

How urgent is the problem?

So far, deepfakes have not been deployed to incite violence or disrupt an election, but the technology needed to do so is available. Countries therefore have a shrinking window of opportunity to safeguard against the potential threats from deepfakes before they spark a catastrophe.