
Campaigns Must Prepare for Deepfakes: This Is What Their Plan Should Look Like

It is only a matter of time before maliciously manipulated or fabricated content of a major presidential candidate surfaces in 2020. Here is what every campaign needs to do in advance of a deepfake emergency.

by Katherine Charlet and Danielle Citron
Published on September 5, 2019

It is only a matter of time before maliciously manipulated or fabricated content of a major presidential candidate surfaces in 2020. The video manipulation of House Speaker Nancy Pelosi in May demonstrates the speed with which even a “cheap fake” can spread. But the technology is quickly getting more sophisticated, and we must prepare for “deepfakes”—fully synthesized audio or video of someone saying or doing something they did not say or do. Soon (meaning months, not years), it may be impossible to tell real videos from fake ones. The truth will have a tough time emerging in a deepfake-ridden marketplace of ideas.

Doctored media, typically in the form of short videos or audio clips, could be used to embarrass, defame, or otherwise damage candidates for office. Recent advances in artificial intelligence have increased the realism of deepfakes and substantially cut the resources necessary to create them. On August 9, a deepfake of Democratic National Committee Chair Tom Perez was presented to a conference room of hackers, who largely failed to realize that anything was amiss.

The key is in the timing. Imagine that, the night before an election, a deepfake is posted showing a candidate making controversial remarks. The deepfake could tip the election and undermine people’s faith in the electoral process. This is not hypothetical. In the past six months, manipulated media has targeted a senior Malaysian minister, Donald Trump, and others.

It does not matter that digital fakery can, for the moment, be detected fairly easily. People have a visceral reaction to video and audio. They believe what their eyes and ears are telling them—even if all signs suggest that the content is fake. If the video or audio is provocative, it will surely go viral. Studies show that people are ten times more likely to spread fake news than accurate stories because fakery evokes a stronger emotional reaction. So no matter how unbelievable deepfakes are, the damage will still be real.

Even if a deepfake appears weeks before an election, it can spread far and wide. Thus, campaigns have to act immediately to combat the spread and influence of deepfakes.

Here is what every campaign needs to do in advance of a deepfake emergency:

  1. Issue a statement that the candidate will not knowingly disseminate fake or manipulated media of opponents and urge campaign supporters to abide by the same commitment. This is not a post-truth world. Every candidate has more to lose than to gain if this kind of media becomes commonly deployed in political competition. Candidates can show their values and leadership by denouncing manipulated media and pushing other candidates to do the same. They could issue a statement or sign the Election Pledge, which promises that their campaign will avoid “doctored audios/videos or images that impersonate other candidates, including deepfake videos.”
  2. Get familiar with the terms of service and community guidelines of social media platforms on this issue, as well as the processes for reporting inappropriate content. As demonstrated by the Pelosi video, platforms have different user speech policies about fake or inappropriately manipulated content. Facebook did not remove the Pelosi video from its platform. Instead, the company displayed a warning that the video was misleading. Twitter’s response protocol is less clear, though the company appears to be working on the issue internally. It is crucial to identify contacts at the dominant platforms to whom the campaign can rapidly report a problem.
  3. Designate a team ready to manage an incident. This would include, for example, a campaign manager, legal counsel, operations director, communications director, and the social media team. It would also include policy leads, since a fake could touch on policy matters or a nation-state might be involved in its distribution.
  4. Obtain a briefing on key trends and threats from knowledgeable experts. Campaigns need to stay abreast of the latest threats and technology trends, who is involved in creating and distributing fakes, and how and why a candidate might be faked. The Carnegie Endowment for International Peace, WITNESS, First Draft News, the Partnership on AI, and several top university scholars are leading important related efforts.
  5. Conduct an internal red teaming exercise to prepare for the range of ways a fake could target the candidate or campaign. What topics might be used to drive divisions among candidates in a field? Does the candidate have an advantage that a foreign government might want to undermine?
  6. Establish relationships with company officials who will be helpful during an incident. Campaigns should establish points of contact with policy and “trust and safety” offices at major online platforms, as well as with certified third-party fact-checkers, such as the Associated Press, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, and Science Feedback. They should talk to contacts at large media outlets about their procedures for verifying media related to campaigns.
  7. Establish procedures to quickly access original video and/or audio footage. Rapid access to the original media clip will help the campaign analyze the falsehood and counter its spread by providing the truthful version.
  8. Prepare contingency web content or templates that could be swiftly used to counter false claims. The campaign should know where and how it would post a statement about an incident; the original audio/video (as relevant and available); and links to third-party fact-checkers or analyses by digital forensic experts.

And what should a campaign do once a deepfake has been released? Though it’s impossible to predict exactly what steps are necessary, the campaign will need to first assess the situation, next counter the falsehood, and finally repair and prevent future damage.

First, campaigns have to assess the potential damage. How harmful is the digital impersonation and how fast is it spreading? A fake video of a candidate saying she prefers Coke to Pepsi is no big deal, but one where the candidate falsely appears saying or doing something despicable could endanger the candidacy and the democratic process. Digital impersonations undermine people’s ability to make informed choices about candidates for office. Voters would be misled.

Countering the video will require quick action. Social media platforms should remove, block, demonetize, or decrease the visibility of digital impersonations and shut down any bots spreading them. Campaigns should be ready to issue statements and to post the authentic content or other evidence that counters the false narrative.

Repairing and preventing future damage means tackling the political impact of the video, especially if it lingers in key voter groups or demographics. Campaigns should go to those groups to conduct dedicated outreach dispelling the falsehood. They should take stock of—and share—the lessons learned for, sadly, the next attack.

Disruptive digital impersonations are coming, whether from hostile state actors or individuals. Every campaign should start preparing now.

Special thanks to Miles R. McCain for his contributions to this article.

Danielle Citron is vice president of the Cyber Civil Rights Initiative and a professor of law at Boston University School of Law, where she teaches and writes about privacy, free speech, and civil procedure.