Katherine Charlet, Danielle Citron
It is only a matter of time before maliciously manipulated or fabricated content surfaces of a major presidential candidate in 2020. The video manipulation of House Speaker Nancy Pelosi in May demonstrates the speed with which even a “cheap fake” can spread. But the technology is quickly getting more sophisticated, and we must prepare for “deepfakes”—fully synthesized audio or video of someone saying or doing something they did not say or do. Soon (meaning months, not years), it may be impossible to tell real videos from fake ones. The truth will have a tough time emerging in a deepfake-ridden marketplace of ideas.
Doctored media, typically in the form of short videos or audio clips, could be used to embarrass, defame, or otherwise damage candidates for office. Recent advances in artificial intelligence have increased the realism of deepfakes and substantially cut the resources necessary to create them. On August 9, a deepfake of Democratic National Committee Chair Tom Perez was presented to a conference room of hackers, who largely failed to realize that anything was amiss.
The key is in the timing. Imagine that the night before an election, a deepfake surfaces showing a candidate making controversial remarks. It could tip the vote and undermine people’s faith in elections. This is not hypothetical: in the past six months, manipulated media have targeted a senior Malaysian minister, Donald Trump, and others.
It does not matter that digital fakery can, for the moment, be detected fairly easily. People have a visceral reaction to video and audio. They believe what their eyes and ears are telling them, even if all signs suggest that the content is fake. And if the content is provocative, it will surely go viral. Studies show that people are ten times more likely to spread fake news than accurate stories because fakery evokes a stronger emotional reaction. So no matter how unbelievable deepfakes are, the damage will still be real.
Even if a deepfake appears weeks before an election, it can spread far and wide. Thus, campaigns have to act immediately to combat the spread and influence of deepfakes.
Every campaign needs a response plan in place well before a deepfake emergency strikes.
And what should a campaign do once a deepfake has been released? Though it’s impossible to predict exactly what steps are necessary, the campaign will need to first assess the situation, next counter the falsehood, and finally repair and prevent future damage.
First, campaigns have to assess the potential damage. How harmful is the digital impersonation and how fast is it spreading? A fake video of a candidate saying she prefers Coke to Pepsi is no big deal, but one where the candidate falsely appears saying or doing something despicable could endanger the candidacy and the democratic process. Digital impersonations undermine people’s ability to make informed choices about candidates for office. Voters would be misled.
Countering the video will require quick action. Social media platforms should remove, block, demonetize, or decrease the visibility of digital impersonations and shut down any bots spreading them. Campaigns should be ready to issue statements and to post authentic content or other evidence that rebuts the false narrative.
Repairing and preventing future damage means tackling the political impact of the video, especially if it lingers in key voter groups or demographics. Campaigns should go to those groups to conduct dedicated outreach dispelling the falsehood. They should take stock of—and share—the lessons learned for, sadly, the next attack.
Disruptive digital impersonations are coming, whether via hostile state actors or individuals. Every campaign should start preparing now.
Special thanks to Miles R. McCain for his contributions to this article.
Katherine Charlet
Former Director, Technology and International Affairs Program
Katherine Charlet was the inaugural director of Carnegie’s Technology and International Affairs Program.
Danielle Citron
Danielle Citron is vice president of the Cyber Civil Rights Initiative and a professor of law at Boston University School of Law, where she teaches and writes about privacy, free speech, and civil procedure.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.