Synthetic Media and Potential Safeguards

In the lead-up to the 2016 U.S. presidential election, disinformation on social media platforms threatened the integrity of the electoral process. Specifically, Russian troll farms and bot armies created thousands of social media accounts that spread false information and stoked political tensions in an attempt to sow confusion and influence the election outcome. Within the United States, extremist groups latched onto these efforts and spread their own disinformation and propaganda with the goal of swaying election results. In the wake of the election, other countries experienced similar challenges online.

Social media platforms have assured the public that maintaining election integrity is one of their top priorities ahead of the 2020 U.S. presidential election. However, synthetic media and other forms of manipulated media present new and critical challenges for platforms in maintaining this integrity.

To help address these challenges, Carnegie’s Silicon Valley office is convening a series of private, nonattribution dialogues in which platform representatives and leading experts share insights about different facets of the problem and potential solutions. The first convening, in 2018, sketched out the problem space and potential safeguards. Subsequent convenings in 2019 have sought to advance feasible safeguards ahead of the 2020 U.S. election.

The convenings are conducted under the Chatham House Rule. Summaries are available for those meetings whose participants have permitted nonattribution disclosure.
