Background

One of the primary challenges that platforms have with synthetic and manipulated media is the blurry boundary between appropriate and inappropriate content. On June 19, 2019, the Carnegie Endowment for International Peace assembled a group of experts and platform representatives to generate possible definitions of inappropriate synthetic and manipulated media that can inform platform content moderation policies.

Because any definition of inappropriate content would be grounded in legal precedent and ethical practice, two presentations set the stage for discussion by surveying, respectively, the legal and ethical landscapes around synthetic and manipulated media and their implications for the 2020 U.S. election.

Legal Landscape Around Synthetic and Manipulated Media

Courts in the United States have consistently shown an unwillingness to regulate political speech because freedom of expression is one of the most cherished U.S. values. The only area of political speech where the Supreme Court has upheld regulations is political speech associated with voter suppression or speech intended to interfere with the mechanics of elections. In Minnesota Voters Alliance v. Mansky, for instance, Chief Justice John Roberts wrote that the Supreme Court did not “doubt that the State may prohibit messages intended to mislead voters about voting requirements and procedures.” Synthetic and manipulated media that interferes with voting mechanics or provides intentionally misleading voter information may therefore be the easiest political content to regulate.

Outright fraud regarding a candidate’s words or deeds is more complex and will be harder to regulate. Such content cannot, for example, be regulated with the same mechanisms that police financial fraud. But tech companies are not bound by the First Amendment in any direct sense, and thus they can regulate such content on their platforms in ways that a government entity could not. Platforms would therefore be free, as a legal matter, to decide that false or misleading speech will not be tolerated during an election cycle.

Ethical Landscape Around Synthetic and Manipulated Media

Given the challenges associated with using a legal framework to evaluate synthetic and manipulated media, the platforms could use an ethical framework to make decisions about such content. It is important that platforms adopt some ethical framework because, if they allow the amplification of inappropriate content, they will have failed to protect the values their platforms claim to create. Consider, for example, a malicious synthetic video that disrupts a local race far from the United States. Such a video undermines not only the integrity of that local election but also all democratic elections and, by extension, democracy and free speech everywhere. The platform’s decision to host that video affects many individuals beyond those involved in the local race.

Ethical analysis ultimately comes down to weighing the interests and rights of relevant individuals and groups against each other. Examples of relevant interests include: the interest in having accurate information about a political candidate; the interest in creative expression, including satire; the interest in electing candidates who will advance one’s personal projects; and many more. Importantly, not all interests and rights are equally powerful or relevant, which affects how they are reconciled or adjudicated. Viewed in this context, the doctored video of Nancy Pelosi, speaker of the U.S. House of Representatives, should have been taken down immediately, since it harmed many people’s interests while advancing those of very few. Removal was necessary because no ethical analysis could support the creation and distribution of the video: it advanced the narrow interests of an individual at the expense of many other interests.

In the United States, there has traditionally been tremendous faith in the “conflict of ideas” and an assumption that the democratic process and the collision of ideas will weed out the worst ideas, including false information. The empirical reality is different. Modern life requires citizens to rely on heuristics and to offload information gathering to experts when forming opinions. Often, that information gathering is offloaded to the people citizens trust most, whom they likely believe to be similar and like-minded. This places platforms in a unique position: they simultaneously provide accurate information and opportunities for people to manipulate each other with false information.

What Constitutes Inappropriate Synthetic and Manipulated Media in the Context of the 2020 U.S. Election?

The assembled platforms and experts divided into four groups. Each group proposed a definition of inappropriate synthetic/manipulated media that platforms could integrate into their content moderation policies.

  • Clearly false information about the mechanics of an election (for example, “Democrats vote on Monday, Republicans vote on Tuesday”) is inappropriate. The falsification or fabrication of a candidate saying something horrendous is also inappropriate (such as fabricated audio of Candidate X saying something racist). Such content should be removed, and rules should be developed to triage less harmful content. This definition could allow some problematic content to circulate, but the most harmful content would be stopped.
  • Content likely to deceive a reasonable person into believing that a member of a political campaign did or said something that they did not do or say is clearly inappropriate. This definition is purposefully narrow and focused on the content’s effects. What matters is that people are being deceived. The definition deliberately frames the harmful effect as misleading people about something a political operative said or did that they did not in fact say or do; in other words, it goes beyond content that is simply misleading, which could encompass many things (for example, political ads). Misrepresentations that paint someone in a positive light (such as a synthesized video of someone saving a child from a burning building) could still harm an election.
  • Any content that is created synthetically with the intent to manipulate public opinion during an election or cause reputational harm to a candidate for public office will be subject to removal and ongoing assessment of its veracity based on the source and third-party fact-checking. This definition is candidate-centric but not limited to false representations of candidate actions or speech. It identifies the chief problem as synthetic content that is trying to pass itself off as real, in contrast to a parody that clearly is not.
  • In discussing content that might be subject to potential policy enforcement action, the fourth group discussed the differences between image, audio, and video content that has been edited or synthesized, the extent to which clarity or quality adjustments might change how media is perceived, and the differences between misleading and deceptive media. The group cautioned against enforcing against a discrete or finite list of topics (for example, political categories), given the likelihood of unforeseen scenarios in the future. During the discussion, some raised the issue that, in a global context, the word “mislead” could be misapplied in a way that favors those in power, who could deem any unwanted manipulated media “misleading.”

Across the definitions, there was consensus around several points:

  • Any definition of inappropriate synthetic/manipulated media should cover video, audio, and image content.
  • Content modification that is not apparent, and is therefore trying to pass as real, is clearly inappropriate, although there was debate around the right standard for measuring what is and is not apparent. A standard based on what is apparent to an average or reasonable person may be especially difficult to apply in a global context.
  • Three definitions explicitly identified some form of election-related content as inappropriate; however, the definitions varied in terms of the type of election-related content they considered inappropriate. Generally, three types of inappropriate election-related content were identified: candidate-centric (for example, misrepresenting candidate speech or action), voter-centric (such as inducing voters to hold false beliefs), and election-centric (as in undermining election mechanics).
  • Two definitions deemed any misrepresentation of a person’s actions or speech (for example, fabricated content of someone appearing to do or say something they did not do or say) inappropriate.

In sum, the convening provided an opportunity for platform representatives and leading experts, all of whom were actively engaged in the discussion, to try to resolve the blurry boundary between appropriate and inappropriate synthetic and manipulated media. There was considerable debate around whether platform definitions of inappropriate synthetic and manipulated media should focus on the consequential harm to users and/or the malicious intent of the content creators. At issue was whether the fundamental offense or platform violation should be the creator’s intent to disrupt an election, regardless of how effective the attempt is, or the possibility that people might be misled by the content. There was general consensus that action through platforms, not Congress, offers the quickest route to reducing the likelihood that synthetic and manipulated media will disrupt the 2020 U.S. election, and that platforms need to create policies for managing such content. Experts also felt that policies should be enacted soon in order to safeguard the upcoming election, even if those policies will necessarily evolve over the long term. There was also agreement that election-related synthetic or manipulated content that is trying to pass as real is very concerning and clearly unacceptable, particularly in the immediate lead-up to an election.

Ultimately, platforms will independently determine whether the points of agreement and proposed definitions are desirable and feasible in the context of their unique legal, ethical, and commercial environments.