
The Legal, Ethical, and Efficacy Dimensions of Managing Synthetic and Manipulated Media

Carnegie has commissioned pieces on the legal, ethical, and efficacy dimensions of election-related synthetic and manipulated media.

by Charlotte Stanton, Thomas E. Kadri, David Danks, Jack Parker, and Megan Metzger
Published on November 15, 2019

Ahead of the U.S. 2020 presidential election, the Carnegie Endowment for International Peace convened more than 100 experts from three dozen organizations inside and outside Silicon Valley in private meetings to help address the challenges that synthetic and manipulated media pose for industry, government, and society more broadly. Among other things, the meetings developed a common understanding of the potential for synthetic and manipulated media circulated on technology platforms to disrupt the upcoming presidential election, generated definitions of “inappropriate” election-related synthetic and manipulated media that have informed platform content moderation policies, and equipped platforms with playbooks of effective and ethical responses to synthetic and manipulated media. Carnegie commissioned four short papers on the legal, ethical, and efficacy dimensions of election-related synthetic and manipulated media to brief meeting participants, and now the broader public, on the state of play.

The Legal Implications of Synthetic and Manipulated Media

Thomas E. Kadri

Digital falsifications pose dangers for social media, governments, and society. In particular, the rise of “digitized impersonations” increasingly concerns lawmakers and scholars, who recognize the risks these impersonations pose to both individuals and society.1 To address these risks, the Carnegie Endowment for International Peace convened a series of meetings aimed at reducing opportunities for digital forgeries to subvert the upcoming 2020 U.S. election. This memo informs the series by outlining the potential legal implications of synthetic or manipulated media, with a view to helping platforms define what constitutes proper and improper digital falsification in the context of the election.

Media can take various forms, and rapidly developing technology will surely lead to new types in the future. This memo focuses on just two kinds: synthetic media and manipulated media. For the purposes of this series, “synthetic media”—sometimes called deepfakes—are digital falsifications of images, video, and audio created using an editing process that is automated through AI techniques, whereas “manipulated media” are any other digital falsification of images, video, and audio.2

Not all uses of synthetic or manipulated media are harmful. Indeed, they can serve many laudable purposes. Consider, for example, the enhancements they could bring in the realm of education. In teaching about the assassination of former president John F. Kennedy, these media could allow people to hear the speech he was due to give on the day of his death in his own voice, as one UK-based company has now done.3 Similarly, imagine the powerful artistic uses of these media, such as the digital manipulation in Forrest Gump where doctored video footage portrayed three past presidents saying things they never said.4 Synthetic and manipulated media can also enhance autonomy and equality, particularly for people with disabilities who might use the technology to virtually engage with life experiences that would be impossible in a conventional sense.5 Moreover, these media can spur valuable political speech, as when a Belgian political party created a deepfake depicting U.S. President Donald Trump giving a fictional address where he says: “As you know I had the balls to withdraw from the Paris climate agreement. And so should you.”6 Although Trump never used those words in abandoning the agreement, the political party used this tool to “start a public debate” to “draw attention to the necessity to act on climate change.”7

But some digital falsifications are not so salutary. Indeed, many uses could lead to grave individual and social harms—particularly in the political context. Consider this list of hypothetical scenarios catalogued by Robert Chesney and Danielle Citron:

  • Fake videos could feature public officials taking bribes, displaying racism, or engaging in adultery.
  • Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
  • Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
  • Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
  • A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
  • A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.
  • Falsified video appearing to show a Muslim man at a local mosque celebrating the Islamic State could stoke distrust of, or even violence against, that community.
  • A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or sparking a wave of violence.
  • False audio might convincingly depict U.S. officials privately “admitting” a plan to commit an outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
  • A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York City, provoking panic and worse.8

Falsifications like these could spread with devastating effect during election season. They could erode the public’s sense of trust in the news—or even in the very idea of truth—upon which an informed electorate depends. Worse still, a well-timed release of a convincing digital falsification could sway an election if enough voters believed it and the candidate had no time to debunk it effectively.

What are the potential legal responses to digital falsifications? An outright legal ban on synthetic and manipulated media would violate the First Amendment because “falsity alone” does not remove expression from First Amendment protection, and many digital falsifications would be constitutionally protected speech.9 As a result, the mere specter of the First Amendment curtails many legislative efforts to regulate these media. Nevertheless, the following legal regimes have the potential to address certain problems posed by digital falsifications in ways consistent with the First Amendment.10

Intellectual Property: One potential source of legal liability could be copyright law. Because some digital falsifications rely on copyrighted content, unauthorized use of that content could lead to monetary damages and a notice-and-takedown procedure to remove it. The person who created the content usually owns the copyright, and thus a person may have a legal claim if she is depicted in synthetic or manipulated media that uses material of her own creation. Significant legal hurdles will arise, however, because defendants will argue that the falsification is “fair use” of the copyrighted material, intended for educational, artistic, or other expressive purposes—a defense to liability under copyright law that in part turns on the question whether the falsification is sufficiently “transformed” from the original such that it receives protection.

Right of Publicity: Another option might be the right of publicity—a state-law tort that prohibits unauthorized use of a person’s name, likeness, or other indicia of identity.11 Again, however, many digital falsifications will be immune from liability under this tort because of First Amendment concerns, as well as related statutory and common-law exceptions for material that is “newsworthy” or in the “public interest.”12 Some states also restrict the tort’s scope to “commercial” uses of a person’s identity, such as in advertisements, meaning that many digital falsifications in the election context will not be covered. Despite these constitutional barriers, claims brought against creators of digital falsifications that inflict grave dignitary harms might survive First Amendment scrutiny, though this theory has not been tested in the courts.13

Defamation & False Light: A more fruitful avenue might be civil tort claims for defamation and false light, which target certain types of falsehoods. Public officials and public figures could sue if a convincing digital falsification amounted to a defamatory statement of fact made with actual malice—that is, made with knowledge that the statement was false or with reckless disregard as to its falsity.14 Private individuals, meanwhile, need show only that the creator was negligent as to the falsity of any defamatory statement. Similarly, liability could arise if a digital falsification places a person in a “false light” by creating a harmful and false implication in the public’s eye, though the victim would have to show actual malice if the falsification was related to a “matter of public concern.”15

Intentional Infliction of Emotional Distress (IIED): A final tort that might come into play is IIED, but only if the creator of a piece of synthetic or manipulated media “intentionally or recklessly engaged in extreme and outrageous conduct that caused the plaintiff to suffer severe emotional distress.”16 This is a high bar to meet, made higher by First Amendment concerns: as in defamation, a public figure’s IIED claim resting on allegations of falsity must satisfy the strictures of actual malice, and there is also robust constitutional protection for satire and speech on matters of public concern.17

Criminal Law: Some digital falsifications might implicate various criminal laws. If a digital falsification targeted particular individuals by using any “interactive computer service or electronic communication system” to “intimidate” them in ways “reasonably expected to cause substantial emotional distress,” the creator might have violated federal cyberstalking law.18 Some states also make it a crime to knowingly and credibly impersonate another person online with intent to “harm, intimidate, threaten, or defraud” that person,19 and it is a federal crime to impersonate a federal official in order to defraud others of something of value.20 A few states also have criminal defamation laws, though prosecutions under these laws must at a minimum satisfy the same constitutional standards as the civil defamation claims discussed above.21 Finally, some states have criminalized the intentional use of lies to impact elections, but most of these laws have been struck down as unconstitutional.22

Administrative Law: There may be narrow circumstances in which digital falsifications could be addressed through administrative law. The Federal Trade Commission could regulate synthetic or manipulated media that amount to deceptive or unfair commercial acts and practices, but this remit would likely cover only those media that take the form of advertising related to “food, drugs, devices, services, or cosmetics.”23 Although the Federal Communications Commission might seem like a better fit, that agency currently appears to lack both jurisdiction and interest to regulate content circulated on social media.24 Lastly, the Federal Election Commission is empowered to regulate campaign speech, but it does not regulate the truth of campaign-related speech, nor is it likely to assert or receive this power due to the constitutional, practical, and political concerns that would accompany such efforts.25 There are election-related rules concerning financing—for example, regulations demanding transparency of funding for political advertisements—but social media are not currently subject to jurisdiction in this context.26 This might change if Congress adopts the Honest Ads Act, but efforts appear to have stalled on that front for now.27

Five final points are essential to understanding the legal landscape around digital falsifications. First, difficulties of attribution will often impede attempts to hold creators of harmful falsifications liable; tracking down the people who create them is usually difficult and costly. Second, and relatedly, perpetrators often live outside of the United States and thus may be beyond the reach of the U.S. legal process. Third, it can be expensive and risky to bring civil claims, and victims of harmful falsifications may fear that litigation will trigger even more unwanted attention—sometimes known as the Streisand effect.28

Fourth, legal liability may depend on whether a digital falsification is believable, but each case will present different issues on this front. For example, if a deepfake portrayed a presidential candidate saying something racist, she would likely have to show that people reasonably believed she made the racist statements in order to successfully bring a defamation claim. If the deepfake were unbelievable, courts would more likely view it as satire or parody and thus deem it protected under the First Amendment.29 Although the relevance of believability will depend on the type of legal claim and the facts of each case, it is safe to say that believable falsifications are both more likely to be legally problematic and less likely to receive First Amendment protection.

Last but certainly not least, Section 230 of the Communications Decency Act will largely immunize social media from most of the potential legal liability discussed in this memo. If a third party posts a digital falsification on an online platform, the platform cannot be held liable for hosting it even if the third party could be, unless hosting the content violates federal criminal or intellectual property law. At the very least, this means that platforms are not legally responsible for user-generated falsifications that would otherwise run afoul of laws concerning the right of publicity, defamation, false light, or IIED.

Section 230 is especially important here in two respects. First, platforms cannot be sued for displaying most content republished from other sources or generated by users because the law expressly prohibits courts from treating platforms as “publishers” of that content. Second, platforms can filter and block whatever content they like without fear that they will be liable for leaving up some types of content while taking down others. This combination gives platforms wide discretion to allow or prohibit digital falsifications as they see fit. Ultimately, due to a combination of Section 230 and the First Amendment, platforms will be largely free to regulate digital falsifications however they wish as a matter of private governance of online speech.

Thomas E. Kadri is a resident fellow at Yale Law School and an adjunct professor at New York Law School.

Notes

1 Robert Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (forthcoming 2019), draft available at https://ssrn.com/abstract=3213954. Chesney and Citron’s path-breaking article on the issue of deepfakes is worth reading in full and supports much of this memo.

2 See ibid.; and James Vincent, “Why We Need a Better Definition of ‘Deepfake,’” Verge, May 22, 2018, https://www.theverge.com/2018/5/22/17380306/deepfake-definition-ai-manipulation-fake-news.

3 “John F Kennedy’s Lost ‘Last’ Speech Recreated,” BBC, March 16, 2018, https://www.bbc.com/news/av/uk-scotland-43436361/john-f-kennedy-s-lost-last-speech-recreated.

4 “Forrest Gump JFK ‘I Gotta Pee’ Scene,” YouTube, posted by Rnastershake, February 28, 2010, https://www.youtube.com/watch?v=JSEdBNslGOk; “Forrest Gump - Shot In The Buttocks,” YouTube, posted by TheGameCube64Guy, January 14, 2013, https://www.youtube.com/watch?v=mIWd3T1xjec; and “Jan. 9: Richard Nixon (with Forrest Gump),” YouTube, posted by Birthday Hall of Fame, January 9, 2015, https://www.youtube.com/watch?v=qJpDMPnmrBU.

5 Allie Volpe, “Deepfake Porn Has Terrifying Implications. But What If It Could Be Used for Good?,” Men’s Health, April 13, 2018, https://www.menshealth.com/sex-women/a19755663/deepfakes-porn-reddit-pornhub (giving the example of people suffering from physical disabilities interposing their faces along with those of their consenting partners into pornographic videos); see also Chesney and Citron, “Deep Fakes.”

6 Hans von der Burchard, “Belgian Socialist Party Circulates ‘Deep Fake’ Donald Trump Video,” Politico, May 21, 2018, https://www.politico.eu/article/spa-donald-trump-belgium-paris-climate-agreement-belgian-socialist-party-circulates-deep-fake-trump-video/.

7 Ibid.

8 Chesney and Citron, “Deep Fakes.”

9 See United States v. Alvarez, 567 U.S. 709, 719 (2012) (plurality opinion).

10 See generally Alan Chen and Justin Marceau, “High Value Lies, Ugly Truths, and the First Amendment,” Vanderbilt Law Review 68, no. 6 (2015): 1480.

11 See Thomas E. Kadri, “Fumbling the First Amendment: The Right of Publicity Goes 2–0 Against Freedom of Expression,” Michigan Law Review 112 (2014): 1519, 1521.

12 See generally Thomas E. Kadri, “Drawing Trump Naked: Curbing the Right of Publicity to Protect Public Discourse,” Maryland Law Review 78 (2019): 899, 928–31.

13 See ibid. at 948–58.

14 See New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

15 See Time, Inc. v. Hill, 385 U.S. 374, 390 (1967).

16 See Snyder v. Phelps, 562 U.S. 443, 451 (2011).

17 See Hustler Magazine v. Falwell, 485 U.S. 46 (1988); Snyder, 562 U.S. at 451–54.

18 18 U.S.C. § 2261A(2).

19 See, for example, California Penal Code § 528.5 (2009).

20 18 U.S.C. § 912 (2009) (“Whoever falsely assumes or pretends to be an officer or employee acting under the authority of the United States or any department, agency or officer thereof, and acts as such, . . . shall be fined under this title or imprisoned not more than three years, or both.”).

21 See Eugene Volokh, “Anti-Libel Injunctions and the Criminal Libel Connection,” University of Pennsylvania Law Review 167 (forthcoming 2019) (draft available at https://ssrn.com/abstract=3372064).

22 See, for example, Susan B. Anthony List v. Driehaus, 814 F.3d 466 (6th Cir. 2016) (striking down an Ohio election-lies law as a content-based restriction of “core political speech” that lacked sufficient tailoring).

23 15 U.S.C. § 52.

24 Chesney and Citron, “Deep Fakes.”

25 Ibid.

26 Ibid.

27 See Alexander Bolton, “McConnell Works to Freeze Support for Dem Campaign Finance Effort,” Hill, March 8, 2019, https://thehill.com/homenews/senate/433154-mcconnell-works-to-freeze-support-for-dem-campaign-finance-effort.

28 See Julie E. Cohen, “Law for the Platform Economy,” UC Davis Law Review 51 (2017): 133, 149–50 (“Efforts to remove hurtful material typically backfire by drawing additional attention to it, intensifying and prolonging the unwanted exposure”).

29 See Hustler Magazine, 485 U.S. at 57.

 

The Un/ethical Status of Synthetic Media

David Danks and Jack Parker

Methods to edit, modify, manipulate, and even create media—video, images, audio—have rapidly become more sophisticated and difficult to detect. At the same time, these methods have become more widespread and easy to use, leading to an unsurprising proliferation of synthetic and manipulated media of all sorts. More precisely, we focus on two types of media:1

  • Synthetic media: Digital falsifications of images, video, and audio that are created with one of several methods involving machine or AI techniques
  • Manipulated media: Any other digital alterations or manipulations of images, video, and audio (perhaps with significant human input or guidance), including so-called cheapfakes2

In this memo, we consider the ethical (rather than legal) status of synthetic or manipulated media. In general, synthetic and manipulated media will almost never be unconditionally ethically good or bad. Rather, the ethical status of some media will depend on contextual features such as the relevant individuals, their mental states and intentions, the possible impacts, and more. As a result, we focus here on a framework for understanding and analyzing synthetic/manipulated media in complex contexts so that we can develop answers to: What is the ethical status—good, bad, mixed, complicated—of some such media?

The preceding question is actually ambiguous in an important way. On the one hand, we might be asking about the ethical status of some particular piece of synthetic or manipulated media in some particular context. For example, we might be asking a very local question about a specific video that seemingly shows a political candidate saying something that they do not believe. On the other hand, we might be focused on some such media as an instance of a broader type. For example, we might want to know the ethical status of any synthetic or manipulated media that is developed with the intent of disrupting a legitimate election. Platforms (and others) are unlikely to be in a position to judge every case in its full complexity. We thus focus more on the latter question, since it can lead to the development of ethically grounded policies and principles (albeit, with exceptions or edge cases). We present a collection of such principles in the last section.

Interests and Rights

We adopt an expansive view of ethics as asking how one ought to act (or not) in order to realize one’s values and advance one’s (moral) interests. On this view, ethics is fundamentally a positive activity that seeks to determine what one should do, rather than a negative one that focuses on prohibitions.3 What makes ethics hard is that each of us endorses values that can come into conflict, and those conflicts are difficult to resolve, particularly if our values are not directly comparable. What are the relative weights, for example, of a candidate’s right to protect her reputation and a voter’s free speech rights? These challenges need not lead to paralysis; we can make principled decisions despite them. In fact, the standard metaethical frameworks—utilitarianism, deontology, consequentialism, virtue ethics, and so forth—provide ways to help resolve these conflicts. And in the case of synthetic and manipulated media, we can make significant progress without selecting a particular metaethical framework.

An analysis of the ethical status of synthetic/manipulated media in context should be grounded in the interests and rights of relevant individuals and groups. Examples of relevant interests include: having accurate information about a political candidate; creative expression, including satire; electing candidates who will advance one’s personal projects; and many more. Importantly, not all interests and rights are equally powerful or relevant, which impacts the reconciliation or adjudication between them. The present section focuses on articulating those diverse (potentially relevant) rights and interests for different roles or stakeholders. The subsequent section then outlines a process for integrating or reconciling these diverse interests and rights, along with plausible guiding principles.

Synthetic/Manipulated Media Producers

Producers of synthetic and manipulated media, whether private individuals, companies, political campaigns, advocacy groups, foreign governments, or others, are one relevant group. The diversity of potential producers means that some interests and rights might not be relevant in a particular case or context. The major rights and interests that should be considered from the producer’s perspective are:

  • rights to free expression through the creation of artwork, satire, and other original content;
  • interests in political participation (or manipulation) through creation of material that conveys a political message;
  • interests in entertainment for oneself or others; and
  • justifiable desires to use the media to advance their other interests, though the moral weight of these desires may be relatively weak compared to other interests.

For example, consider a political ad produced by an advocacy group that includes modified video that shows a political opponent engaging in heinous (though not illegal) behavior. The advocacy group is realizing its interests in free expression and political participation, and presumably also advancing other interests (for example, raising the public profile of the group). These are thus the interests that will need to be weighed against the interests of other relevant groups, most notably the political opponent, of course.

Content Platforms

Synthetic and manipulated media in isolation are relatively innocuous; for example, a deepfake of a political candidate is unlikely to raise any challenging ethical issues if it never moves beyond the creator’s computer. Platforms such as Twitter, Facebook, and YouTube provide mechanisms for widespread dissemination of such media, perhaps guided by the platform or platform users. These platforms themselves thus have interests (and perhaps rights) that are relevant to understanding the ethical status of (potentially widespread) synthetic and manipulated media:

  • interest in promoting widespread platform use (partly in service of their profit interest);
  • interest in maintaining objectivity and impartiality (and having this objectivity be recognized by others); and
  • interest in not implementing costly (in time, effort, or complexity) responses to synthetic and manipulated media.

People’s reactions to these media can potentially harm each of the first two interests of the platforms, and so the synthetic or manipulated media would be prima facie ethically problematic. As a concrete example, uploading distorted audio of a political candidate can undermine a platform’s legitimate interest in being perceived as a truthful or accurate source of information. Moreover, synthetic and manipulated media that are difficult to detect are arguably more ethically problematic, in part because they impose further costs on platforms without providing any benefit to them.

Platform Users and Media Consumers

Individuals who consume or view synthetic/manipulated media also have rights and interests, though the diversity of this group inevitably leads to a diversity of interests and rights. Consumers do not necessarily form a coherent or integrated group, and so their interests may be more diffuse. One subset of consumers includes those who are directly affected by the relevant media (such as a political candidate who has a deepfake made about her); we discuss those individuals in the next subsection. Despite these issues, we can identify some rights and interests that plausibly hold for most “not directly affected” consumers:

  • interests in helpful and/or truthful information for decisionmaking purposes;
  • right to political participation in ways that permit the expression of their values; and
  • general interests in not being deceived or manipulated without their knowledge.

Modern life depends on various divisions of labor, including cognitive or informational labor. We all have a vested interest in being able to outsource certain kinds of investigations to trusted sources. An individual voter, for example, might lack the time or background knowledge to fully understand all of a candidate’s position papers, and so might turn to a trusted news source for summaries. This whole system, and so the quality of political participation within this system, depends on people not being deceived about what they are seeing, hearing, or reading.4 Evaluation of the ethical status of synthetic and manipulated media must consider the extent to which these interests are undermined by that type of media.

Directly Impacted Individuals

In many cases, synthetic and manipulated media are produced and directly targeted with the explicit intent of doing harm to some person or group. Many of the most obvious worries about synthetic/manipulated media and the 2020 election (including examples mentioned above) involve specific intent to harm the political prospects or reputation of an individual or political party. Even when the relevant media is not created with that conscious intent, we still must consider key rights and interests of those who are directly impacted or harmed by it:

  • rights associated with privacy (by the target);
  • interests in preserving control of one’s own image and likeness;
  • interests in minimizing psychological harms;
  • right to physical safety, which may be impacted if the synthetic/manipulated media could prompt physical actions or retaliations; and
  • interests in minimizing reputational or professional harms due to falsified claims.

While these interests are quite weighty, it is important to recognize that they do not trump all other considerations. For example, the public’s interest in knowing about a political candidate’s decisionmaking might be stronger than the candidate’s right to privacy. At the same time, these interests will typically play a significant role in ethical evaluations.

Broader Political Community

Another major concern about synthetic and manipulated media is that their mere existence can sow doubts about the veracity of video, images, and audio. These doubts are antithetical to the need, in political contexts, to have verifiable information about the positions and statements of candidates. If any video can be undermined simply by crying “deepfake!” then it is difficult to see how any sort of reason-based debate and discourse could be possible. That is, these media potentially harm interests of the broader political community:

  • interest in political debate occurring in a climate free from distrust;
  • interest in maintaining direct evidence as a key part of our decisionmaking; and
  • interest in ensuring quality evidence is available for political debates.

These interests are almost certainly not threatened by any particular instance of synthetic or manipulated media. Rather, they are harmed when such media are (perceived to be) widespread. Moreover, we cannot address these impacts simply by explicitly labeling synthetic or manipulated media as such. Even explicitly labeled media can have longer-term impacts on people’s cognition and beliefs about, say, a particular candidate. Hence, overall political debate could be undermined even if we were perfectly accurate at detecting and labeling synthetic and manipulated media.

Broader Social Community

The impacts of synthetic and manipulated media are not restricted to the political sphere, but extend more generally to social communities. For example, a deepfake that slanders a political candidate could also impact the broader community (social, physical, and so on) in which the target lives. The social ties that are vital to a community can be undermined as a result of an attack. We thus must include the relevant values for social communities:

  • interest in maintaining a general value of truth (even if somewhat amorphous); and
  • interests in maintaining trust between members of the community, at least in ways that help the community flourish.

In contrast with the interests of the broader political community, community interests can be threatened by a single instance of synthetic or manipulated media. A single well-designed attack can have a disproportionate impact on the relationships that hold a community together, particularly in smaller communities, where political leaders often play a large social role.

Toward Some Answers

In an ideal world, the first step in ethical evaluation would be to identify the interests that are relevant in some particular case. Alongside this identification, one should try to determine the relative weights of those interests. The goal is not to develop a precise mathematical model that can be optimized—rights and interests rarely permit that degree of precision. Rather, this step aims to ensure that no relevant factors are omitted, and that we have a basic understanding of the ways that they interact. The second step is then to reconcile or aggregate all of these interests to determine whether some class of synthetic or manipulated media in context is overall ethically valuable.5

We do not live in an ideal world. Platforms, consumers, and the targets of synthetic or manipulated media attacks are rarely in a position to perform a full, complete analysis. We might successfully complete these steps for isolated instances, but we have every reason to expect an onslaught of both types of media in advance of the 2020 election. It will not be feasible to approach these evaluations on a wholly case-by-case basis. We should instead look for rough principles or guidelines that are ethically grounded, and that can serve as heuristics to evaluate the ethical status of some synthetic or manipulated media. The principles will almost certainly be wrong for unusual cases, but should provide appropriate guidance for the easy cases.

One key observation is that the majority of interests outlined above would almost always be harmed, rather than advanced, by synthetic/manipulated media that intended to influence the 2020 U.S. presidential election. Since many of those interests are prima facie equally weighty, we should (from an ethical perspective) view such media relevant to the 2020 election with significant skepticism.

In particular: synthetic and manipulated media with intent to influence the 2020 election should be treated, by default, as ethically problematic (unless additional arguments are provided that the media advances morally weightier interests).6 For example, our default should be that the satire interest of a producer of some media is not stronger, and probably much weaker, than the public’s interest in having accurate information about a candidate’s views, and so we ought to find such synthetic media to be unethical. This default could potentially be overcome in particular cases, but that would require further argument by the defender of the synthetic or manipulated media.

Of course, some threats to various interests would be minimized if the synthetic or manipulated media were known to be false, so clear labeling of synthetic and manipulated media reduces this burden of proof. For example, a deepfake that falsely shows a candidate endorsing a position might be less ethically problematic if it were clearly and persistently marked as a deepfake. Notice that the heuristic principle here is that labeling reduces, but does not eliminate, this burden. Even labeled synthetic/manipulated media can still significantly impair weighty interests (since people cognitively struggle to remember what exactly is fiction), and so be unethical.

We can advance further by considering whether the manipulation or falsification actually harms people’s interests. For example, changing the color of a candidate’s shirt or digitally removing a scratch would be unlikely to harm people’s interests in truth, accurate information, and so forth. Hence, another heuristic is: If the manipulation or falsification does not affect the truthfulness of the core content of the media, then the burden of proof is significantly lower. Arguably, people expect politicians to exaggerate in political campaigns, so consumers’ generalized interest in not being deceived is relatively weak(er) in this context. Obviously, there are hard questions about how to determine the core message of some media, particularly since manipulations can mean very different things to different groups. We thus have a process proposal: Intent and content of synthetic and manipulated media should be assessed by diverse groups. Depending on the case, these should include members of relevant special populations.
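
Purely as an illustration of how the heuristics above might compose into a platform-facing default (and with no pretense of capturing the full case-by-case analysis described in this memo), the toy sketch below encodes the burden-of-proof defaults; the flags and outcome labels are assumptions made for the example.

```python
# Toy triage sketch of the heuristics above. This is an illustration only;
# the flags and outcome labels are assumptions made for the example and are
# not a substitute for the case-by-case analysis described in the memo.

def default_ethical_status(
    intends_to_influence_election: bool,
    core_content_truth_unaffected: bool,
    clearly_labeled_as_fabricated: bool,
) -> str:
    """Return a default presumption that a fuller, case-specific review can override."""
    if not intends_to_influence_election:
        return "assess case by case"
    if core_content_truth_unaffected:
        return "low burden of proof"        # e.g., purely cosmetic edits
    if clearly_labeled_as_fabricated:
        return "reduced burden of proof"    # labeling helps but does not excuse
    return "presumptively unethical"        # the default skepticism applies

print(default_ethical_status(True, False, False))  # presumptively unethical
```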

These heuristic principles are a start, but some cases of synthetic or manipulated media will require a closer analysis. In these cases, we might need to acquire more information before we can reconcile conflicting interests and values. Most notably, people’s willingness to defend or promote their interests can provide noisy information about the moral weight of those interests, at least from the perspective of that individual. For example, we might not initially know the full strength of a political candidate’s interest in controlling her own image, since some care very deeply while others have resigned themselves to losing control. If the candidate takes concrete (and costly) steps to try to control her image, then we can conclude that the interest is important for this candidate (and so more synthetic or manipulated media might be unethical because it violates that important interest). People’s willingness to act can provide valuable information about which interests to prioritize as we try to reach a final evaluation of some synthetic or manipulated media.

David Danks is the L. L. Thurstone Professor of Philosophy and Psychology, and head of the Department of Philosophy, at Carnegie Mellon University.

Jack Parker is a PhD candidate in the Department of Philosophy at Carnegie Mellon University.

Notes

1 Currently, there is little agreement about terminology in this space. We recognize that other people might use different terms or meanings, but we think these definitions provide reasonable starting points.

2 One open technical question is whether bright lines can be drawn between falsification (for example, replacing a face in a video) and mere manipulation (such as simple Snapchat filters). We do not address this issue here since all of these can be ethical or not (for example, falsification in a fictional movie; mere manipulation in an ethically problematic cheapfake).

3 Of course, these are sometimes two sides of the same coin.

4 Of course, that is not the same as saying that people should only be presented with true things. Fiction can be highly enjoyable when we know it is fiction.

5 As mentioned above, one key function of metaethical frameworks is to provide specific tools and methods to integrate diverse, competing rights and interests. So if we are willing to commit to a particular framework (such as utilitarianism), then we have a specific (though not always easy-to-use) way to determine the ethical status of some synthetic media.

6 We emphasize again that we are focused on the ethical, not legal, status of synthetic/manipulated media. Even if we adopt this heuristic principle, there might not be legal means to restrict the production and promulgation of such media.

 

Effectiveness of Responses to Synthetic and Manipulated Media on Social Media Platforms

Megan Metzger

Synthetic and manipulated media present new challenges for social media platforms, governments, and international security more broadly. The Carnegie Endowment for International Peace convened a series of meetings to develop a playbook of effective and ethical responses to synthetic and manipulated media ahead of the U.S. 2020 presidential election. To inform those meetings, this memo synthesizes the current evidence about the effectiveness of policy responses to inappropriate synthetic and manipulated media on social media platforms. Since research on synthetic and manipulated media is still nascent, this memo draws heavily on research on other types of mis/disinformation, while integrating the limited emerging research specific to synthetic and manipulated media.

For the purposes of this memo, effectiveness may be conceptualized by two metrics. First, how effectively a policy reduces exposure to mis/disinformation, which can be measured by how many times users click on, engage with, or spread a piece of mis/disinformation. Second, how effectively a policy reduces users’ belief in a given piece of mis/disinformation. Strong strategies to combat disinformation will likely use both measures in some combination, and some individual policies may impact both metrics.
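
To make the two metrics concrete, the sketch below shows one way they might be computed from hypothetical engagement logs and survey results; the function names and figures are illustrative assumptions, not drawn from any platform’s actual reporting.

```python
# Illustrative sketch (hypothetical numbers): computing the two effectiveness
# metrics described above for a single piece of mis/disinformation.

def exposure_reduction(impressions_before: int, impressions_after: int) -> float:
    """Fractional drop in views/clicks/shares after a policy is applied."""
    if impressions_before == 0:
        return 0.0
    return 1 - impressions_after / impressions_before

def belief_reduction(believers_before: float, believers_after: float) -> float:
    """Drop in the share of surveyed users who believe the false claim."""
    return believers_before - believers_after

# Example: a policy cuts impressions from 100,000 to 20,000 (an 80 percent drop),
# while surveys show belief falling from 35 percent to 28 percent of exposed users.
print(exposure_reduction(100_000, 20_000))  # 0.8
print(belief_reduction(0.35, 0.28))         # ~0.07
```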

Reducing Exposure

A range of policy responses can help reduce user exposure to mis/disinformation. The most obvious is to remove content and prevent it from being re-uploaded.1 In theory, successfully removing content should completely prevent further exposure on any given platform.2 If not all platforms remove the content, however, users may still be exposed on other platforms. They also may be exposed if mainstream media outlets pick up and distribute the content, as happened in May 2019 with the manipulated video of Speaker of the U.S. House of Representatives Nancy Pelosi.

Another way to reduce exposure is to reduce the visibility of content algorithmically, often called “downranking.” Platforms report that downranking can dramatically reduce exposure. Facebook, for example, downranked the Pelosi video and has claimed that downranking reduces the spread of identified content by approximately 80 percent.3 Downranking does not apply on platforms without an algorithmic curation component, like WhatsApp or other messaging services. Both removal and downranking can also be applied to user accounts or to groups, but there is no publicly available research on the effectiveness of these responses.
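
As a rough illustration of what downranking means algorithmically, flagged content can be demoted by multiplying its ranking score by a penalty factor rather than being removed. The sketch below is a toy scoring model under assumed values, not Facebook’s or any platform’s actual ranking system.

```python
# Toy sketch of downranking in a ranked feed. The scoring model and penalty
# factor are assumptions for illustration, not any platform's real algorithm.

FLAGGED_PENALTY = 0.2  # assumed demotion factor for fact-checker-flagged posts

def ranking_score(base_score: float, flagged_as_false: bool) -> float:
    """Demote flagged content in the ranking instead of removing it outright."""
    return base_score * FLAGGED_PENALTY if flagged_as_false else base_score

posts = [
    {"id": "a", "base_score": 0.9, "flagged_as_false": True},
    {"id": "b", "base_score": 0.6, "flagged_as_false": False},
]
feed = sorted(
    posts,
    key=lambda p: ranking_score(p["base_score"], p["flagged_as_false"]),
    reverse=True,
)
print([p["id"] for p in feed])  # ['b', 'a'] -- the flagged post now ranks lower
```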

A relatively new response, implemented by WhatsApp, is to limit the sharing of content in order to minimize virality. A recent study by Melo et al. found that this response can reduce the velocity with which content spreads by an order of magnitude but does not prevent highly viral content from reaching all members of a network (in this case, public WhatsApp groups).4 Platforms could consider other ways of limiting sharing that target synthetic/manipulated media more directly, which would increase effectiveness against that content.
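
The forwarding limit studied by Melo et al. can be understood as a per-message cap on onward sharing. The sketch below is a schematic illustration under an assumed cap of five forwards; it is not WhatsApp’s actual implementation.

```python
# Schematic illustration of a forwarding cap (the limit is an assumption for
# the example; this is not WhatsApp's actual implementation).

MAX_FORWARDS = 5  # assumed cap on how many chats a message can be forwarded to

class Message:
    def __init__(self) -> None:
        self.forward_count = 0

def try_forward(msg: Message, n_target_chats: int) -> bool:
    """Allow the forward only while the message stays under the cap."""
    if msg.forward_count + n_target_chats > MAX_FORWARDS:
        return False  # block the forward, slowing further spread
    msg.forward_count += n_target_chats
    return True

m = Message()
print(try_forward(m, 4))  # True  -> 4 forwards used
print(try_forward(m, 3))  # False -> would exceed the cap; virality is slowed
```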

Finally, platforms can demonetize accounts or content, making it so that they cannot earn ad dollars on the site. Academic research has not evaluated the impact of this strategy.

Takeaways
  • Removal, when possible, will eliminate exposure on one platform, but users may still be exposed on other platforms if platform response is not uniform, and if content is picked up by the mainstream media.
  • Downranking can substantially reduce exposure, without requiring full removal.
  • There is no publicly available research on other strategies such as demonetization or removal of users. More research would be valuable in this area.
  • Limiting content sharing shows potential for reducing the spread of content, but may work least well with the most viral content.

Reducing Belief

If the content in question remains on platforms, another response to counter inappropriate synthetic and manipulated media is to try to reduce its impact on people’s beliefs. One common strategy for attempting to reduce belief in mis/disinformation is through labeling. Some findings from the literature on correcting misinformation give insights into which labels might be most effective at reducing belief in mis/disinformation on platforms. These include:

  • Labels with specific corrections are more effective than those providing general information.5
  • Providing alternative factual information or a detailed explanation of why information is false makes it more likely that people will reduce their belief in misinformation.6
  • When misinformation is repeated in the context of a label, the correction can backfire by further entrenching people’s belief in the misinformation.7 This backfire effect may be mitigated if the repetition is used to highlight discrepancies between the misinformation and the factual information.8 These findings suggest that labels should accompany someone’s first exposure to misinformation, exclude the misinformation, and/or highlight discrepancies between misinformation and fact.
  • Pre-exposure warnings are typically more effective than post-exposure warnings,9 especially when the warning educates people about the impacts of receiving misinformation.10
  • All else equal, people prefer simple labels to complicated ones.11
  • Labels that are framed to affirm (or at least not challenge) the user’s worldview or identity are more effective.12
  • Labels are more likely to be effective when the source of the information is highly credible and/or trusted.13 In particular, corrections are more effective when they come from someone in the user’s social network,14 from somebody perceived to share their political or cultural beliefs,15 or from a credible, trusted source.16

A key takeaway here is that how platforms label mis/disinformation is consequential. For example, in one study researchers found that “rated false” was a more effective label than “disputed” at reducing belief in misinformation.17 Facebook found, in fact, that a related strategy, providing additional context from credible sources, was more effective than only labeling. They began surfacing related articles instead of providing “disputed” labels.18 Leticia Bode and Emily K. Vraga found this response reduced belief in misinformation.19 The mis/disinformation literature further suggests that providing warning labels before people encounter mis/disinformation may reduce belief (and one can imagine they might also reduce exposure if people then choose not to view the content at all). One implication of this for platforms might be that providing a warning before playing a video or requiring users to opt in to the video before it plays may reduce its impact. Additionally, tailoring the source of labels may increase impact, although scaling this may be prohibitive.
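
One way a platform might operationalize the pre-exposure and opt-in findings is an interstitial gate served before a flagged video plays. The sketch below is hypothetical: the field names and flow are assumptions, while the “rated false” wording follows the labeling research cited above.

```python
# Hypothetical sketch of a pre-exposure gate for flagged videos. The field
# names and flow are assumptions; the "rated false" wording follows the
# labeling research cited above.

def playback_response(is_flagged: bool, user_opted_in: bool) -> dict:
    """Decide what the client should render before a video plays."""
    if not is_flagged:
        return {"action": "play"}
    if not user_opted_in:
        # Pre-exposure warning with a specific, simple label, shown before
        # the content and requiring an explicit click-through.
        return {
            "action": "show_interstitial",
            "label": "Independent fact-checkers rated this video false.",
            "require_click_through": True,
        }
    return {"action": "play", "overlay_label": "Rated false"}

print(playback_response(is_flagged=True, user_opted_in=False))
```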

The lessons above about correcting mis/disinformation may apply in the context of synthetic and manipulated media, but there are reasons to think that videos may be different. A long-standing literature in psychology has found a “picture superiority effect,” where all else equal, people are more likely to remember images than to remember words.20 As a result, mis/disinformation from manipulated videos may be “stickier” in people’s memories than other forms of mis/disinformation, which could have implications for video labeling. Ullrich K. H. Ecker and coauthors, for instance, have shown that when correcting misinformation, the strength of the signal of the correction needs to increase in relation to the strength of the signal of the misinformation.21 They consider this in terms of number of exposures, but this may mean that text-based labels on manipulated videos are less effective because the visual cue is stronger. Hence, future research could explore more visual approaches to correction, such as video warnings before synthetic/manipulated media.

To date, two studies have explored the efficacy of approaches to combating misinformation specifically in synthetic and manipulated media. First, recent work by Cristian Vaccari and Andrew Chadwick finds that many people viewing a deepfake in which former U.S. president Barack Obama says seemingly out-of-character things are confused as to the video’s authenticity.22 Convincingly debunking the video within the video itself reduced uncertainty: more people were confident the information in the video was false. It did not, however, change the number of people who seemed to believe the information in the video (about 16 percent). This suggests that fully correcting misperceptions, as the above literature has found, may be very challenging. Second, two experiments by Google’s Jigsaw tested a range of interventions, finding that the more intrusive a warning about a video, the more people recognized that the video was fabricated. They also found substantial differences depending on the language used in warnings: participants were much more likely to identify videos as “fabricated” than as “fake.” This suggests, in line with the other research cited above, that platforms should be careful about the language they use in warnings and that small details can matter a lot. In their best-performing intervention, which showed a warning page and required users to click before watching a video, about 70 percent of people correctly identified the synthetic video as “fabricated.”23

Taken together, the findings from these studies support the literature on other forms of mis/disinformation. Warnings in advance are the most effective, but with all warnings some people will still believe the synthetic or manipulated video. Even with pre-exposure warnings, about 30 percent of people still thought Jigsaw’s synthetic video was real. That said, there were substantial differences across different interventions, suggesting again that the details matter quite a lot. In the Jigsaw study, when in-video notifications were used instead of an interstitial page, only about 50 percent of people correctly identified the video as being fabricated.

A potential supplement to the approaches already discussed is civic education or media literacy.24 Andrew Guess and coauthors evaluated the impact of a set of tips for spotting fake news published by Facebook on the platform. They found that engaging with these tips improved the ability to detect fake news.25 Platforms might consider ways that they could integrate educational tools directly into their products. Such action would be supported by the psychological research cited above suggesting that preparing people to receive misinformation can lessen its impact.

Takeaways
  • Exactly what labels are shown, when they are shown, and who applies them is extremely consequential in the effectiveness of a label or correction.
  • Synthetic and manipulated media may require stronger, more visual labeling strategies to be effective.
  • Education to help people better recognize misinformation may be an additional tool to help to minimize the impact on belief.

Conclusion

In sum, there is still relatively little research on combating synthetic and manipulated media in particular. Existing literature does, however, provide some guidance on what strategies may be worth exploring in more depth. Finding ways to prevent people from being exposed to mis/disinformation in the first place is still the most effective way to minimize its impact. This can include removing the content as well as other strategies like downranking, using strong labels that precede the content, or requiring users to opt in. Strong labels before exposure also seem to be the most promising at reducing belief. After exposure, it may be worth exploring stronger, more visual correction strategies and finding ways to boost the credibility of the correction source. It could also be worth considering ways to correct mis/disinformation that don’t challenge the underlying worldview of those exposed, although these more targeted strategies may be logistically challenging except for the most significant and high-impact information. Finally, civic education may be an additional tool to support other strategies, although it should not be seen as a standalone solution.

Table 1. Summary of available evidence on the effectiveness of policy responses to synthetic and manipulated media used by social media platforms.
  • Removal of Content — Evidence of efficacy: prevents any further exposure to the content on the platform when successful. Caveats: relies on the ability to effectively remove the content; if the content is covered by the media or available on other platforms, people may still be exposed.
  • Downranking Content — Evidence of efficacy: Facebook reports being able to reduce content visibility by about 80 percent. Caveats: only applies on platforms with algorithmic curation.
  • Demonetization — Evidence of efficacy: no public studies.
  • Limiting Dissemination — Evidence of efficacy: reduced shares overall by 25 percent; can reduce the speed of information spread by an order of magnitude. Caveats: the most viral content is the hardest to impact, so this may need more direct targeting to be effective.
  • Labeling — Evidence of efficacy: pre-exposure labels are typically more effective than post-exposure labels; specific corrections and new factual information are more effective than vague statements; labels are most effective coming from a trusted source; most effective when affirming rather than undermining the user’s worldview; simple language is more effective than complicated descriptions; the more intrusive the label, the more effective. Caveats: no approach fully eliminates the impact of misinformation.
  • Providing Context — Evidence of efficacy: shown to reduce belief in misinformation and outperform some labels.
  • Civic Education — Evidence of efficacy: studies conducted both on platforms and offline show promise in reducing the impact of mis/disinformation. Caveats: should be used to supplement more direct strategies.

Megan Metzger is associate director for research at the Global Digital Policy Incubator, part of Stanford University.

Notes

1 The effectiveness of all approaches explored in this memo depends on the ability to detect the content in the first place. There are a number of challenges related to detection, but these are outside the scope of this memo.

2 The ethical dimensions of the strategies explored here are discussed in a companion memo.

3 Antonia Woodford, “Protecting the EU Elections From Misinformation and Expanding Our Fact-Checking Program to New Languages,” Facebook Newsroom, April 25, 2019, https://newsroom.fb.com/news/2019/04/protecting-eu-elections-from-misinformation; and Kevin Martin, “Letter to Representative Schiff,” 2019.

4 Philipe de Freitas Melo, Carolina Coimbra Vieira, Kiran Garimella, Pedro O. S. Vaz de Melo, and Fabricio Benevenuto, “Can WhatsApp Counter Misinformation by Limiting Message Forwarding?,” September 23, 2019, arXiv:1909.08740v2. Accessed at: https://arxiv.org/pdf/1909.08740.pdf.

5 Katherine Clayton et al., “Real Solutions for Fake News? Measuring the Effectiveness of General Warning and Fact-check Tags in Reducing Belief in False Stories on Social Media,” Political Behavior (2019), https://doi.org/10.1007/s11109-019-09533-0.

6 Ullrich K. H. Ecker, Stephan Lewandowsky, and David T.W. Tang, “Explicit Warnings Reduce but Do Not Eliminate the Continued Influence of Misinformation,” Memory & Cognition 38, no. 8 (2010): 1087–1100; Briony Swire and Ulrich Ecker, “Misinformation and Its Correction: Cognitive Mechanisms and Recommendations for Mass Communication,” in Brian Southwell, Emily Thorson, and Laura Sheble, eds., Misinformation and Mass Audiences (Austin: University of Texas Press, 2018); Hollyn M. Johnson and Colleen M. Seifert, “Sources of the Continued Influence Effect: When Misinformation in Memory Affects Later Inferences,” Journal of Experimental Psychology, Learning, Memory and Cognition 20, no. 6 (1994): 1420–36.

7 Adam Berinsky, “Rumors and Health Care Reform: Experiments in Political Misinformation,” British Journal of Political Science 47, no. 2 (2017): 241–62.

8 Swire and Ecker, “Misinformation and Its Correction.”

9 Stephan Lewandowsky, et al., “Misinformation and its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest 13, no. 3 (2012): 106–31.

10 Ecker, Lewandowsky, and Tang, “Explicit Warnings Reduce but Do Not Eliminate the Continued Influence of Misinformation.”

11 Lewandowsky, et al., “Misinformation and its Correction.”

12 Dan Kahan, “Fixing the Communications Failure,” Nature 463 (2010): 296–98; Lewandowsky, et al., “Misinformation and its Correction”; Swire and Ecker, “Misinformation and Its Correction”; Irina Feygina, John Jost, and Rachel Goldsmith, “System Justification, the Denial of Global Warming, and the Possibility of ‘System-Sanctioned Change,’” Personality and Social Psychology Bulletin 36, no. 3 (2009): 326–38.

13 Jimmeka Guillory and Lisa Geraci, “Correcting Erroneous Inferences in Memory: The Role of Source Credibility,” Journal of Applied Research in Memory and Cognition 2, no. 4 (2013): 201–9.

14 Drew B. Margolin, Aniko Hannak, and Ingmar Weber, “Political Fact Checking on Twitter: When Do Corrections Have an Effect?,” Political Communication 35, no. 2 (2017), https://doi.org/10.1080/10584609.2017.1334018.

15 Kahan, “Fixing the Communications Failure.”

16 Berinsky, “Rumors and Health Care Reform”; Swire and Ecker, “Misinformation and Its Correction.”

17 Clayton, et al., “Real Solutions for Fake News?”

18 Jeff Smith, Grace Jackson, and Seetha Raj, “Designing Against Misinformation,” Facebook Design, hosted on Medium, December 20, 2017, accessed at: https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2.

19 Leticia Bode and Emily K. Vraga, “In Related News, That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media,” Journal of Communication 65 (2015): 619–38.

20 Georg Stenberg, “Conceptual and Perceptual Factors in the Picture Superiority Effect,” European Journal of Cognitive Psychology 18, no. 6 (2006): 813–47; Stephen Madigan, “Picture Memory,” in J. C. Yuille, ed., Imagery, Memory, and Cognition: Essays in Honor of Allan Paivio (Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., 1984), 65–89.

21 Ullrich K. H. Ecker, et al., “Correcting False Information in Memory: Manipulating the Strength of Misinformation Encoding and Its Retraction,” Psychonomic Bulletin and Review 18 (2011): 570–8.

22 Cristian Vaccari and Andrew Chadwick, “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News,” forthcoming in Social Media + Society, January 2020.

23 Jigsaw via personal communication, January 15, 2020.

24 Erin Murrock, et al., “Winning the War on State Sponsored Propaganda: Gains in the Ability to Detect Disinformation a Year and a Half After Completing a Ukrainian News Media Literacy Program,” International Research & Exchanges Board, 2017, accessed at: https://www.irex.org/sites/default/files/node/resource/impact-study-media-literacy-ukraine.pdf; Jon Roozenbeek and Sander van der Linden, “Fake News Game Confers Psychological Resistance Against Online Misinformation,” Palgrave Communications 5 (2019).

25 Andrew Guess, et al., “Can Digital Literacy Save Us From Fake News? Evidence From the 2018 U.S. Midterm Elections,” unpublished working paper, 2019.

 

Ethical Analysis of Responses to Synthetic and Manipulated Media

David Danks and Jack Parker

Methods to edit, modify, and even create media (video, images, and audio) have rapidly become more sophisticated and difficult to detect. At the same time, these methods have become more widespread and easier to use, leading to an unsurprising proliferation of synthetic and manipulated media of all sorts. The current impacts of synthetic and manipulated media are hard to measure, and their future impacts are hard to predict. However, there is already evidence that they can be used for harassment, propaganda, political interference, and other harmful ends.

In light of these threats, a number of responses have been suggested. These responses can be evaluated along several conceptually independent dimensions, including legal permissibility, real-world efficacy, feasibility, and consistency with existing policy. This memo analyzes the ethical implications of platform responses to synthetic and manipulated media. The analysis informs an effort by the Carnegie Endowment for International Peace to develop a playbook of effective and ethical responses ahead of the 2020 presidential election in the United States.

Structural and informational asymmetries make it difficult, and arguably unethical, to expect end-users to play the primary role in effectively responding to synthetic and manipulated media. Thus, the ethical obligation (to reduce harms due to synthetic and manipulated media) falls largely, though not exclusively, on platforms. Henceforth, the memo uses the term “fabricated media” to encompass both synthetic media (digital falsifications of images, video, and audio created with AI algorithms) and manipulated media (any other digital alterations or manipulations of images, video, and audio including so-called cheapfakes).

This memo considers two types of responses, which can potentially be used in conjunction with one another. Reactive responses are applied to fabricated media when or after it is introduced to the platform. For example, standard content moderation after posting is reactive. Preventative responses aim to discourage the development or sharing of problematic fabricated media in the first place. In general, the memo focuses on cases where a response is clearly either ethical or unethical. Many real-world cases will fall into a “messy middle” where the ethical status of a response is unclear; for those cases, the memo indicates key concepts and principles to guide a platform’s thinking.

Reactive Responses

First, the memo considers responses to content1 that has already been produced. Platforms might hope that responses could be guided purely by objective features such as specific visual or auditory patterns (or discontinuities), strings of text, or metadata. These features can be determined and assessed by automated systems (perhaps after training), and so readily scale. Unfortunately, in the vast majority of cases, the ethical status of a response will depend partly on contextual and interpretive features such as the intended effect of the media, or the social group membership of the producer or consumer. For example, the same synthesized video of a political figure might be an act of satire or a tool for political subterfuge, depending on the context. These latter features have proven difficult for machines to identify automatically or at scale, but platforms will need to tackle the challenge of inferring them to ensure ethical responses. (For ambiguous cases, there may also be human involvement in the response, preferably by individuals who are independent from the business decisions of the platform.)
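To make the distinction concrete, the following minimal sketch (in Python) separates objective signals, which automated systems can assess at scale, from contextual features, which are deferred to independent review. Every name, field, and threshold in it is hypothetical; it illustrates the division of labor, not any platform’s actual system.

```python
# Illustrative sketch only: objective signals (detector scores, metadata flags)
# can be checked automatically at scale, while contextual features (satire vs.
# subterfuge, who is targeted) are deferred to independent human review.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MediaItem:
    media_id: str
    detector_score: float                                # hypothetical synthetic-media detector output, 0..1
    metadata_flags: list = field(default_factory=list)   # objective signals, e.g. matched hashes
    context: Optional[dict] = None                       # interpretive features: stated intent, target, framing

def triage(item: MediaItem) -> str:
    """Route an item: act on clear objective matches, escalate contextual ambiguity."""
    objective_match = (
        item.detector_score > 0.95 or "known_fabrication_hash" in item.metadata_flags
    )
    if not objective_match:
        return "no_action"
    # Contextual judgments (e.g., satire vs. political subterfuge) are hard to
    # automate; ambiguous cases go to reviewers independent of business decisions.
    if item.context is None or item.context.get("intent") == "ambiguous":
        return "escalate_to_independent_review"
    return "apply_reactive_response"

print(triage(MediaItem("vid-001", 0.97, ["known_fabrication_hash"], {"intent": "ambiguous"})))
```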

Content Removal

Platforms clearly have the ethical right to remove content under some circumstances. Some obvious cases involve media that violate the platform’s Terms of Service, or that are illegal (for example, media depicting child sexual abuse). Most platforms already have procedures in place to remove such content. If the content is legal and permissible, though, then platforms must consider which actual benefits and harms—physical, psychological, social, ethical, and so forth—will likely result from the fabricated media. For example, an individual’s interests in artistic expression and spreading their viewpoint will typically be promoted by some fabricated media. At the same time, the target of fabricated media will have various interests in privacy, nonharassment, or reputational integrity, each of which might be threatened by the fabricated media. Moreover, these impacts can be context-sensitive: a synthetic video of a supposed crime could have very different impacts depending on whether the faked “criminal” is a member of an oft-maligned minority group.

In theory, one should carefully weigh all stakeholder interests to determine whether removing a given piece of content is ethical. In the real world, such an analysis is rarely possible, but thankfully it is often not needed. Consider the simple example of a synthetic video that “shows” a political candidate behaving reprehensibly, or speaking about their supporters in offensive or derogatory ways. This fabricated media clearly harms important interests of the candidate and their supporters. At the same time, it presumably advances the interests of their opponents, but not in an acceptable way (at least in many political systems), since elections ought to be contested on the basis of accurate information, not outright lies. Hence, almost any weighing of the relevant interests would conclude that fabricated media of candidates behaving reprehensibly fails to serve an ethical purpose and so can ethically be removed. Of course, with enough creativity, one can dream up a context where such a fabrication might acceptably advance legitimate interests, but those circumstances would be strange indeed.

We can generalize from this case to a broader principle.2 Platforms can ethically remove any fabricated media that presents false information about someone that will likely lead to significant harm (reputational or other) to the target without the target’s consent. The restriction to significant harms is important: false but harmless information might be suitable for ethical removal, but a more detailed investigation would be required. Similarly, if the connection to harm is tenuous or improbable, then the platform would be in the “messy middle” where further investigation and analysis are required.
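Expressed as a rough decision rule, the principle might look like the sketch below. The inputs (falsity, expected harm, consent) are exactly the contextual judgments discussed above; the code only illustrates how those judgments would combine, and its labels are hypothetical.

```python
# Hypothetical sketch of the removal principle above. Determining falsity,
# expected harm, and consent requires the contextual analysis discussed in the
# memo; this only encodes how those judgments would combine.
def removal_decision(is_false: bool, expected_harm: str, target_consented: bool) -> str:
    """expected_harm is an illustrative label: 'significant', 'tenuous', or 'none'."""
    if not is_false or target_consented:
        return "not_removable_on_this_basis"
    if expected_harm == "significant":
        return "ethically_removable"
    if expected_harm == "tenuous":
        return "messy_middle_further_analysis_required"
    return "false_but_harmless_needs_more_detailed_investigation"

print(removal_decision(is_false=True, expected_harm="significant", target_consented=False))
```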

At the other end of the spectrum, fabrications made with the consent of the target ought not be removed on the sole basis of their being fabrications. In addition, platforms ought not remove fabrications that make only superficial adjustments that do not affect the core impact or message of the media (for example, removing a blemish on a political candidate’s skin), even without explicit consent, as these will almost always be ethically neutral changes. In such cases, no interests of the target are additionally harmed by the fabrication, so there is no basis for its removal.

The timing or speed of content removal is also ethically relevant, since interests continue to be promoted or harmed for as long as the fabricated media remains available. As the potential harms increase, the ethical need for swift removal increases. Time pressures weigh especially in favor of prompt removal because removal is potentially reversible: content that was taken down can, after a period of time, be restored. In contrast, harms that occur while the media is available cannot be undone. Hence, we contend that if platforms have mechanisms to restore removed fabricated media within a reasonable timeframe, then the level of (likely) harm required to justify removal ought to be lower.
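One way to picture this contention is as a harm threshold that is lowered when a restoration mechanism exists, as in the illustrative sketch below. The specific numbers and the 24-hour cutoff are placeholders, not recommendations.

```python
# Placeholder sketch: a lower harm threshold for provisional removal when the
# platform can restore wrongly removed content within a reasonable timeframe.
def removal_threshold(base_threshold: float, restorable: bool, restore_hours: float) -> float:
    """Return the (hypothetical) harm score above which provisional removal is warranted."""
    if restorable and restore_hours <= 24:   # "reasonable timeframe" is an assumption here
        return base_threshold * 0.5          # removal errors are reversible, so act sooner
    return base_threshold

print(removal_threshold(0.8, restorable=True, restore_hours=6))     # lower bar
print(removal_threshold(0.8, restorable=False, restore_hours=0.0))  # default bar
```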

To this point, the memo has largely discussed the easier cases at the extremes. Removal decisions for the “messy middle” are more difficult, and likely will require consideration of the different stakeholder values and interests. For example, a disclaimer that some piece of media is intended as satire may be sufficient to render removal unethical (because it would now unnecessarily violate the interest in artistic expression), but also may be mere window dressing such that removal is still ethically justifiable. There are principled ways to make such decisions, but they are unlikely to be easily automated, and so will likely be difficult to scale. However, reactive responses that fall short of full removal may be ethically defensible in such cases.

Dissemination Control

Another type of reactive response focuses on ethically permissible restrictions on dissemination. As with removal, platforms have the ethical right to engage in (some) control over the mechanisms and extent of content dissemination. The decision by a platform to host some media does not thereby imply any particular ethical obligation to permit unrestricted dissemination or circulation. In fact, the decision to permit individual users to pass fabricated media to other users is itself an ethical choice. Weaker circulation controls promote widespread access to new information and ideas, and enable people’s voices to (sometimes) be heard more widely. Stronger controls help to slow the spread of harmful fabrications and reduce the chance that people are unwillingly exposed to media that harms them. Neither is privileged in advance.

Dissemination controls do, however, clearly influence people’s behaviors on a platform, and platforms have an ethical obligation to advance their stated values. Although ethical considerations alone do not dictate any particular dissemination strategy, any such strategy should be tailored to the particular values and interests of the platform and its community. Any dissemination controls ought to support the values of the platform (and of the user community, if that is part of the platform’s values). Given that all current platforms have some version of “do not actively harm people” as a value, platforms should subject fabricated media to some dissemination controls whenever it is likely to cause harm. For example, many platforms arguably have an ethical obligation not to permit unlimited distribution of fabricated media that negatively portrays other individuals, even if the resulting harms would not be significant enough to warrant removal.
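As a minimal illustration, one such control, similar in spirit to the message-forwarding limits some messaging platforms have adopted, would cap onward sharing for media flagged as likely-harmful fabrication. The flag and cap values below are hypothetical.

```python
# Hypothetical dissemination control: tighten the forwarding cap for media
# flagged as likely-harmful fabrication; otherwise leave the platform default.
def max_onward_shares(flagged_harmful_fabrication: bool, default_cap: int = 20) -> int:
    return 1 if flagged_harmful_fabrication else default_cap

print(max_onward_shares(True))    # tight cap for flagged fabrications
print(max_onward_shares(False))   # platform default
```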

On many platforms, users can not only share content but also change or modify it (e.g., to eliminate irrelevant header information, or to select just the part of the content that is directly relevant). However, those same capabilities can be misused to make information appear to come from a different source, to carry different content, or more generally to change in meaning. People have a legitimate ethical interest in knowing the source and type of information that they receive. Platforms should thus, on ethical grounds, restrict their dissemination systems so that only irrelevant information can be removed prior to (further) dissemination. Of course, whether something is irrelevant will typically depend on features of the context, and so there is further reason for platforms to tackle the context inference challenge.

Some platforms even permit content alteration before dissemination, but it is very difficult to identify potential use cases in which this action would be ethical. Even alterations that seemingly make no difference to the intended content will threaten the content creator’s interest in having their work provided to the world, and the changes have (by assumption) no compensatory gain. We thus tentatively conjecture that users should not be allowed to edit content “midstream” beyond deletion of irrelevant information. Of course, platforms do not have the further obligation of trying to track offline changes made to content (that is subsequently uploaded).
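The constraint proposed in the last two paragraphs, that only deletion of irrelevant information is permitted midstream, could be enforced with a simple whitelist of share-time operations, as in the sketch below. The operation names are invented for illustration, and what counts as irrelevant remains a contextual judgment.

```python
# Illustrative whitelist: only deletion-type edits of irrelevant information are
# permitted when content is re-shared; alterations are rejected.
# Operation names are hypothetical.
ALLOWED_MIDSTREAM_OPS = {"strip_header_metadata", "trim_to_relevant_excerpt"}

def validate_reshare(requested_ops: set) -> bool:
    """Permit a re-share only if every requested edit is an allowed deletion-type operation."""
    return set(requested_ops).issubset(ALLOWED_MIDSTREAM_OPS)

print(validate_reshare({"strip_header_metadata"}))                   # True
print(validate_reshare({"strip_header_metadata", "alter_content"}))  # False
```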

Differential Promotion

Platforms can also respond through their control over the visibility and prioritization of content, in particular through mechanisms of downranking. The line between dissemination control and differential promotion is fuzzy but useful for the purposes of this memo. Platforms host, and search engines help to manage, a massive amount of content, and so those entities have significant influence over what people see. Obviously, no platform is truly neutral or agnostic; all must incorporate some form of filtering or ordering. For example, many current platforms use techniques such as upvoting or preferred propagation through a social network. The morally appropriate factors for the use of these methods include both fine-grained information about users (for example, their relationships with other users, or their past creation, dissemination, and promotion histories) and features of the fabricated media itself (such as whether it is labeled, by users or by algorithm, as “NSFW”). However, no new moral interests arise for differential promotion; this response involves the same considerations (though perhaps different thresholds) as content removal and dissemination control.
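For concreteness, a promotion score might combine such factors as in the sketch below. The weights are invented, and, as the next paragraph argues, the memo’s view is that platforms should hesitate to apply this kind of adjustment at all.

```python
# Invented weights, for illustration only: a ranking score adjusted by a label
# on the media and by the sharer's history. The memo argues (below) that the
# ethical default is not to manipulate promotion in the first place.
def adjusted_promotion_score(base_score: float, labeled_fabricated: bool,
                             sharer_past_violations: int) -> float:
    score = base_score
    if labeled_fabricated:
        score *= 0.5                                        # hypothetical downranking factor
    score *= max(0.2, 1.0 - 0.1 * sharer_past_violations)   # hypothetical history-based damping
    return score

print(adjusted_promotion_score(1.0, labeled_fabricated=True, sharer_past_violations=2))
```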

In general, given the late stage in the process when promotion becomes relevant, we contend that the ethical default is that platforms ought not manipulate the promotion of particular fabricated media. If the harmed values were not sufficiently weighty to warrant content removal or dissemination control, then it is less likely that they are weighty enough to require the platform to control promotion. We recognize that this recommendation goes against the practices at many platforms, but there is only a narrow band of cases in which (a) some influence is ethically warranted, but (b) that influence ought not be removal or control. Efforts to identify that small subset are likely to cast the net either too broadly or too narrowly.

Labeling Content

The last potential reactive strategy involves visibly flagging or labeling fabricated media. For example, if a political candidate generates synthetic media to provide video of their role in some dramatic event (such as acting heroically in combat), then a platform might want to include a label on the content to indicate that it is synthetic media, not actual footage. In general, labeling poses only a low (though not zero) risk of infringing on the self-expression of artists, educators, and similar parties. That interest is easily outweighed by other interests, such as the interest in being informed or in public safety. As a result, labeling should generally be considered an ethically permissible response for media identified as fabricated. However, this does not settle who, or what, should have the authority to label content.

In particular, as earlier noted, nonobjective features like context will almost always be relevant to the harms that are caused by fabricated media. Automated labeling will not be sufficient if some degree of context-related interpretation is needed. But employing humans to assign labels raises ethical concerns about whose judgments are ethically defensible. The standard response to date from platforms has been to use a third-party source, such as a fact-checking site. The primary ethical consideration is that these decisions should be made on the basis of (potential) harms to relevant individuals, rather than (potential) problems for the platform. Individuals entrusted with labeling fabricated media thus ought not have a political or financial conflict of interest in those actions.

Finally, an important subset of content labeling provides corrections or additional context alongside fabricated media. These options raise questions similar to those above, and as with labeling, corrections or supplemental information should generally be considered a permissible response to media identified as fabricated. However, this task carries the additional obligation of ensuring that the correction itself is accurate. Corrections and counter-information should come from reliable sources that are publicly accessible to users and other affected individuals.
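As a final illustrative sketch, a label-or-correction record consistent with these constraints would carry the labeler’s identity, a conflict-of-interest check, and a publicly accessible source for any correction. The field names and example values below are hypothetical.

```python
# Hypothetical record structure for a label or correction: the labeler must be
# free of political or financial conflicts, and any correction must cite a
# publicly accessible source.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabelDecision:
    media_id: str
    labeler: str                        # e.g., an independent fact-checking organization
    labeler_has_conflict: bool
    label: str                          # e.g., "synthetic media"
    correction_text: Optional[str] = None
    correction_source_url: Optional[str] = None

def is_acceptable(decision: LabelDecision) -> bool:
    if decision.labeler_has_conflict:
        return False                    # conflicted labelers are ruled out
    if decision.correction_text and not decision.correction_source_url:
        return False                    # corrections need a public, citable source
    return True

print(is_acceptable(LabelDecision(
    "vid-002", "independent_factchecker", False, "synthetic media",
    "The candidate never gave this speech.", "https://example.org/factcheck/vid-002")))
```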

Preventative Measures

Digital Literacy / Civic Education

Preventative measures do not engage with any particular piece of content. Instead, they work to improve the resilience of platforms and their communities in a variety of ways, aiming to reduce the harms caused by future content, whatever it might be. For example, digital literacy and similar skills can help users resist disinformation and other kinds of harmful speech.

In general, the ethical obligations of social media platforms are to prevent harms and advance values, using whatever tools they have at their disposal. More specifically, institutions interested in combating problems related to fabricated media have an ethical obligation to ensure access to educational programs. Implementation of this obligation is potentially tricky, however, as access must be appropriately equitable. Provision of educational programs ought not create or exacerbate differences in groups’ vulnerabilities to fabricated media. In addition, there will typically be a trade-off between the depth of an educational intervention and its reach or penetration. Given the prevalence of fabricated media and the difficulty of predicting exactly who might be harmed by it, platforms ought to implement the most lightweight educational practices possible to maximize the number of people who receive the intervention.

While education and training interventions are potentially powerful, they also shift the burden of controlling misinformation onto the users of social media platforms. These users have done nothing wrong, but this kind of preventative measure requires them to bear the burden (in this case, to learn how to resist harmful fabricated media). While this burden-shifting might sometimes be ethically defensible, platforms should bear the primary burden of identifying and preventing the spread of misleading fabricated media.

Norm-Setting

Fabricated media creation and dissemination do not occur in a vacuum. Platforms create and sustain communities with their accompanying standards, norms, and conventions. Hence, the framing of community standards, community identity, and/or submission constraints can have a real impact on content producers.

Platforms have the ability to shape the fabricated media that are introduced by emphasizing particular aspects of their communities. This kind of “soft power” is not always acknowledged, but that does not make it less real. There are clear differences in the cultures of various platforms, and deliberate shaping and use of that culture can be a defensible ethical choice if it supports the broader values of a platform or its communities. Community guidelines, including examples of desirable behavior, can be used to communicate both formal rules and informal norms. Sites should clearly communicate positive expectations of users as community participants, and reinforce behavior consistent with those expectations.

The power to cultivate communities with particular properties does present a new ethical challenge. Platforms bear ethical obligations to create and maintain the dissemination norms of their user communities, since those norms play a meaningful causal role in preventing the spread of harmful fabricated media. Moreover, these ethical obligations might be quite significant, particularly if community norms and user policing turn out to be more effective responses than some of the more reactive ones.

David Danks is the L. L. Thurstone Professor of Philosophy and Psychology, and head of the Department of Philosophy, at Carnegie Mellon University.

Jack Parker is a PhD candidate in the Department of Philosophy at Carnegie Mellon University.

Notes

1 For simplicity, we do not distinguish between content directly on a platform, and links to content that is hosted outside the platform. In the latter case, removal (and other responses) would be directed toward the link, not the content.

2 Our rules all concern the presumptive ethicality of removal (or other response), as extreme contexts and/or intentions may exist that flip the ethicality of the response.