Social media and messaging platforms are de facto regulators of online speech and therefore key decisionmakers in combating online influence operations. While many nations have laws regulating internet content, platforms have substantial autonomy and responsibility to draw their own lines between acceptable and unacceptable content. In recent years, major platforms have begun maintaining public “community standards”—written policies on a wide range of problematic activity like hate speech, violence, and influence operations.1

Influence operations are just one topic addressed by community standards, and these standards are just one tool that platforms have to counter influence operations.2 Still, community standards contain valuable insights for the counter–influence operations community. These documents include some of the most explicit and detailed attempts by anyone to define various elements of influence operations, spell out exactly what is problematic about them, and articulate consequences. It is difficult to think of any other policies that have as much impact in the fight against influence operations.3

To our knowledge, no one has collated the community standards of all major platforms and systematically analyzed how they address influence operations. There have been select comparisons of how specific platforms address certain high-profile categories, such as terrorist content,4 online harassment,5 and disinformation.6 But influence operations can employ multiple methods and cause complex harms that defy category-by-category analysis.7 To understand how platforms combat influence operations, we should examine community standards in their entirety.

Jon Bateman
Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.

To that end, the Partnership for Countering Influence Operations (PCIO) has compiled all community standards from thirteen social media and messaging platforms (Facebook, Gab, Instagram, LinkedIn, Pinterest, Reddit, Signal, Telegram, TikTok, Tumblr, Twitter, WhatsApp, and YouTube) and organized them into a coded dataset to facilitate analysis.8 These standards are continually changing, so our dataset represents a snapshot in time.9 We aim to help researchers and practitioners in the counter–influence operations community better understand and compare how platforms are approaching this problem.

Our review of the dataset yielded three initial observations:

  • Platforms vary in how they characterize influence operations and related tactics. Platform community standards largely avoid the umbrella terms most familiar to experts and the public, like “disinformation.” Instead, they have defined a number of more specific prohibitions—some familiar (like violent threats) and some novel (like “coordinated inauthentic behavior” or “civic integrity” violations). Despite differences across platforms, there are converging approaches on some issues.
  • Community standards do not focus solely on the content of users’ communications. User behavior—for example, harassing or spammy activity—is an even greater focus. There are also policies governing which real-world actors are allowed to use platforms, how content may be distributed, and what actual effects would lead platforms to intervene. Most platform policies (our term for individual sections within larger community standards documents) address multiple dimensions.
  • Some platforms’ community standards are far longer, more complex, and/or more detailed than others. Twitter’s community standards are almost twice the length of Facebook’s and more than one hundred times as long as Signal’s.

These observations each bear on a basic tension facing platforms: whether to combat influence operations in a generalized or particularized way. Generalized approaches include the use of short, sweeping language to describe prohibited activity, which enables platforms to exercise discretion. Particularized approaches include the use of many distinct and detailed policies for specific types of prohibited activity, which provides greater clarity and predictability. Neither approach is obviously better, and the right approach may depend on circumstance. Our data suggests that platforms balance these two approaches in many different ways. As in other facets of this field, more research is necessary to measure which approaches, if any, are effective in countering influence operations and their harmful effects.

Diversity of Terms and Categories

The terms most familiar to experts and the public for describing harmful manipulative activity—such as influence/information operations and dis/misinformation—are noticeably rare in platform community standards (see Table 1). This may be because these familiar terms are broad and have contested meanings, which makes it difficult for platforms to apply them in real-world cases.10

In place of umbrella terms like influence operations, platform community standards tend to use a number of more specific policies that cover various categories of activity, like spam, harassment, endangering civic integrity, and so on. Effectively, platforms have subdivided the broad concept of influence operations into multiple (sometimes dozens of) smaller component parts (see Figure 1).

High degrees of subdivision often reflect a particularized approach to defining influence operations. This approach can be more clear-cut and easier to administer—akin to a criminal code that splits up “theft” into specific crimes like larceny, robbery, burglary, and embezzlement. However, influence operations can be complex and sometimes evade neat categorization. For example, a sophisticated actor might push the limits of multiple policies while being careful not to clearly violate any.

Perhaps recognizing this, platforms sometimes prefer more generalized terminology in their public communications. Twitter, for example, maintains an archive of “state-backed information operations,”13 even as its own community standards avoid the term. Platforms have also coined neologisms to capture more elusive aspects of influence operations.14 For example, Facebook introduced the concept of “coordinated inauthentic behavior,” which includes a catchall ban on “behaviors designed to enable other violations under our Community Standards.” Finally, some platforms seek to punish “borderline” content that falls just shy of community standard prohibitions.15

Natalie Thompson
Natalie Thompson is a research assistant with the Technology and International Affairs Program.

Beyond terminology, the substance of community standards also varies significantly across platforms—though there is more consensus in some areas than others. Unsurprisingly, platforms seem to converge toward common approaches in areas where society as a whole has achieved greater agreement. For example, nearly every platform bans spam, and most platforms have explicit terrorism-related prohibitions.16 Other areas of relative consensus include contraband, copyright violations, violent threats, and impersonating specific individuals (see Figure 2). These activities are roundly condemned and typically outlawed in the countries where major platforms operate; it is therefore no surprise that platforms broadly ban them.17

Norms and laws about other areas—for example, false information and hate speech—are much more globally diverse and contested. Platform community standards seem to reflect this reality. A few platforms have relatively broad prohibitions against false information, including Pinterest’s ban on “misinformation, disinformation, [and] mal-information,” and TikTok’s policy against “misinformation that causes harm to individuals, our community, or the larger public regardless of intent.” Other platforms, including Facebook, Twitter, and YouTube, ban false information only in special cases, like when violence or electoral disruptions are likely to result. The number, nature, and scope of these special cases vary greatly from platform to platform.

The apparent correlation between societal consensus and platform consensus suggests that societies may need to develop more settled boundaries between legitimate and illegitimate discourse before platforms converge on common ways of handling many influence operations. Governments and civil society organizations both have an active role to play in shaping these discussions. And of course, platforms’ community standards can themselves influence what society deems acceptable. Granted, history suggests that near-total consensus will be difficult to achieve within and between large societies, and that these issues will never be fully and finally settled.18 Widening existing consensus and narrowing the intensity of remaining differences would represent significant progress.

Victoria Smith
Victoria Smith is a nonresident senior research analyst at the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace.

In the meantime, malicious actors can try to exploit differences among platforms’ community standards. If certain kinds of content are banned on one platform, users can move to another platform that still allows that activity.19 For example, after Twitter suspended thousands of accounts in the wake of the January 6, 2021, U.S. Capitol insurrection, many far-right users reportedly switched to encrypted messaging apps.20

Elements of Platforms’ Policies

Community standards are often described as moderating content: controlling the messages and narratives allowed on platforms. In reality, platforms’ standards govern much more than the content itself. They also bar actors with certain offline identities or affiliations from participating; prohibit various online behaviors; limit how content can be distributed; and disallow activity with certain real-world effects. (This framework is known as ABCDE.21) Enforcement actions in any of these areas may result in the removal of content, but content per se is not always the cause or target of the prohibition.

Our data shows that behavior, not content, is the top focus of platforms’ community standards (see Table 2). Behavioral policies include those barring harassment, spamming, and hacking—each of which can be tactics used in influence operations. Content is a close second in terms of frequency. Actors, distribution, and effects are more rarely addressed in community standards.

One reason that behavior and content may receive the lion’s share of attention: they are readily observable. All platforms, except end-to-end encrypted messaging services like Signal and WhatsApp, can access and monitor user content. Behavior is also visible to platforms; even encrypted messaging services can analyze metadata and reports from other users.

In contrast, an actor’s true identity (for example, their real name or membership in a terrorist group) may be hidden or disguised, making it difficult to ban someone on this basis. Real-world effects of online activity are even more difficult for platforms to discern. Experts know very little about how platform content shapes people’s beliefs and actions, making it very difficult for platforms to apply any effects-based policies.

Of the 154 distinct policies in our dataset, 47 (30 percent) addressed only one element of the ABCDE framework (see Figure 3). For example, Reddit has a policy against “breaking the site” via malicious code and the like. This prohibition is purely behavioral; it does not address actors, content, distribution, or effects. Single-element policies are consistent with a particularized approach to combating influence operations: they bar one type of influence activity or one tactic that may be used in a larger influence operation.

About half of platforms’ policies address two ABCDE elements, and the rest implicate three or more. YouTube’s policy on spam, deceptive practices, and scams is an example. It prohibits excessive, repeated posting (behavior); scams and manipulated media (content); abusive live streaming (distribution); and suppressing voter participation (effects). Such policies reflect a more generalized approach, attempting to account for how influence operations manifest in varied and dynamic ways.

Length and Complexity

The community standards of some platforms are far more comprehensive and detailed than others, judging by their overall word count and internal structure. At the high end, Twitter’s community standards are 23,110 words long. This is far longer than any other platform’s community standards, almost equaling the combined word count of the second longest (Facebook, 12,960) and third longest (YouTube, 11,206). In contrast, the remaining ten platforms average just 1,409 words each, and the shortest (Signal) is 180.
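These comparisons can be checked directly from the word counts quoted above (a simple arithmetic sketch; the figures are those stated in the text, as of the February 2021 snapshot):

```python
# Word counts quoted in the text (February 2021 snapshot).
word_counts = {
    "Twitter": 23110,
    "Facebook": 12960,
    "YouTube": 11206,
    "Signal": 180,
}

# Twitter alone nearly equals the second and third longest combined,
# and is over one hundred times the length of the shortest (Signal).
combined = word_counts["Facebook"] + word_counts["YouTube"]
ratio_to_signal = word_counts["Twitter"] / word_counts["Signal"]

print(combined)                # → 24166
print(round(ratio_to_signal))  # → 128
```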

There is also a marked difference in internal complexity across platforms (see Figure 2). Twitter’s community standards have the largest number of distinct policies (29), followed again by Facebook (27). Instagram, Signal, Telegram, and WhatsApp have just two.

There are a number of possible reasons for these differences. For some platforms, shorter and simpler community standards translate to fewer prohibitions and therefore reflect a less assertive posture toward influence operations. Of the five shortest community standards, two belong to platforms that espouse laissez-faire speech values (Gab and Telegram) and three belong to encrypted messaging services with a limited ability to moderate content (Signal, WhatsApp, and Telegram again).23

For other platforms, succinct community standards reflect a generalized approach to influence operations. TikTok and Pinterest have modestly sized community standards, for example, yet they have relatively broad prohibitions on misinformation. Again, we do not know which approach is more effective in combating influence operations, or how this might vary depending on the situation.

Differing institutional structures and resource levels may also be a factor in the varying breadth and depth of platforms’ community standards. Facebook, for example, has a large workforce with specialized teams for unique aspects of influence operations and related problems.24 This could naturally lead to a more particularized set of policies. Smaller platforms like Reddit may have just a handful of people responsible for defining the broad swath of harms covered in their community standards. Such a structure might lend itself toward a more generalized approach.


Conclusion

Our research on platforms’ community standards highlights a tension between generalized and particularized approaches to combating influence operations. Both approaches come with trade-offs.

Generalized policies can resemble what legal scholars call “standards”: loose guides that may require significant judgment to apply, depend heavily on context, or involve weighing multiple factors that could conflict.25 Such approaches give platforms flexibility to enforce the spirit of a policy rather than the rigid letter. This makes it harder for bad actors to exploit narrow loopholes, and enables platforms to react quickly as influence operations take surprising new shapes. Generalized policies can also take less time and effort for platforms to craft and tweak, which is advantageous for smaller or younger platforms. And broader policies may encourage far-flung teams within platforms to collaborate on the big picture problems—harmful and manipulative influence campaigns—instead of parceling out their work into narrow silos of concern.

Particularized policies, in contrast, are closer to legal “rules”—specified sets of necessary and sufficient conditions that lead to well-defined outcomes.26 These policies can be more transparent and predictable to users. Such approaches can bolster platforms’ legitimacy and help defuse perceptions of arbitrary or biased platform decisionmaking. More clarity on platforms’ policies can also help facilitate public debate about how platforms use their power, and therefore support efforts to promote greater accountability. (Facebook, for example, has been able to cite a specific policy to justify 94 percent of its major takedowns.27) Notably, the Facebook Oversight Board’s initial tranche of decisions seems to suggest a preference for Facebook to treat its policies more like rules than standards.28

Future research should more deeply explore the tension between generalized and particularized approaches and gauge their relative effectiveness in combating influence operations under various conditions. The tension could have significant policy implications. For example, if longer and more detailed community standards are better for countering certain types of influence operations, then start-up platforms may need help developing such standards. Perhaps a consortium of companies and experts could develop a model document that anyone could use (or modify) for their own platforms, similar to the American Law Institute’s Model Penal Code and Uniform Commercial Code.29

More research is also needed on how community standards evolve over time. Our research did not focus on change over time, but we incidentally observed some significant changes while updating our database prior to publication. Between September 2020 and February 2021, eight of the thirteen platforms changed some aspect of their community standards. Facebook updated 21 of its 27 policies at least once, and three policies were updated on three separate occasions each. Twitter’s updates during this period actually shortened its community standards by 4,582 words (about 16 percent). Unfortunately, most platforms do not maintain archived versions of past community standards. This makes it hard to answer basic questions like whether, when, and why platforms are moving toward more generalized or more particularized approaches for combating influence operations. Platforms should maintain their own public archives in a format that researchers can easily analyze.




1 Alongside these quasi-legislative documents, platforms increasingly publish quasi-prosecutorial and quasi-judicial material, such as enforcement priorities, evidence of wrongdoing in specific cases, and interpretive precedent.

2 For example, platforms also adjust the design and functionality of their products. See Kamya Yadav, “Platform Interventions: How Social Media Counters Influence Operations,” Carnegie Endowment for International Peace, January 25, 2021. And they can take targeted enforcement actions based on the policies in their community standards. See “Disinfodex,” Berkman Klein Center at Harvard University and the Ethics and Governance of Artificial Intelligence Fund at The Miami Foundation, 2020.

3 Some platforms also have internal guidance documents with much more detail on the interpretation and enforcement of community standards. However, many of these documents are not available to researchers or the public. Max Fisher, “Inside Facebook’s Secret Rulebook for Global Political Speech,” New York Times, December 27, 2018.

4 Anna Meier, “Why do Facebook and Twitter’s Anti-extremist Guidelines Allow Right-Wingers More Freedom Than Islamists?,” Washington Post, August 1, 2019.

5 Jessica A. Pater, Moon K. Kim, Elizabeth D. Mynatt, and Casey Fiesler, “Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms,” Proceedings of the 19th International Conference on Supporting Group Work (November 2016): 369–374.

6 Kaveh Waddell, “On Social Media, Only Some Lies Are Against the Rules,” Consumer Reports, August 13, 2020.

7 We use the term “influence operations” to refer to “organized attempts to achieve a specific effect among a target audience.” See Elise Thomas, Natalie Thompson, and Alicia Wanless, “The Challenges of Countering Influence Operations,” Carnegie Endowment for International Peace, June 10, 2020.

8 This data set is restricted to publicly available community standards, and does not include other types of platform announcements, blog posts, or reports where different terminology might be used. We included only policies directed at users, not advertisers.

9 Our data is current as of February 22, 2021.

10 Alicia Wanless and James Pamment, “How Do You Define a Problem Like Influence?” Journal of Information Warfare 18, no. 3 (2019): 1–14.

11 These are the terms most frequently used by experts, according to a recent PCIO survey. Victoria Smith and Natalie Thompson, “Survey on Countering Influence Operations Highlights Steep Challenges, Great Opportunities,” Carnegie Endowment for International Peace, December 7, 2020.

12 “False news” appears in three different Facebook policies.

13 Twitter Transparency, “Information Operations,” Twitter.

14 When such terms remain unique to one platform, they exacerbate the problem of fractured terminology in the counter–influence operations field.

15 Josh Constine, “Facebook Will Change Algorithm to Demote ‘Borderline Content’ That Almost Violates Policies,” TechCrunch, November 15, 2018.

16 Some prohibitions are implicit. For example, Reddit has no policy against terrorism per se, but bans content that “encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people.”

17 Child sexual abuse material is another example. Although several platforms lack an explicit policy against this material, platforms based in the United States would still likely treat it as forbidden due to criminal laws.

18 Over time, platforms may need to consider adopting more nationally specific policies. Almost all policies in our database apply uniformly across the globe (assuming they are enforced as written). Because the vast majority of platforms we studied are based in the United States, U.S. values, norms, and legal traditions governing speech and harm have an outsized influence on online discourse throughout the world.

19 Of course, diversity in community standards also has benefits. It can stimulate competition among platforms and enable different online communities to find homes that match their own values.

20 Gerrit De Vynck and Ellen Nakashima, “Far-Right Groups Move Online Conversations From Social Media to Chat Apps — and Out of View of Law Enforcement,” Washington Post, January 18, 2021.

21 Camille François, “Actors, Behaviors, Content: A Disinformation ABC,” Transatlantic Working Group, September 20, 2019; Alexandre Alaphilippe, “Adding a ‘D’ to the ABC Disinformation Framework,” Brookings Institution, April 27, 2020; James Pamment, “The EU’s Role in Fighting Disinformation: Taking Back the Initiative,” Carnegie Endowment for International Peace, July 25, 2020.

22 Coding policies according to the ABCDE framework is challenging because elements of the framework sometimes overlap and platform policies can be open to interpretation. We used one primary coder, then had a secondary coder examine a subset of the data to check for internal consistency. Other coders could potentially generate different results from the same data on different interpretations of the codebook and the platform policies.

23 The outlying platform is Instagram: its community standards are among the five shortest, but Instagram is neither an encrypted messaging platform nor an evangelist of laissez-faire speech values. However, Instagram is owned by Facebook and inherits the broader Facebook community standards, meaning it has many more policies than are apparent at first glance. See Facebook Oversight Board Case Decision 2020-004-IG-UA, January 28, 2021.

24 Facebook Careers, “Product & Content Policy at Facebook: How We Build Safe Communities,” Facebook, February 20, 2019.

25 See, for example, Louis Kaplow, “Rules Versus Standards: An Economic Analysis,” Duke Law Journal 42 (1992): 557.

26 Ibid.

27 This data comes from Disinfodex. In contrast, Google and YouTube have not cited specific policies for their major takedowns.

28 Evelyn Douek, “The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical,” Lawfare, January 28, 2021.

29 American Law Institute, “About ALI.”