PCIO Baseline

Platform Interventions: How Social Media Counters Influence Operations

There has been a surge in announced interventions to counter influence operations over the last two years. But determining their effectiveness is tricky.

Published on January 25, 2021

Increased public concern about online influence operations has led social media platforms to experiment in recent years. Most notably, platforms have banned malicious content or behaviors and taken down networks tied to influence operations as part of enforcing platform policies. These moves sometimes trigger accusations of censorship or overreach. Platforms are also criticized for unduly focusing on specific threats while neglecting the product features that allow influence operations to succeed in the first place.

Responding to these criticisms, platforms have increasingly supplemented bans and takedowns with adjustments to the design and functionality of their products to help combat influence operations. These product changes (which we call platform interventions) address specific aspects of the user’s experience; they generally don’t reimagine how the platforms are structured or re-engineer key algorithms. For example, platforms have introduced new kinds of content labels and made it harder for users to share certain content widely. Interventions are growing in frequency and importance but have not yet been comprehensively analyzed. There has been no systematic comparison of interventions across platforms, or quantitative analysis of how interventions are changing over time. In fact, the rapid pace of new intervention announcements has made it difficult to keep track of them, and they have not been catalogued in one place.

To remedy this gap, the Partnership for Countering Influence Operations (PCIO) assembled a database of interventions announced by fourteen platforms since 2015.1 Announcements came from platforms’ official newsrooms and blogs, through their communications teams on their own social media profiles, or through popular news media outlets. Altogether, ninety-two counter–influence operations interventions were identified.
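To make the screening step concrete (see note 1 for the methodology), the sketch below shows how such a keyword filter might work in practice. It is a hypothetical reconstruction, not PCIO's actual tooling; the announcement records and helper function are invented for illustration.

```python
# Hypothetical reconstruction of the keyword screen described in note 1;
# PCIO's actual collection process may have differed.
IO_KEYWORDS = [
    "influence operations",
    "misinformation",
    "disinformation",
    "coordinated inauthentic behavior",
]

def mentions_influence_operations(text: str) -> bool:
    """Return True if an announcement mentions any influence-operations keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in IO_KEYWORDS)

# Illustrative records scraped from platform newsrooms, blogs, official
# social media profiles, and news coverage (field names are invented).
announcements = [
    {"platform": "Twitter", "date": "2020-05-11",
     "text": "Updating our approach to misleading information."},
    {"platform": "Facebook", "date": "2019-10-21",
     "text": "Removing more coordinated inauthentic behavior."},
]

database = [a for a in announcements if mentions_influence_operations(a["text"])]
```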

There has been an extraordinary surge in the number of publicly announced interventions to counter influence operations during the last two years, mostly from Facebook and Twitter. Redirection has become the most common intervention, followed by content labeling. Both techniques involve contextualizing or correcting potentially harmful content, a softer measure than taking down content or banning users who spread harmful content.

Unfortunately, announcements of new platform interventions have rarely addressed whether the interventions were tested for effectiveness. Though platforms likely test interventions before rolling them out, their silence on effectiveness is a lost opportunity to cultivate public trust. Publishing effectiveness data would also aid researchers studying countermeasures to influence operations.

Rise in Interventions

Intervention announcements trickled out from 2015 to 2018, followed by large growth in 2019 and 2020. The surging numbers most likely reflect a dual reality: platforms are implementing more counter–influence operations interventions, and they are announcing a higher proportion of the interventions that they implement. The number of announced takedowns has also risen substantially during the same period, according to Disinfodex, a new database of disclosures by platforms and reports from independent researchers.2 However, interventions have increased much more rapidly than takedowns.
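The comparison behind this claim amounts to a year-over-year tally, which can be sketched in a few lines. The snippet below assumes each record carries a date and a field distinguishing interventions from takedowns; the schema and sample records are invented for illustration.

```python
from collections import Counter

# Illustrative records combining the PCIO interventions database with a
# takedown source such as Disinfodex (the schema here is invented).
records = [
    {"date": "2019-03-01", "kind": "intervention"},
    {"date": "2020-04-15", "kind": "intervention"},
    {"date": "2020-10-02", "kind": "intervention"},
    {"date": "2020-06-11", "kind": "takedown"},
]

# Tally announcements by (year, kind) to compare growth rates.
counts = Counter((record["date"][:4], record["kind"]) for record in records)

for year in sorted({record["date"][:4] for record in records}):
    print(f"{year}: {counts[(year, 'intervention')]} interventions, "
          f"{counts[(year, 'takedown')]} takedowns")
```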

Several factors may be driving these trends. First, real-world events—especially COVID-19 and the 2020 U.S. election—have triggered new waves of influence operation campaigns through social media, inviting new responses from platforms. Second, outside stakeholders (including governments, the media, and users) have demanded more accountability and transparency from platforms. Third, malicious actors continue to develop new tactics—often in response to previous platform interventions or policy changes—forcing platforms to develop new countermeasures. Fourth, experts continue to learn more about how influence operations work, resulting in interventions being introduced to target newly understood aspects of influence operations. Fifth, platform interventions are increasingly seen (or portrayed) as effective and necessary complements to wholesale bans and takedowns.

All these reasons suggest that platform interventions will continue to proliferate rapidly. However, the current pace of near-exponential growth is probably not sustainable. Platforms will eventually reach their carrying capacity. The technological, monetary, and human resources required to test, monitor, and compare each intervention to others increase with the growth in interventions.3 Platforms may not have these resources at their disposal. Systematically integrating interventions into platforms and educating and encouraging users to adopt these interventions also pose limits to how many interventions platforms can feasibly introduce. In other words, the dizzying growth of interventions during 2019–2020 indicates a field that is still under development.

Differences Across Platforms

Facebook and Twitter announced far more interventions than any other platforms in the data set, together accounting for 54 percent of announcements, while the twelve other social media companies accounted for the remaining 46 percent.4

Though it remains unclear why Facebook and Twitter have announced so many more interventions than other platforms, several explanations are plausible. Facebook and Twitter have received greater public scrutiny and disclosed more influence operations than other platforms. They may therefore be more motivated to implement interventions, announce them, or both. (Some platforms may believe that announcing interventions could tip off bad actors or invite unwanted attention from activists and policymakers.) Additionally, not all platforms have the same resources; some can dedicate more time, staff, and money toward formulating, testing, and announcing interventions. Twitter and Facebook also have the most detailed published rules for what is allowed and disallowed on their platforms, according to forthcoming PCIO research.5 Finally, platforms with end-to-end encryption, like WhatsApp, may have fewer intervention options available because they cannot view message content.

Nature of Interventions

Platforms used redirection and content labeling far more than other intervention types (77 times out of 104 total interventions). Redirection consists of inviting users to access authoritative content in another location, either on or off the platform. For example, during the coronavirus pandemic, several platforms have redirected users to information from the World Health Organization and the U.S. Centers for Disease Control and Prevention. Content labeling, by contrast, exposes users directly to additional context or identifiers (like fact checks, advertisement funding disclosures, or article publication dates) without requiring a click. For example, in 2020 Twitter began labeling certain types of misinformation related to the U.S. election. Less frequently used interventions include teaching users to identify disinformation, enabling users to report misinformation, limiting the spread of misinformation, and improving user account security.
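One way to represent this taxonomy is to tag each announcement with one or more intervention types, as in the minimal sketch below. Note the assumption baked in: if a single announcement can carry multiple type tags, ninety-two announcements could plausibly yield 104 type-level entries. The type list and example record are illustrative, not PCIO's actual schema.

```python
from enum import Enum

class InterventionType(Enum):
    REDIRECTION = "redirection"            # point users to authoritative content
    CONTENT_LABELING = "content labeling"  # add context directly on the content
    USER_EDUCATION = "user education"      # teach users to identify disinformation
    USER_REPORTING = "user reporting"      # let users report misinformation
    SPREAD_LIMITING = "spread limiting"    # restrict how widely content travels
    ACCOUNT_SECURITY = "account security"  # improve user account security

# Assumption: a single announcement can carry multiple type tags, which is
# one way 92 announcements could yield 104 type-level entries.
example_announcement = {
    "platform": "Instagram",
    "date": "2020-10",
    "types": [InterventionType.CONTENT_LABELING, InterventionType.REDIRECTION],
}
```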

The relative prevalence of redirection and content labeling has increased over time, suggesting they have become tools of choice for platforms. Figure 4 shows growth in the three most popular interventions, with redirection and content labeling becoming more dominant since 2019. Redirection in particular experienced two periods of rapid growth: early 2020, when the coronavirus became a global pandemic, and late 2020, during the lead-up to the U.S. election. Redirection and content labeling have often been used in conjunction with one another. For instance, Instagram has labeled all posts related to the 2020 U.S. election and redirected users who clicked on the label to the platform's 2020 U.S. Voting Information Center.

The emergence of redirection and content labeling as dominant interventions has significant policy implications. These interventions tend to preserve user choice and autonomy; they allow sensitive content to continue circulating while offering counterspeech as a remedy. This approach can be less intrusive and heavy-handed than, for example, restricting shares or search results. Yet redirection and content labeling place a greater burden on users to manage influence operations threats. If users choose to disregard labels or not click on redirections, influence operations can spread. Given platforms' heavy use of redirection and content labeling, more research should focus on their effectiveness in countering influence operations.

Measuring Effectiveness

Measures of effectiveness were notably absent across the data set. When platforms announced new interventions, they rarely (8 percent of the time) stated whether or how these interventions had been tested for effectiveness prior to full-scale deployment. There was no apparent pattern among the few cases where platforms (Facebook, WhatsApp, and YouTube) did describe effectiveness research. Our review only looked at initial announcements, not subsequent public reports. Platforms do sometimes address the effectiveness of interventions in periodic lookbacks, but this reporting is not consistent and may come well after the fact. Ideally, platforms test major product features before deploying them. They likely do, which means platforms probably have more data on these interventions than they share publicly.
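What might such pre-deployment testing look like? One common approach, sketched below, is a simple A/B experiment: show a label to one group of users but not to a control group, then test whether a downstream behavior such as the reshare rate differs. The design and all numbers here are invented, not any platform's actual methodology.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Invented numbers: reshare counts for labeled (treatment) versus unlabeled
# (control) posts in a hypothetical pre-deployment experiment.
z, p = two_proportion_ztest(successes_a=412, n_a=10_000,   # labeled: 4.12% reshared
                            successes_b=519, n_b=10_000)   # control: 5.19% reshared
print(f"z = {z:.2f}, p = {p:.4f}")
```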

In any case, assessing the effectiveness of influence operations countermeasures is very difficult. That said, disclosing more information about the efficacy of interventions could help build public trust, aid researchers who study influence operations and platforms' actions, and help platforms focus efforts on interventions that work.

Conclusion

The growth in interventions is a sign of proactiveness. With increased attention to the effectiveness of interventions, expert understanding of influence operations and how to counter them will mature and stabilize over time, enabling platforms to focus on the most powerful and durable interventions.

View the Database

Notes

1 To differentiate counter–influence operations interventions from those focused on other problems, I searched platform announcements for certain keywords associated with influence operations (such as “influence operations,” “misinformation,” “disinformation,” and “coordinated inauthentic behavior”). I also searched platform announcements for mentions of key events susceptible to influence operations, such as COVID-19 and national elections. Data was collected between July and November 2020.

2 The average rate of takedown announcements across Facebook, Twitter, Google/YouTube, and Reddit has roughly doubled in the last two years. See “Disinfodex,” Berkman Klein Center at Harvard University and the Ethics and Governance of Artificial Intelligence Fund at The Miami Foundation, 2020, https://disinfodex.org/.

3 Testing and monitoring could be automated to reduce the resources required. However, ensuring the accuracy of such automation would itself require additional investment.

4 Facebook, Instagram, Messenger, and WhatsApp were treated as separate platforms in the data set, as were Google and YouTube. Most interventions were implemented on only one of these platforms; when they were applied to more than one, I recorded them separately for all relevant platforms in the data set.

5 Natalie Thompson, “Platform Policies Baseline,” Carnegie Endowment for International Peace, forthcoming 2021.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.