Measuring Influence Operations

Evidence-based policymaking depends on measurement. Yet we lack robust, evidence-based measures of how influence operations spread, what effects they have, and how well countermeasures work. Such measures are needed to support community resilience and appropriate policy interventions.

To address this gap, PCIO and the Empirical Studies of Conflict Project at Princeton University convened three working groups with more than 40 researchers from North America, Europe, and Latin America, producing six studies. The project culminated in a Measurements Symposium with more than 60 participants from across the research community, government, and philanthropies. Projects emerging from this initiative include:

  • Understanding How Influence Operations Across Platforms are Used to Attack Journalists and Hamper Democracies

    • João Guilheme Bastos dos Santos, Nina Santos, Caio Machado, Luiza Bandeira, Fernanda K. Martins, Jade Becari, Barbara Libório, Jamile Santana, Viktor Chagas, Renara Hirota, Felippe Mercurio

    The year 2020 was considered the most dangerous in Brazil’s recent history to be a professional journalist. The country’s media environment plummeted from “Open” to “Restricted”, and attacks on journalists, especially women journalists, have been a key factor in this decline. In addition to targeting minority groups, these influence operations against journalists have been characterized by their cross-platform nature, as perpetrators leveraged digital platform features to coordinate harassment and spread disinformation. This research sought to understand how online violence against journalists is fostered in Brazil, how women and non-white journalists are targeted, and how these operations benefit from different platform features. We used a mixed-methods approach that included semi-structured in-depth interviews with 13 Brazilian journalists who have suffered online violence and analysis of data collected from Twitter, YouTube, and WhatsApp. The data were studied with a combination of qualitative, network, and lexical analysis. Qualitative methods were used to verify and interpret the attacks in our sample; network analysis was used to build networks of Twitter hashtags and YouTube recommendations and identify the clusters of actors involved; and lexical analysis served to understand the different words and expressions used to attack journalists according to their gender and race. The interviews revealed a widespread perception that women and non-white journalists are targeted more frequently than their male and white counterparts, and that Twitter is the most problematic platform. Our data analysis confirmed these perceptions: among the five journalists most attacked on Twitter, four were women, including the most attacked journalist.
We also found that hashtags related to attacks on media outlets are used by the same actors who support President Jair Bolsonaro’s re-election campaign and criticize the Parliamentary Committee on the Pandemic, which investigates failures in the government’s handling of the crisis. Different vocabularies were employed to attack journalists, with the differences related particularly to the gender and race of the journalist attacked. Beyond links connecting Twitter and YouTube, the main convergence between the attacks appears to be the textual patterns in hostile comments found on both platforms.
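The hashtag-network step described above can be sketched in a few lines: hashtags that co-occur in the same tweets form weighted edges, and dense clusters of edges point to coordinated groups of actors. The snippet below is a minimal illustration with invented sample tweets, not the study’s actual pipeline or data.

```python
from collections import Counter
from itertools import combinations

def hashtag_cooccurrence(tweets):
    """Count how often pairs of hashtags appear in the same tweet.

    High-weight edges connect hashtags pushed by the same clusters
    of accounts, the starting point for a co-occurrence network.
    """
    edges = Counter()
    for text in tweets:
        # Deduplicate and normalize hashtags within each tweet.
        tags = sorted({w.lower() for w in text.split() if w.startswith("#")})
        for pair in combinations(tags, 2):
            edges[pair] += 1
    return edges

# Toy sample (invented for illustration):
sample = [
    "#GloboLixo attack #FakeNews",
    "#GloboLixo again #FakeNews today",
    "#Vacina update",
]
edges = hashtag_cooccurrence(sample)
print(edges[("#fakenews", "#globolixo")])  # 2
```

In practice the edge list would be loaded into a graph library and partitioned with a community-detection algorithm to surface the actor clusters the study describes.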

  • Applying Word Embeddings to Measure Valence in Information Operations Targeting Journalists in Brazil

    • David A. Broniatowski

    One goal of information operations is to change the overall information environment surrounding specific actors. For example, “trolling campaigns” seek to undermine the credibility of specific public figures, leading others to distrust them and intimidating those figures into silence. To accomplish these aims, information operations frequently make use of “trolls” – malicious online actors who direct verbal abuse at these figures. In Brazil in particular, allies of the country’s current president have been accused of operating a “hate cabinet” – a trolling operation that targets journalists who have alleged corruption by this politician and other members of his government. Leading approaches to detecting harmful speech, such as Google’s Perspective API, seek to identify specific messages with harmful content. While this approach is helpful in identifying content to downrank, flag, or remove, it is known to be brittle and may miss attempts to introduce more subtle biases into the discourse. Here, we aim to develop a measure of how targeted information operations seek to change the overall valence, or appraisal, of specific actors. Preliminary results suggest that known campaigns target female journalists more than male journalists, and that these campaigns may leave detectable traces in overall Twitter discourse.
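A common embedding-based way to score valence, in the spirit of the approach described above, is to project a word or context vector onto an axis defined by positive and negative seed words. The sketch below uses tiny hand-made 2-d vectors in place of real trained embeddings; the seed words and vectors are invented for illustration, and the paper’s actual method may differ.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def valence(word_vec, pos_vecs, neg_vecs):
    """Score a vector on a valence axis: mean similarity to positive
    seed words minus mean similarity to negative seed words."""
    pos = np.mean([cosine(word_vec, p) for p in pos_vecs])
    neg = np.mean([cosine(word_vec, n) for n in neg_vecs])
    return pos - neg

# Toy 2-d vectors standing in for real word embeddings:
honest = np.array([1.0, 0.1])    # positive seed
corrupt = np.array([-1.0, 0.2])  # negative seed
journalist_ctx = np.array([0.9, 0.3])  # context around a journalist's name

score = valence(journalist_ctx, [honest], [corrupt])
print(score > 0)  # True: this context leans toward the positive pole
```

Tracking how such scores shift over time for contexts mentioning specific journalists is one way campaign-driven changes in appraisal could surface at the discourse level.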

  • The COVID-19 Vaccination Battleship in the Comment Sections of Multiple Platforms: Analyzing Two Case Studies

    • Marcelo Sartori Locatelli, João Pedro Ribeiro Junho, Lorena Leão, Josemar Alves Caetano, Wagner Meira Jr, and Virgílio Almeida

    Social media platforms are central sites where disinformation, denialist narratives, confusion, and political disputes find fertile ground to grow and spread throughout society. User comments are an important component of political battles over critical societal issues, such as democracy and vaccination. In this article, we examined the collective effect of vaccine influence operations across three social media platforms: YouTube, Gab, and Instagram. The focus of our analysis is understanding the dynamics of the “online comment sections” of specific channels, videos, and posts on those platforms. Specifically, we analyzed aspects of the language of comments from both pro- and anti-vaccination groups, as well as cross-platform operations to amplify group messages. Finally, we monitored counter-influence operations to characterize the language of messages that were banned from the platforms. We present two case studies that investigate the behavior of anti-vaccine and pro-vaccine groups in two different contexts, represented by the use of social media platforms in the United States and Brazil.

  • Removal of Anti-Vaccine Content Impacts Social Media Discourse

    • Tamar Mitts, Nilima Pisharody, and Jacob N. Shapiro

    We study the impact of removing anti-vaccine content on social media activity. We follow 160 Facebook groups discussing COVID-19 vaccines from April 13, 2021, through September 13, 2021. Thirty-six anti-vaccine groups were removed during our study period. Using a stacked difference-in-differences design, we find that these removals had substantial impacts on the activity, on other platforms, of users engaging with the removed groups. In particular, Facebook’s removal of anti-vaccine groups led to a 10-33 percent increase in the rate of anti-vaccine rhetoric among users who had linked to the removed groups on Twitter, over the month after the removals. These results suggest that taking down anti-vaccine content from one platform can result in increased production of anti-vaccine content on other platforms by those most directly engaged with the removed content.
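The core of a difference-in-differences comparison like the one above is simple: take the before/after change for users tied to removed groups (treated) and subtract the before/after change for comparable users who were not (control). The numbers below are invented for illustration; the actual study uses a stacked design over many removal events, which this sketch does not reproduce.

```python
import pandas as pd

# Simulated cell means (invented numbers): average anti-vaccine tweet
# rates for treated vs. control users, before and after the removals.
df = pd.DataFrame({
    "treated": [1, 1, 0, 0],
    "post":    [0, 1, 0, 1],
    "rate":    [0.20, 0.30, 0.20, 0.22],
})

means = df.groupby(["treated", "post"])["rate"].mean()

# DiD estimate: (treated change) minus (control change).
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(round(did, 2))  # 0.08
```

A stacked design repeats this comparison once per removal event, aligning each event at its own removal date, and then pools the event-level estimates.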

  • Effects of the Post-January 6 Deplatforming on Social Media Discourse

    • Cody Buntain, Martin Innes, Tamar Mitts, and Jacob N. Shapiro

    We study the impact of the ‘Great Deplatforming’ following the January 6, 2021, insurrection on social media usage. Facebook, Twitter, and YouTube removed tens of thousands of accounts. We identify three key patterns. First, there was substantial intentional movement to alternative platforms, much of it announced on mainstream channels such as Twitter and Facebook. Second, the deplatforming triggered a sustained increase in interest in Gab, but much smaller changes for other platforms. Third, discourse on Gab shifted dramatically: it increased in volume, as one would expect given the increase in interest, but it also shifted to include more hate speech and more discussion of narratives around voter fraud and censorship of right-wing ideas.

  • Scaling-Up Interventions Against Misinformation on Social Media Using Targeted Ads

    • Hause Lin, Adam Berinsky, Dean Eckles, David Rand, and Gordon Pennycook

    Our project sets out to develop a paradigm for using online ads to test the efficacy of interventions against misinformation on social media. Past work has found that simple prompts or “nudges” that remind people about accuracy are sufficient to improve the quality of content that people share on social media. These prompts are effective because people are often distracted from even considering whether content is accurate before they choose to share it. We are testing this approach using targeted ads on Twitter to investigate whether messaging about accuracy is sufficient to increase the quality of content that people share. These experiments will provide evidence on the feasibility of using ad campaigns to scale up the testing of interventions on social media.
