Perspectives for Influence Operations Investigators

The field of influence operations investigations is growing rapidly, and researchers need a better grasp of best practices and standards. Here, experienced researchers offer insights on topics ranging from research methods and data collection to team development.

Published on October 25, 2022

Introduction: What It Takes to Investigate Influence Operations

Dean Jackson & Victoria Smith

As public urgency swells around online influence operations, professionals from sectors including academia, private industry, and the nonprofit space have rushed to fill gaps in capacity. They bring with them assumptions and approaches from diverse fields like cybersecurity, counterextremism, and offline investigations. As a result, the space is bustling, but it lacks consistent, widely articulated standards and best practices to guide and evaluate its work.

In a 2020 community survey by the Partnership for Countering Influence Operations (PCIO), a third of respondents noted the lack of shared standards as an important concern. PCIO’s Influence Operations Researchers’ Guild exists in part to address this issue. Investigative standards serve a dual purpose. First, they provide commonality: standards are widely followed practices for minimizing mistakes and improving results. Second, they represent expectations that, if not met, open the investigative process up to critique. For these reasons, a field with shared standards should be able to produce more reliable investigations and more readily identify flawed ones.

Because of the high level of public and policymaker interest in this topic, wrong or overblown conclusions carry significant risk of false alarms, botched policy, and wasted resources. If organic content is wrongfully labeled part of an operation, authentic individuals may be censored erroneously—with political consequences. In the realm of international affairs, incorrect attributions of online operations to foreign states could lead policymakers to pursue sanctions or other retaliatory actions under mistaken pretenses. In election contexts, incorrect accusations could shift public debate in advance of voting or damage trust in the results. The stakes are real and can be high. (The false identification of a suspect in the 2013 Boston Marathon bombing is an early example of the harm that can come from open-source investigations done poorly.)

The articles in this compendium represent a starting point for thinking about investigative standards and the investigative process more broadly. They cannot hope to cover every aspect of the community’s work, which can involve an overwhelming number of angles and a wide array of technical skills. Instead, they promote a spirit of adaptability, flexibility, and entrepreneurialism—attributes that are especially important in a quickly changing field.

Each article offers a perspective on the challenges different researchers have come up against, sheds light on how they approach their research, and offers guidance on navigating difficulties. The pieces may be especially valuable for independent investigators and those on small teams, whose members often act as ideas people while simultaneously bearing responsibility for gathering the requisite data, working out how best to analyze it, writing up their findings, and promoting the final product to their target audience. These essays offer relatively simple guidance on matters that are key to the job and the maturation of the field.

Looking across the articles in this compendium, four key themes stand out as valuable for fledgling investigators. First, investigators should know their strengths. The combination of skills in one’s possession should be the driving force behind the research questions they tackle. Learning new skills is important—but so is properly consolidating them. It is better to do one thing well than many things badly, and responsibly acknowledging limitations is a strength, not a weakness.

Second, investigators should develop an analytical mindset and keep asking questions such as: Does the data support the conclusion? Might there be other explanations? Is all the necessary information available, and if not, how might it be found? Knowledge of the available tools and techniques is very useful, but they cannot replace proper analytical processes.

Third, it is essential to understand the target audience for an investigation. Writing a news article is quite different from making recommendations to busy policymakers. Findings should be clear and accessible. Claims should be supported by the evidence presented. Terminology should be clearly defined and consistently used. The relevant methodology should be explained—as should its weaknesses. These simple measures help readers understand both the extent and limitations of an investigation.

Finally, criticism is normal, and investigators at all levels should learn from it. Given the lack of consensus around standards in this field, even respected researchers sometimes disagree, and it is normal for them to criticize one another’s work. This can be daunting for independent or fledgling researchers putting their work into the public domain, but it offers a chance to improve upon and learn from the experience of others.

These essays offer important advice and perspectives on a difficult area of study. Newcomers should take comfort in the observation that this work is hard for everyone. It is a cycle of experimentation and refinement, and no individual is the final authority on all its subject matter. Today’s novices may set the standards that future investigators aspire to meet.

About the Authors

Dean Jackson is project manager of the Influence Operations Researchers’ Guild, a component of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace.

Victoria Smith was a nonresident senior research analyst at the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace.

Methodological Considerations and Guidance

Eneken Tikk & Mika Kerttunen

Introduction

In a deteriorating international climate, influence operations span countless targets, platforms, and jurisdictions, which can complicate data collection and research. Often grouped with adjacent issues like cybersecurity, hybrid operations, and information interference, influence operations offer multiple entry points, angles, and levels of analysis. As an influence operations researcher, clearly defining and recording your methodology will provide you with a solid basis for your investigation. A sound methodology will not only allow you to see where the limits and gaps in your analysis might be but also bring transparency to your work, allowing other researchers to better understand and accept your conclusions.

Defining Your Research Question

Influence operations provide plenty of opportunity for research. Defining a research question early on in the process will help keep work focused and prevent the inquiry from getting sidetracked.

Identifying a research question starts with an assessment of an event or narrative being exploited by an influence operation. Think about whether the influence operation has emerged in a new setting, whether it is spreading, whether some actors’ techniques or targets have changed or continue to constitute a pattern, or whether particular or new target audiences appear more vulnerable to certain types of operations. Throughout your investigation, it is also important to keep questioning whether other explanations and conclusions about the same or similar operations are still relevant or need updating. You could even borrow a research question that another researcher has used in a different context or time and apply it to another.

Single concepts, terms, or events do not constitute research questions in isolation. Elections, for example, are not a research question. Elections are, however, a legitimate area of interest where a number of interesting research angles and questions can be identified. Potential research questions emerge when the area of interest (in this case elections) is combined with another variable—for example, a specific demographic such as age, political leaning, ethnicity, or gender, as well as geographic location or socioeconomic status. For example, a research question could ask whether specific tactics or narratives are being used to target a particular demographic group.

A good research question makes a compelling case for the research project. Seek to understand your audience and try to identify what real-world problems need to be solved or what decisions key actors (for example, national authorities or private companies) need to make so you can better frame your research question. Asking unexpected and more complex questions differentiates breakthrough research from the ordinary and can help you to attract funding for your work or identify other researchers to potentially collaborate with.

How you frame your research question can significantly affect its meaning and therefore the scope of your research. For example, there is a difference between examining a country’s foreign policy through its influence operations and researching a state’s influence operations as a function of its foreign policy: the first question is interested in policy, whereas the second question is preoccupied with operations. In scientific language, these are the independent variables (which are often taken as given) and the dependent variables (which are the objects of study). Be aware of both the focus of your research and the independent variables you can explore in your investigation.

Identifying relevant research problems, formulating creative and useful research questions, and knowing how best to try to answer them comes with practice and experience. Learn from academic journals, professional papers, and public documents, particularly those that clearly articulate their methodologies. Understanding existing research will help you to avoid replicating research questions that have already been explored and allow you to identify research areas that have been largely overlooked and questions that have not been asked (or adequately answered).

Finally, think about your strengths, opportunities, and weaknesses. What and how much data can you responsibly collect, analyze, and store? What are your investigative strengths? Knowing this information will allow you to identify the questions you are best placed to answer.

Common Pitfalls

Some common pitfalls influence operations researchers can fall into include:

Biased or unexamined assumptions: We all have internal biases that we take as a given or accept as fact. As far as possible, you need to recognize and acknowledge them as part of your methodological process. While previous experience is valuable, it does not necessarily follow that the findings of a previous investigation will be applicable to a new one. For example, don’t assume that the role a specific social media platform plays in one context will be the same in a different context, as platforms play different roles across geographies, cultures, and demographic groups. It can also be easy to fall into the trap of assuming that influence operations are only carried out by or between adversaries. Try to be aware of how the actors you perceive as both friendly and malign are using influence operations. (This issue, obviously, opens up many interesting and perhaps neglected research questions!)

Narrow or limited data availability: Publicly available data is often limited. This may result in limited freedom of inquiry or recycling of established data. However, the lack of available data can also inspire methods of finding new sources of data ethically or identifying new ways of asking questions.

Lack of evidence: Instead of providing hard, observable, or verifiable evidence, politically and doctrinally oriented research can rely too heavily on an authoritative opinion of a president or commanding general. Researchers may also attach too much value to a single source. Try to ensure your analysis is grounded in evidence.

Lack of agreed definitions: Poor community consensus on terminology used in the field of influence operations can create confusion. For example, different sources, authors, and states use and understand the notions of influence or information operations, foreign interference, or cybersecurity and cyber resilience differently. Be sure to define the terms you use in your own research and be consistent in how you use them.

Inconsistent levels of analysis: Make sure you do not jump between different levels of analysis and discussion unless this follows from your chosen methodology and research question. Analyzing the technical features of a government’s recent influence operation hardly allows drawing conclusions about that government’s future political intentions. Exercise caution when attributing strategic or political importance to technical or tactical findings and avoid building a policy or strategy argument on technical analysis.

Identifying relevant and real-world-based research problems in the field of influence operations should not be difficult. Moving from a research problem to a clear and coherent methodology will require careful consideration: What questions should I ask and answer to make my point? Which methods will help to solve the problem? Which sources provide me with relevant data and information? The responsibility of the researcher is to provide the reader with a thrilling set of questions, a clear methodology, thorough argumentation, and a compelling solution.

About the Authors

Eneken Tikk is the executive producer at the Cyber Policy Institute and an adjunct faculty member of the Erik Castrén Institute for International Law and Human Rights, University of Helsinki.

Mika Kerttunen is the director of the Cyber Policy Institute, an adjunct professor in military strategy at the Finnish National Defence University, and a member of the board of the Swedish Defence University. He is also a nonresident fellow at Stiftung Wissenschaft und Politik.

Managing a Complex Investigative Process

Jacob Wallis

My own understanding of working on the analysis of online influence operations has undergone a significant evolution as the scale of these activities has grown. As I completed my PhD in 2014, I was using combinations of web scraping, network analysis, and qualitative content analysis to examine online political campaigns and the exploitation of social media by extremist groups. By 2020, following a stint working for the Australian government as a national security analyst, I was leading cross-functional teams of linguists, political analysts, data scientists, and machine learning specialists in the investigation of at-scale influence operations by significant state actors like China. The analysis of influence operations has had to scale up, adopting continually changing approaches like those we use at the Australian Strategic Policy Institute (ASPI).

There are a range of actors with diverse motivations—geostrategic, political, and financial—that are willing to manipulate the audiences available across social media platforms. Russia’s interference in the 2016 U.S. presidential election demonstrated the utility of this low-cost mode of information warfare, and other actors have also discovered that there are gains to be made from the heightened political sentiment that arises during election campaigns. Entrepreneurs at the base of the digital economy from an obscure town called Veles, in what is now the Republic of North Macedonia, turned the polarized sentiment in the 2016 U.S. election into a business model, creating an online ecosystem of Facebook groups and content-farm websites covering U.S. politics.

Australia too has had the experience of its election being turned into a business opportunity. During the 2019 federal election, similar content farm operations based in Albania, Kosovo, and the Republic of North Macedonia ran large Facebook pages distributing nationalist and often Islamophobic content as a way of building politically motivated audiences. These actors seeded those groups with links to a content farm, generating advertising revenue. A similar operation run from Israel used the same business model, seeding Facebook pages with far-right and Islamophobic-themed content to drive audience engagement and click-throughs to advertising revenue–generating content-farm websites. These examples demonstrate that a range of actors—from authoritarian states to financially motivated scammers—are actively exploiting the easy reach into the online publics that social media provides. The analysis of influence operations and disinformation campaigns that exploit social media audiences is itself an emerging enterprise as these actors and their activities proliferate.

For operations that take place at scale, this analysis blends features of intelligence analysis, investigative journalism, cybersecurity, open-source research, and data science. And there are many actors that have the willingness, capability, and intent to deploy influence operations at scale. Various forms of interference and subversion have long been used by international actors to destabilize adversaries, but research from ASPI shows a steady trajectory of growth in cyber-enabled foreign interference from 2010 to 2020. The Oxford Internet Institute’s 2020 Global Inventory of Organized Social Media Manipulation highlights that this is a growth industry, with $60 million spent on privatized, disinformation-for-hire services since 2009.

The automation of influence operations radically increases the scale at which actors can disseminate destabilizing disinformation. Advances in the technologies that underpin at-scale disinformation have emerged at a time of geostrategic upheaval. Authoritarian states exert political control over their own populations through information control. Russia’s meddling in the 2016 U.S. presidential election demonstrated that the penetration of social media creates an attack vector that states can exploit in their adversaries. China has watched closely and increasingly interweaves disinformation with coercive diplomacy. As democratic states characterized by their openness struggle to calibrate their responses to this threat, authoritarian regimes continue to manipulate their own domestic populations and seek strategic advantage by deploying these same techniques against their adversaries.

At ASPI we have analyzed two large-scale influence operations data sets that Twitter has attributed to the Chinese state, first in August 2019 and again in June 2020. These are complex data at a scale that is beyond what an individual human analyst can parse manually. Analyzing data at this scale requires quantitative and qualitative approaches. We have also undertaken a number of smaller-scale studies that use similar mixes of quantitative and qualitative methods while working from more limited sets of data.

In August 2019, Twitter released its first influence operations data set linked to the Chinese government into the public domain through its influence operations archive. At ASPI we pulled together a team of three—my former colleagues Tom Uren and Elise Thomas, plus myself—with skills in statistical analysis, online investigations, and open-source intelligence, respectively, to work on the data set.

Our approach was experimental in that we had to work out how best to analyze this data, building on the skills available across our small team. We knew that we would need quantitative approaches to analyzing this scale of data (more than 125 gigabytes) married with qualitative analysis that could provide nuance to the rough cuts that our data mining could make. We used the R statistical package to undertake multiple iterative cycles of data mining. Each cycle of data mining was followed by a phase of qualitative analysis to find further leads into the data set of 3.6 million tweets. Through these multiple cycles, we found meaningful ways into the data through the identification of entities, individuals, and events. Each of these categories could then be used to slice the data into meaningful subsets for close qualitative analysis. This process created access points to the data. We also undertook network analysis—using the social network analysis package Gephi—to model the relationships between the 940 accounts that made up the data set.
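
To make this workflow concrete, the following is a minimal R sketch of one slice-and-export cycle of the kind described above. It assumes the archive has been exported to CSV; the file name and the column names (tweetid, userid, tweet_time, tweet_text, retweet_userid) are illustrative rather than Twitter's exact release schema.

```r
# Minimal sketch of one data-mining cycle. File and column names are
# illustrative, not Twitter's exact schema.
library(dplyr)
library(igraph)

tweets <- readr::read_csv("china_082019_tweets.csv")  # hypothetical export

# Slice the corpus around an entity surfaced in a previous qualitative pass
# so an analyst can read that subset closely.
guo_subset <- tweets %>%
  filter(grepl("Guo Wengui", tweet_text, ignore.case = TRUE)) %>%
  select(tweetid, userid, tweet_time, tweet_text)

# Build a weighted retweet edge list and export it for network modeling in Gephi.
edges <- tweets %>%
  filter(!is.na(retweet_userid)) %>%
  count(userid, retweet_userid, name = "weight")

g <- graph_from_data_frame(edges, directed = TRUE)
write_graph(g, "retweet_network.graphml", format = "graphml")
```

Each qualitative pass suggests new entities, hashtags, or accounts to filter on, and the exported GraphML file can be opened directly in Gephi for network modeling.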

We drilled into a range of behavioral traits that we could identify from the data. We modeled the self-reported geolocation of the accounts as well as the languages they used over time. These categories of data were incredibly useful in establishing what was happening within the network of accounts. The cross-section of languages (such as Indonesian, Portuguese, Spanish, and English) used by the accounts in the data set over time paired with the variation in the content suggested that many accounts had originally been used for spam marketing targeting multiple audiences before ultimately being repurposed for a Chinese government–run influence operation. For example, networks of Twitter accounts that had previously been used to market Indonesian IT support or adult entertainment services suddenly, in coordination, switched language and topic to post messages in Chinese about the Hong Kong protests.

Our hypothesis was that these accounts had been purchased on the influence-for-hire marketplace by a proxy acting on behalf of the Chinese government. This theory was supported by Twitter’s attribution language, which noted that a subset of the accounts had been run from “unblocked IP addresses originating in mainland China.” The population in mainland China is blocked from accessing Twitter by the Chinese government, so unblocked access could be an indicator of government collusion. By examining the posting patterns within the data and comparing these to time zones, we found that despite the diverse self-reported account locations across the data, the posting patterns mapped to working hours in Beijing’s time zone. These were not the patterns that we would expect from a random set of Twitter accounts from all over the world and provide another indicator of coordination.
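
The two behavioral checks described above (language mix over time and posting hours converted to Beijing time) can be sketched roughly as follows; the column names are again assumptions, and neither signal is conclusive on its own.

```r
# Behavioral checks on an archived account network. Column names
# (tweet_time, tweet_language, userid) are assumptions, not Twitter's
# published schema.
library(dplyr)
library(lubridate)

tweets <- readr::read_csv("china_082019_tweets.csv")  # hypothetical export

# Language mix by month: a sudden, coordinated switch of language and topic
# is one indicator that accounts were repurposed.
lang_by_month <- tweets %>%
  mutate(month = floor_date(as_datetime(tweet_time), "month")) %>%
  count(month, tweet_language)

# Posting hours converted to Beijing time: activity concentrated in weekday
# office hours is another (weak on its own) indicator of a professionalized
# operation rather than a random global set of accounts.
posting_hours <- tweets %>%
  mutate(ts = with_tz(as_datetime(tweet_time), "Asia/Shanghai"),
         weekday = wday(ts, label = TRUE),
         hour_of_day = hour(ts)) %>%
  count(weekday, hour_of_day)
```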

We published our findings at ASPI in “Tweeting Through the Great Firewall,” and we learned a great deal about how to work with the complex data sets that Twitter makes available through its influence operations archive. These lessons would prove useful when analyzing the second data set. In June 2020 we worked directly with Twitter to analyze an influence operations data set linked to the Chinese government. This time we had access to the data set prior to public release. But there was a catch: we had nine days to analyze more than 30 gigabytes of multiformat data. Whereas the 2019 data set comprised 940 accounts, this June 2020 data set included 23,750 accounts. Twitter had identified an additional 150,000 hacked or stolen accounts as forming part of the campaign, but they were not publicly released for privacy reasons, since they had previously belonged to authentic social media users. We had been tracking this campaign independently, so we had a sense of this scale and expected that there would be some cross-platform data flow. We knew that, given the intense time pressure, we would have to automate the processes that we had used previously.

We pulled together a bigger team—myself, Uren, and Thomas, plus my colleagues Albert Zhang, Samantha Hoffman, Lin Li, Alexandra Pascoe, and Danielle Cave—possessing expertise in data mining and data science, in open source intelligence and online investigations, in Mandarin and Cantonese, in international relations, and in the Chinese Communist Party’s approach to state security. We also contracted a machine-learning specialist to assist in automating the data-mining process where possible.

When the data were eventually shared with us, there was another layer of complexity: the meaningful textual content was embedded within images as memes containing screeds of Chinese-language text. We therefore had to introduce a different first step: using optical character recognition to extract text from 350,000 images for autotranslation. From there we analyzed the data using a process consciously modeled on the approach we had used with the 2019 data set. We extracted entities from the text data so we could identify the people, organizations, countries, and cities that were being discussed within the text. We used sentiment analysis to model the emotional valence around these entities within the data. We undertook a process of topic modeling to broadly categorize subsets of the conversations within the data. These steps allowed us to slice the data into component chunks for human analysis.
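
A rough illustration of that pipeline shape in R, using the open-source tesseract and topicmodels packages, might look like the sketch below. It is illustrative only, not the tooling actually used for this study; the image folder, the `translated` placeholder, and the choice of ten topics are all assumptions.

```r
# Sketch of the image-to-text step and a coarse topic model used to carve
# the corpus into reviewable chunks. Illustrative only.
library(tesseract)
library(tm)
library(topicmodels)

# OCR each downloaded meme image with a Simplified Chinese model
# (run tesseract_download("chi_sim") once to fetch the language data).
chi <- tesseract("chi_sim")
image_files <- list.files("images", full.names = TRUE)  # hypothetical folder
extracted <- vapply(image_files, function(f) ocr(f, engine = chi), character(1))

# Placeholder: in practice the extracted text would be machine-translated
# into English before modeling.
translated <- extracted

# Build a document-term matrix and fit a coarse topic model; k = 10 is an
# arbitrary starting point, and the topics are entry points for human review.
corpus <- VCorpus(VectorSource(translated))
dtm <- DocumentTermMatrix(corpus, control = list(removePunctuation = TRUE,
                                                 stopwords = TRUE))
lda <- LDA(dtm, k = 10)
terms(lda, 8)  # top terms per topic
```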

In this instance we had to contract a local AI company (Addaxis AI) to supplement our capacity. We have a small (yet growing!) team. Data scientists are in high demand across industries. As a result, it can be challenging to hire them in the not-for-profit sector. We’ve been fortunate to grow in-house talent and have developed some of our own machine learning tools, but we still find that we have to turn to industry to enhance our capabilities. Supplementing our capacity in this way has been essential to delivering disinformation analysis that can align with the tight time frames demanded by our industry partners. It also allows our methodological approach to evolve as the threat landscape does. For example, we have had to adapt rapidly to shifts in the channels that threat actors use to distribute disinformation—from the analysis of text to image to video—as threat actors shift their distribution patterns.

As before, we compared the self-reported locations of the 23,750 accounts and the languages used over time. We again noted that the accounts had posted in multiple languages but switched to Cantonese at scale in late 2019. We identified large networks of Russian and Bangladeshi accounts within the data that had again most likely been purchased on the influence-for-hire shadow marketplace. The posting pattern again mapped cleanly to weekday working hours in Beijing’s time zone.

We extrapolated from the data that Twitter had provided for analysis, identifying linked accounts and networks. Through this process we identified that the operation was ongoing and persistent, generating new accounts. The campaign was pivoting away from the topics that we expected from an influence operation run by assets of the Chinese state—the Hong Kong protests, the Taiwanese presidential election, the coronavirus, and the exiled Chinese billionaire Guo Wengui’s relationship with Steve Bannon, for example—to focus on domestic protests in the United States following the murder of George Floyd. This was a significant finding as it was the first time—as far as we could judge with high confidence from open-source data—that a Chinese government–linked online influence operation had targeted U.S. domestic politics.

In this instance the entrepreneurial pivot by the campaign’s operators was apparently designed to create the perception of moral equivalence between the U.S. government’s response to domestic protest and the Chinese government’s suppression of protest in Hong Kong. Efforts to create a moral equivalency between the policies of the U.S. and Chinese governments have become a consistent element of the Chinese government’s diplomatic line, state media messaging, and covert influence operations. Three processes—extrapolating from the Twitter data set, identifying the flow of the campaign across platforms, and recognizing its tactics, techniques, and procedures—were all essential in capturing this significant finding. Given that this operation had emerged from the same networks that Twitter identified in 2019, we published this research at ASPI as “Retweeting Through the Great Firewall.”

The marketplace for disinformation analysis is competitive, so research that aims to have a genuine, positive impact on policy has to be clearly visible to policymakers. We developed a communications strategy that would support the release of our research. Other projects at our center had driven high impact by partnering with high-profile media outlets. We adopted this strategy, coordinating the report’s release with the public release of the data set through Twitter’s influence operations archive and in an exclusive reporting agreement with the Washington Post. Each of us on the research team developed extensive Twitter threads to provide key insights on our analysis of the data. Our flurry of activity on Twitter produced another round of media coverage that further amplified the reach of our research. But media partnerships are not the only way to reach audiences. Many disinformation analysts use online publishing platforms like Medium to reach an online audience for their analysis. We have also consistently found that professional networks are important channels into policymaking environments, both in government and industry, so posting research output to targeted group messaging chats offers another vehicle for reaching policy-relevant audiences.

This communications strategy ensured that public conversations around the release of the data were well contextualized. We were conscious that the release of the data, along with our analysis, would inform diplomatic statements, and we wanted to ensure that policymakers’ and diplomats’ understandings of the operation would come from an evidence-based position.

We continue to use these approaches to analyze influence operations. But we also continue to innovate as the landscape evolves. For our investigation of disinformation about China’s Xinjiang region and the role of U.S. social media companies, my colleagues Zhang and Zoe Meers and I tracked how the Chinese Communist Party’s officials were contesting the narrative around the party’s human rights abuses in Xinjiang. Zhang, the lead researcher for the project, used CrowdTangle to track Facebook data, Twint to harvest data from Twitter, RStudio to analyze the data, and Gephi to conduct network analysis, all to explore relational patterns across a network of suspicious social media accounts. He eventually identified a Xinjiang-based company that was running YouTube channels delivering international-facing disinformation about the region. Zhang’s investigation went so far as to identify the tender documentation that linked the company to the regional government. This was a powerful piece of the investigation in that it created a chain of evidence that enabled us to make a high-confidence attribution connecting the social media activity to the Chinese government.

Challenges remain for those of us working outside of government and industry on the analysis of influence operations. There is no clear business model to fund this work, yet the costs are significant in terms of building the capability to deliver strong research. Funding tends to be ad hoc, creating sustainability challenges in building infrastructure and staff expertise that can endure short-term funding cycles. In our work at ASPI, we would not have been able to build our capability without our partnerships with industry, civil society, and government. These partnerships represent the most significant defense that democratic societies have against influence operations by authoritarian regimes. The willingness of industry to share data with trusted partners, of government to place trust in independent research undertaken in the public domain, and of civil society to share research findings with the public is the first line of defense against manipulation of the information environment.

Given the cross-cutting nature of influence operations, partnerships will continue to be of fundamental importance. At ASPI we work closely with industry partners, but we also recognize the value of a broad structure of partnerships and supportive relationships. We recognize the challenges that our industry partners face in assessing, analyzing, and countering malign influence operations. We also recognize the valuable work of other counterdisinformation initiatives such as EUvsDisinfo, the European External Action Service’s program focused primarily on countering Russian influence operations targeting EU member states. In fact, we have drawn from the EUvsDisinfo database to track Russian COVID-19 disinformation and its reach into Australian conspiracy-themed Facebook groups. And the importance of partnerships—of sharing data, validating research, publicizing findings, and building cross-cutting networks in defense of the information environment—is exactly why ASPI is involved in the Carnegie Endowment for International Peace’s Partnership for Countering Influence Operations.

About the Author

Jacob Wallis is the head of program, disinformation and information operations at the Australian Strategic Policy Institute’s International Cyber Policy Centre.

Investigating Influence Operations by Twitter Integrity

Patrick Conlon, William Nuland, Kanishk Karan

We are observing a shifting trend in state-affiliated influence operations, where tactics that look and feel more like spam are in more frequent use, both as a method of dominating the information environment and of obfuscating true sponsorship. This trend is making it more challenging for practitioners to serve the public interest and provide visibility into campaigns with likely state influence while simultaneously satisfying the hard attribution standards that the influence operations field has borrowed from cyber threat intelligence, standards that frequently underpin how decisions on disclosures are made. To address this dynamic, new and established investigators alike should conduct more focused research on a broader array of what we call “platform manipulation” in order to describe the impact and to work toward countering influence operations.

Spam: Then and Now

The term “spam” originated on Usenet forums and in early efforts to manage user experience in an email context—it described the tactic of sending indiscriminate messages, many of which were scams or advertisements, to a large volume of users on platforms that look very different from modern social media services. Spamming today also occurs on social media platforms, but it occurs for a wider variety of purposes and leverages the full scope of the products and functions platform services provide. These range from purely financial motivations (such as traditional spam, perhaps) to the pursuit of political and social influence. It’s important to realize that spam has evolved as the internet has evolved—spam isn’t just about fraud; sometimes, social media spam is used to hide more deliberate, malicious activity in noise.

Twitter’s policies build on the notion that spam has evolved and introduce the umbrella term “platform manipulation.” Thinking of spam in this context places it alongside a larger set of behaviors “intended to artificially amplify or suppress information . . . that manipulates or disrupts people’s experience on Twitter.” Platform manipulation can come from authentic or inauthentic sources, and it can manifest as bulk or aggressive activity or more targeted activity. The key factor is the attempt to manipulate or disrupt the experience of users to drive traffic or attention to unrelated accounts, products, services, or initiatives. Considering spam in the wider context of platform manipulation suggests it is not merely an annoyance or noise but instead is a potential and even common tactic of influence operations. This is how Twitter understands spam and how its policies encourage others to think about it.

Influence Operations Using Spam Tactics

Influence operations at their core are about manipulating the relationship between people and their information environment in order to affect real-world outcomes. Some are highly targeted, while others are noisy and opportunistic. Most public reporting on high-profile influence operations focuses on those targeted, carefully curated campaigns, however, which creates the impression that persistence and specific, often narrow objectives are common features. That is, most of the publicly available studies to date are narrowly concentrated on highly targeted campaigns with a political focus. However, sometimes the most powerful method of influencing the relationship of internet users to the information around them may be through the volume, velocity, and timing of content delivery—features most commonly considered to be spam tactics. These deserve more attention.

Political spam can be very effective at shaping an information environment, and in some online communities, high-volume, low-quality political content is a common feature of a person’s experience with media in general. Venezuela’s online environment is a good example. Public reporting on the use of social media by government ministries in Venezuela presents a clear picture of an online political space where government-aligned narratives amplified by inauthentic political spam are key features of the information ecosystem.

Unlike the more conventional influence operations that public reporting tends to highlight, spam in Venezuela includes mentions, retweets, and hashtags that amplify core government accounts and government-friendly narratives and are not solely the result of government operatives carefully tailoring influence; they are not intended to be deceptive. The system is designed to be noisily integrated into the lives of common citizens. Financial incentives and rewards encourage people to play an active role using their real accounts. The resultant volume of activity equates to an influence operation, but instead of showing classic influence operation hallmarks like identity and location obfuscation, inauthentic coordination, and manipulation, the behavior is overt and is a fixture of how citizens interface with politics and political institutions online. Technical sophistication may be low, but participation appears high. Venezuela is not the only country in the world where this dynamic is present, but understanding how influence operations manifest there might help platforms get closer to tailoring interventions proportionate to threats as they present themselves. Not all influence campaigns are targeted—the case in Venezuela helps illustrate that.

Though the effects of political spam are not yet well understood for every geography, high-volume activity tends to be most effective, and a more common hallmark of influence operations, when the actor or principal behind it has considerably more resources than the general population. In these examples, the raw volume of activity generated by an operation can far outweigh the baseline authentic civic activity in an ecosystem. For example, Meta reported an instance of this in the context of the February 2021 protest movement that occurred throughout Russia calling for the release of detained political opposition figure Alexei Navalny. During this operation, actors working in support of Russian President Vladimir Putin attempted to flood threads and discussion groups planning protests or expressing support for them with negative messaging and irrelevant content, effectively adopting well-understood spam tactics to drown out pro-Navalny messages and to generate confusion. Though observers may think of Russian influence operations as highly targeted, the use of spam in this context was an important reminder that different circumstances do precipitate different adversarial tactics. Similarly, during the Hong Kong protests in mid-2019, the “Spamouflage Dragon” network (first named so by Graphika), backed by the People’s Republic of China (PRC), produced content attacking the protesters that, for a time, flooded all hashtags associated with Hong Kong and the protests (even #HongKong and #香港) before Twitter was able to detect it and take remediation measures.

This type of activity, constituting influence via political spam tactics, is easy to see, difficult to attribute, and can fundamentally shape an information environment. As such tactics come into more frequent use, they should change the way professionals think about countering influence operations and the centrality of attribution to the counter–influence operations practice. There’s a basic calculus at play—practitioners want to stop abuse and protect ecosystem health in an environment where the varieties of abuse are expanding; attribution with confidence is both harder and more time consuming; and knowing who’s behind a campaign of manipulation is less helpful toward practitioners’ objectives than identifying how the campaign is occurring and how to stop it. As an extension, researchers should direct their attention toward addressing the expanding tactics of manipulation and take a more expansive view of what constitutes an influence operation. Confident attribution can’t take precedence over ecosystem health. From here, the imperatives of influence operations research and reporting also shift.

Spam Tactics in the Context of Influence Operations

Based on a review of Twitter’s own disclosure program, influence operations that make use of spam tactics do appear to be increasing in frequency. Further, they continue to evolve in such a way and at a pace that suggests a need to rethink both the importance of attribution and how it is done. The following sections describe these spam dynamics and why they deserve both attention and inclusion into attribution approaches.

The Question of Timing

Studying and describing timing in spam-rich operations is critical in a very different way than it is for influence operations that are longer in duration and more persistent. Operations using spam tactics as a method of manipulation often occur in short bursts and appear intended either to overwhelm an information space or to drown out, distort, and subvert organic narratives coming from authentic users. In the practice of attribution or, more simply, proving that the behavior is an influence operation rather than just scaled spam, it may be more useful to examine the indicators of concurrence (in account registrations, posting behavior, or thematic similarities) to show coordination and/or command and control than to search for technical connections to a known threat actor or group. Contrast this with the attribution approach used to describe operations occurring in longer-term strategic contexts and time horizons. There, industry analysis borrows from traditional cyber intelligence tradecraft and seeks to identify attributable adversaries operating along a kind of influence operations kill chain (in which bad actors research their targets, build an audience for their narratives, establish durable personas, and then distribute content). But in short-burst operations, kill chain abuse is less identifiable, and the classic attribution approach is less viable. However, allowing for an approach to attribution that considers timing as a (lower-confidence) component in combination with other observable pieces of evidence places researchers on much better footing to make the right investments proportionate to the harm being done in specific information environments.
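
As an illustration, one simple concurrence check is to look for bursts in account registration dates, as in the hedged R sketch below; the data frame, column name, and threshold are hypothetical, and a spike is a lead for further analysis rather than proof of coordination.

```r
# Concurrence check on account registrations. The file, column name, and
# threshold are hypothetical; spikes are leads, not proof of coordination.
library(dplyr)

accounts <- readr::read_csv("accounts.csv")  # hypothetical export

registration_bursts <- accounts %>%
  mutate(created = as.Date(account_creation_date)) %>%
  count(created, name = "n_created") %>%
  mutate(burst = n_created > 10 * median(n_created))  # arbitrary heuristic

filter(registration_bursts, burst)
```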

Copypasta

The use of duplicated or templated text is a common abuse tactic that analysts have historically attributed to botnets, and many casual observers still do. However, this tactic has become more common in influence operations—including among authentic users of social media platforms (or combinations of authentic and inauthentic users). This so-called copypasta (from the phrase “copy-paste”) is often found in the context of politically motivated campaigns where dominating the narrative by faking widespread support and drowning out competing voices is a core objective; when paired with other evidence that an influence operation is present, it’s a tactic that incrementally builds toward a more conclusive picture.
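
A basic way to surface copypasta candidates is to normalize post text and then look for identical or near-identical messages posted by many distinct accounts, as in the R sketch below; the column names, thresholds, and similarity method are illustrative choices rather than an established standard.

```r
# Copypasta check: normalize text, then look for identical or near-identical
# posts from many distinct accounts. Column names and thresholds are illustrative.
library(dplyr)
library(stringdist)

tweets <- readr::read_csv("tweets.csv")  # hypothetical export

normalized <- tweets %>%
  mutate(norm_text = tolower(gsub("https?://\\S+|[[:punct:]]", "",
                                  tweet_text, perl = TRUE)))

# Exact copypasta: the same normalized text posted by many different accounts.
copypasta_candidates <- normalized %>%
  group_by(norm_text) %>%
  summarise(n_posts = n(), n_accounts = n_distinct(userid)) %>%
  filter(n_accounts >= 20)

# Templated variants: small substitutions around one candidate text, caught
# with q-gram cosine similarity rather than exact matching.
template <- copypasta_candidates$norm_text[1]
near_matches <- normalized %>%
  mutate(sim = stringsim(norm_text, template, method = "cosine", q = 3)) %>%
  filter(sim > 0.9)
```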

However, copypasta alone rarely says much about attribution: the organizers of copypasta campaigns often coordinate their operations outside of what is visible to platforms, leaving investigators either with few attributable leads to pursue or guessing at the authenticity of the campaign itself. Further, copypasta that is amplified by influential users or genuine activists can muddy the provenance of an in-flight operation. Once this second-wave amplification occurs, it becomes far more difficult to distinguish between influence operations and what might be considered “truly held beliefs at scale.” Analysts have observed this dynamic during copypasta campaigns in many countries, including in India.

The enforcement challenges posed by copypasta are thorny when other context is missing, though, like when campaigns are planned and coordinated outside of the platform or platforms. This increases the need for data-sharing and cross-disciplinary analysis from a wider community of researchers both in and out of the tech sector proper. It’s difficult to understand what transpired in a high velocity, events-based copypasta campaign without considering evidence beyond hard technical indicators. For example, during Twitter’s investigation into an apparent copypasta campaign highlighting the so-called benefits of the Chinese Communist Party’s (CCP) policies toward the Uighur population in the Xinjiang region, our investigative team relied on both analysis of the copypasta text and other account information to establish a link to previously identified CCP activity. This is one reason why Twitter invests in expanded transparency around influence operations: to facilitate a broader community understanding of tactics and harms.

The Varying Uses of Fake Engagement

Campaigns employing copypasta often leverage fake engagement to game ranking and add another layer to the veneer of support for a theme or narrative. Twitter’s Trust and Safety team defines fake engagement as the intentional and inauthentic amplification of account interactions through inorganic means. What fake engagement looks like at the platform level is a function of how the specific service operates, but bought or inauthentic likes, follows, and shares from fake accounts are relatively common features that vary somewhat across platforms. Once deployed, fake engagement can provide an appearance of grassroots legitimacy and authenticity in an otherwise fabricated or hijacked narrative (a process sometimes called “astroturfing”). As in the case of the Venezuelan government-driven political spam, it can serve to artificially amplify strategic narratives that may not otherwise achieve strong distribution.

Attributing fake engagement to specific service providers, much less to states themselves, is extremely challenging and rare. State-aligned campaigns appear to source fake engagement from a rotating set of service providers that can be found all over the world, sometimes work across national borders, are expanding in number, and offer a variety of tactics. For instance, the PRC-backed Spamouflage Dragon campaign has been observed to make use of preexisting networks of retweets and likes in the Middle East and Latin America, networks that otherwise have nothing to do with Spamouflage Dragon or China. However, determining that engagement on a strategic narrative or campaign is indeed fake can help researchers get closer to establishing that a well-resourced operation is occurring, even when attributing the engagement itself is hard.

Automated platform moderation systems pick up a high percentage of fake engagement activity before it is publicly posted, but evidence from activity that does get through these systems suggests persistent use in the context of geopolitical influence campaigns. State (or state-aligned) actors have different motivations when using these pay-for-engagement services—for instance, Twitter staff have observed PRC-supportive influence operations use bot-like fake engagement in order to up-rank different messages over time, seemingly seeking to overwhelm topics and events with aggressive, persistent, scaled engagement. In contrast, influence operations that Twitter has observed from other countries tend to engage single issues more selectively and in a more limited way, remaining hyper-focused on a small number of key topics, whereas PRC-supportive operations select a variety of focal topics that they develop over time. For example, an Indonesian state-backed operation (which was spreading pro-Jakarta narratives about the territory of West Papua) was found to use, among other assets, a preexisting Indonesian K-pop spam cluster to amplify its messages. The actors responsible for this activity either compromised or fully acquired the use of accounts in the network (which included authentic accounts with over 100K followers) and repurposed it for the operation.

Despite the difficulty that spam tactics pose for the attribution of influence operations, the nature of the fake engagement can provide some contextual clues on the strategic objectives of a potential upstream sponsor. Perhaps more critically, it can help platforms understand and describe through content analysis what users in targeted audiences are being exposed to as a method for understanding impact and developing interventions.

Incentivized Authentic Spammers

In the context of social media, spam is typically associated with automated activity, but increasingly, verifiably real people use spam tactics in ways that look very similar to automated accounts. Sometimes these users are paid, or otherwise motivated (for ideological reasons, say), to engage in manipulative activity and spam intermittently while continuing to use their accounts authentically.

This complicates how detection and enforcement are carried out: machine-generated and fake accounts collaborating to engage in platform manipulation are easier to accurately identify than authentic users engaging in spam-like activity. Enforcement against this activity also poses a different challenge: suspension or removal of authentic accounts engaged in spam may increase costs to bad actors, but it has not been shown to deter this activity over the long term. Moreover, the risk of false positives is high enough that enforcement may inadvertently limit legitimate speech. Finally, if the long-term objective of enforcement is to alter on-platform behavior, suspending the accounts of real users in certain cases of platform manipulation is often too blunt an instrument.

Despite these drawbacks, the potential harm from spam generated by real individuals is nonmarginal and merits enforcement. Resharing of content by authentic users can create a “spam chamber,” an environment where narratives are artificially boosted by the combination of inauthentic and authentic activity. Parsing online spaces like these can result in a distorted, funhouse mirror–like experience for observers, who may struggle to separate authentic, organic viewpoints from those that are artificially inflated.

The Venezuelan government’s efforts to incentivize spam activity by ordinary citizens once again provides a prominent example. Public reporting has noted that organizations in Venezuela have developed ways of rewarding spam behavior with monetary compensation, mostly off-line. As bad actors adopt new ways of obfuscating evidence, on-the-ground insights will become even more important to uncovering, understanding, and countering state-assisted spam operations.

Even as these behaviors are analytically significant, one should not assume that monetary compensation is the driving force behind all or most politically motivated spam. In multiple countries, Twitter staff have observed identifiably real users engaged in platform manipulation to unduly shape the discourse on Twitter, without evidence to suggest these users are compensated off-platform. It is entirely conceivable they acted out of a sense of patriotic duty or allegiance to a particular political figure or party. This may be the case for the significant and increasing number of real individuals from the PRC who create Twitter accounts and use them to amplify messaging consistent with government talking points. These users have been reported to coordinate their activities off-line or on platforms where trust and safety teams have less visibility. Notably, these accounts often do not bear the traditional technical markings of inauthenticity that are typically associated with influence operations; instead they complement ongoing efforts to influence conversations about Hong Kong, Xinjiang, and a range of other topics of strategic interest to Beijing.

Implementing Solutions

How can platforms mitigate the role of political spam in the information environment and improve the quality of their attributions? First, it is crucial to recognize that spam is a meaningful way of influencing and manipulating the information a user encounters and engages. While platforms should continue to analyze and disclose targeted information operations conducted by more readily attributable state actors, they should also take steps to report on platform manipulation campaigns that influence political dynamics using different rubrics for attribution, also applying approaches that take broader patterns of behavior into consideration.

With this in mind, platform teams should expand their notion of what is disclosable, better enabling the global community of experts to manage a broader spectrum of influence operations as they inevitably evolve. Twitter’s disclosure of data related to activity attributed to PRC-aligned actors during the Hong Kong protests in 2019 is a good example of introducing new expressions of influence operations into disclosure programs. The case was novel in several respects: blatantly spam-like accounts transitioned through a series of personas and purposes before finally supporting a central network of more sophisticated inauthentic personas and “journalistic” outlets. When Twitter released data on this activity, researchers suggested that the data set was noisy and unfocused, but the inclusion of these less sophisticated accounts demonstrates their full lifespan before they were pulled into the operation. Better understanding of how spam intersects with more focused attempts at political influence could help improve the legibility of future data sets.

Platforms should also adapt their internal analytical processes. For starters, in some contexts spam activity may be cause for concern at much lower levels than platform analysts previously thought. In those situations, the noise in social media activity becomes, itself, a signal worthy of attention. Increased attention to spam tactics—including the individuals, events, and entities targeted by spikes in political spam—will contribute to a more holistic view of the information environment and the totality of manipulative tools a motivated actor can employ.

In addition to other steps to mitigate political spam and influence operations more broadly, Twitter and other platforms have begun identifying and publicly labeling media accounts that are under the editorial control of a state or regime. This serves a similar purpose of helping users to better understand the possible biases of their selected information source. With an approach that embraces transparency around known influence operations (including the amplification of gray media organizations that have nontransparent links to states), political spam, and state-affiliated media, Twitter wishes to help the research community achieve a more comprehensive understanding of the field of play.

Conclusion

Political spam’s role in facilitating, maintaining, and even supplanting supposedly more sophisticated influence operations merits revisiting how analysts approach the often tricky issue of how, when, and whether to attribute influence operations to specific actors. Put simply, it is a mistake to let strict-standard attribution stand in the way of transparency when the core objective of trust and safety practices is to reduce damage to information environments and harm to users. As a community of practitioners addressing influence operations, we should be reporting on more expressions of platform manipulation rather than reporting on less as attribution gets harder. Going forward, new and seasoned analysts should work to more fully understand the full environment in which information is weaponized against users. In many cases, a healthy environment is more important than attribution.

About the Authors

Will Nuland leads Twitter’s Threat Disruption practice in the Trust and Safety Council, which conducts investigations and develops enforcement strategies focused on adversarial threats.

Patrick Conlon leads Twitter’s Threat Intelligence practice for the Site Integrity team, a part of Twitter’s Trust and Safety Council.

Kanishk Karan was until August 2022 an investigator on Twitter’s Threat Disruption team based in the Asia Pacific.


How to Investigate Influence Operations as an Independent Analyst

Elise Thomas

As an investigator, approaching a potential influence operation from scratch can be challenging and frustrating, but it can also be a great opportunity to think creatively and try out new methods and techniques.

Perhaps the most important thing to remember is to approach it with the right mindset from the start: that is, to accept and embrace the fact that you are going to repeatedly hit dead ends and will need to change tack or think of new angles to approach problems. Every successful investigation is driven by a combination of resourcefulness, creativity, and sheer persistence.

The best way to approach any given investigation varies enormously depending on the context and the type of investigation. However, there are a few generally useful things to think about as you’re deciding how to tackle the challenges you’re facing.

What Information Are You Trying to Find?

First, establish the questions your investigation is trying to answer. If your goal is to understand how extensive a particular network is or how many people a particular piece of content may have reached, your methodology will obviously be different from that used to identify who might be behind the campaign. In some cases, you may be interested in both.

To some extent, this will be driven by the kind of campaign you’re looking at and the kind of evidence available to you. For example, a very specific, targeted campaign may not be very interesting in terms of reach (which could be small), but it might be extremely interesting in terms of attribution. On the other hand, a large-scale Twitter or Facebook bot campaign may not provide many opportunities for investigating attribution, but mapping the network could help to show its scale and significance.

It may seem obvious, but it can be a clarifying thought exercise to spend even just a few minutes considering exactly what the point of your investigation is. (Of course, this can evolve as the investigation continues.) Being clear in your own mind about what you want to achieve will help with the next step.

What Information Do You Have?

Think about the information you have already that you could use to start your investigation. Some examples could include:

  • social media accounts;
  • activity patterns, including networks of activity;
  • content, such as social media posts, blog posts, or media articles;
  • web domains; or
  • identifying information, such as names, user handles, phone numbers, or email addresses.

Throughout the investigation, but particularly at this stage, it is important not to fall into the trap of confirmation bias. Don’t only look for the evidence that would prove your hypothesis—you also need to actively look for evidence that could disprove your hypothesis.

Is it possible that the activity you’re looking at is authentic, or could there be other explanations for what is happening? How certain are you that the different accounts, domains, and other elements you see are all a part of the same activity? You do not need to be completely sure about these things at the beginning of your investigation, but by the end of the investigation, you will need to be able to provide evidence for why you have assessed that the activity is a coordinated influence operation.

How Can You Use the Information You Have to Find the Information You Need?

This is both the most challenging and most fun part of an investigation. Before diving straight into the investigation, it is useful to think analytically about where the information you need is most likely to be and how you can use the information you already have to get there.

For example, if the goal is to identify the scale and impact of a particular network, you can think about how to use the parts of the network you already have to find the rest. Can you look for friends, followers, or amplifiers of the accounts you have already identified? The recommendation algorithms in many social media platforms can be very helpful in suggesting other accounts you might be interested in, while running social media network analysis can also help to identify influential accounts in the network that you may not have previously been aware of. Can you search for the same pieces of content across the platform, and have you checked whether it is also on other platforms or websites? Can you identify where and when that content first appeared and spiral outward from there?
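
To make this concrete, here is a minimal Python sketch of one way to rank possible new members of a network by how many of your seed accounts they amplify. The account names and data structures are invented for illustration, and the amplifier lists are assumed to have been collected already through whatever platform access you have.

```python
from collections import Counter

# Hypothetical seed accounts already identified as part of the suspected network.
seed_accounts = {"acct_alpha", "acct_beta", "acct_gamma"}

# Hypothetical data: for each seed account, the set of accounts observed
# amplifying it (retweeting, quoting, or following), however collected.
amplifiers = {
    "acct_alpha": {"user1", "user2", "user3"},
    "acct_beta": {"user2", "user3", "user4"},
    "acct_gamma": {"user3", "user5"},
}

# Count how many seed accounts each outside account amplified. Accounts that
# amplify several seeds are worth a closer look as possible network members.
overlap = Counter()
for seed, users in amplifiers.items():
    for user in users - seed_accounts:
        overlap[user] += 1

for account, count in overlap.most_common(10):
    print(f"{account}: amplified {count} of {len(seed_accounts)} seed accounts")
```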

Alternatively, if the goal is to identify who is behind a particular campaign, you might want to think about where identifying information is likely to be found. You could look for personal accounts, metadata left in files, domain data, cookies, or tracking codes, any of which might enable you to link more pieces of the puzzle together and pivot toward an identification. However, while it may be possible to identify the accounts that appear to be leading a particular influence operation, it may not always be possible to determine with confidence who is operating those accounts.

With all this being said, it is important to stay curious and stay flexible! You don’t need to be wedded to a plan or a methodology. If you find an interesting rabbit hole, run down it. If you hit a brick wall (which you probably will), change tack and try something else. There is no single right way to run an investigation, and trying out different approaches is part of the fun.

Other General Points to Keep in Mind

Don’t drown in tools and resources. There is now an almost endless sea of tools and resources aimed at investigating influence operations. Many of these are useful, but it can be easy to lose time learning about new tools or methods that are not going to be applicable to your current investigation. Where you need a tool, use it; where you don’t, skip it or save it either for a future investigation or for when you have some free time.

Understand what the tools you use are doing. Most tools will take whatever input you give them and spit out a result. Do not confuse getting a result with getting an answer to your research question, unless you’re sure you understand how the tool derives its results. For example, when analyzing data from Facebook’s CrowdTangle tool to track the spread of a particular link on Facebook, CrowdTangle might tell you the link has been shared 342 times. That’s a result, but it may not be a complete answer to the question you’re asking, because CrowdTangle only looks at shares on pages, public groups, and verified profiles. The link under investigation could have been shared hundreds or thousands of times by ordinary users and in private groups.
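
As a hedged illustration of the gap between a result and an answer, the sketch below queries CrowdTangle’s /links endpoint for public posts sharing a URL and treats the count only as a lower bound. It assumes you have a CrowdTangle API token; the endpoint and parameter names reflect CrowdTangle’s documentation at the time of writing and should be checked against the current version.

```python
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # assumption: you have dashboard API access
LINK = "https://example.com/article-under-investigation"

# Ask CrowdTangle for public posts that shared this link.
resp = requests.get(
    "https://api.crowdtangle.com/links",
    params={"token": API_TOKEN, "link": LINK, "count": 100},
    timeout=30,
)
resp.raise_for_status()
posts = resp.json().get("result", {}).get("posts", [])

# This is a *result*, not a complete answer: CrowdTangle only covers pages,
# public groups, and verified profiles. Shares by ordinary users and in
# private groups are invisible here, so treat the number as a lower bound.
print(f"Public shares visible to CrowdTangle: {len(posts)}")
```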

Don’t assume odd behavior is inauthentic. People are weirder than you think. The internet is a very big place. It contains many different cultures and subcultures in which the norms of online behavior may be radically different from those you are used to. It also contains some very strange individuals who may behave in bizarre—but nonetheless authentic—ways. You shouldn’t assume that seeing accounts that behave in a way that seems odd to you is necessarily a sign of inauthentic activity. It could be, but it could also be authentic behavior that sits outside of your realm of experience (in which case, it’s a learning opportunity).

Recognize that unusual events drive unusual behavior. Particularly around crisis events, like natural disasters or political or military crises, the behavior of authentic online actors changes. There may be an influx of new accounts to platforms like Twitter, or older, barely used accounts could be rediscovered as people join the platform to access news or to engage in discussion. It is worth being especially careful at moments like these to try to ascertain whether the behavior you’re looking at is inauthentic, as opposed to real people responding to an unusual situation.

It’s a cliché, but it’s true: open-source intelligence is more about mindset than it is about tools or techniques. Being persistent, creative, and open to learning new skills has helped me in everything from investigations into state-linked Chinese influence operations, pro-Russian propaganda efforts, and shady “humanitarian organizations” to cryptocurrency scams and even Satanist biohackers. It’s a wide, weird world! And learning about it through open-source intelligence can be a lot of fun.

About the Author

Elise Thomas is a senior OSINT analyst with the Institute for Strategic Dialogue. She has previously worked as a researcher for the Australian Strategic Policy Institute and has written as a freelance journalist for Bellingcat, Foreign Policy, Wired, and the Daily Beast, among others.


Investigating Cross-Country, Cross-Platform Spread of Information

Harpre Ke

Introduction

In 2020, Doublethink Lab published “China’s Infodemic Spreads to Taiwan and Southeast Asia,” a report on how disinformation about COVID-19 originating from the People’s Republic of China (PRC) spread throughout Taiwan and Southeast Asia. This research traced the creators and dissemination pathways of this content in order to understand its origins and potential motivations.

This article will show, step-by-step, the process behind the study’s planning and execution. It will describe how researchers traced the pathways over which online content was disseminated, the online tools they used to identify its original source and the ways it mutated across platforms, how they visualized these pathways, and how they conducted their final analysis.

A correct process is tremendously important to influence operations investigations: if the process is flawed in the early stages, it will produce compromised analysis and, ultimately, conclusions that are difficult to defend. Poor data collection practices or incorrect assumptions about the provenance of content can introduce bias that undermines the final result. By examining the work of experienced investigators, novices can learn to implement good processes and avoid common pitfalls in their own work.

Preparation

To ensure that an investigation proceeds smoothly and that each step can be followed, investigators should make the following preparations in advance.

Use the Google Chrome web browser. The advantage of Google Chrome is that it supports numerous extensions that facilitate investigations.

Use a virtual private network (VPN) server. This is especially important when looking into PRC-based operations. Some websites moderated according to PRC rules may block or limit access from overseas IP addresses. A VPN service with an exit point in Hong Kong makes it possible to visit these sites without restrictions from the PRC’s Great Firewall, and a VPN also protects researchers’ digital security. Be careful which VPN provider you select: some are more likely than others to comply with government requests for user data, particularly those owned in whole or in part by a government or government-linked entity, including in the PRC. Research any VPN provider you are considering. In particular, look for a commercially operated provider registered in a jurisdiction with strong data privacy laws, such as the European Union, and avoid PRC-registered companies.

Create new social media accounts for the investigation. Due to the often sensitive nature of investigatory subjects, we recommend that you sign up for new accounts on the relevant platforms (when their terms of service allow). Signing up for all of these accounts can take a great deal of time, and some platforms (like Weibo) demand more information to complete registration, including a phone number and an email address. Do not use your personal information (such as your real name, address, or phone number) to register these new accounts. For example, you could buy a prepaid SIM card to obtain an anonymous phone number.

Content Analysis: Identify the Disinformation and Keywords

The goal of the Doublethink Lab report was to investigate the dissemination pathways through which COVID-19 disinformation spread to Taiwan and Southeast Asia. First, we defined the scope of our investigation by searching for false information that had been debunked in the CoronaVirusFacts/DatosCoronaVirus Alliance Database, which is maintained by the International Fact-Checking Network (IFCN) at the Poynter Institute. One assumption of our method was that the original disseminator of this false information created it on purpose, and therefore the operation could be classified as disinformation. We assumed this because of the very specific and factual nature of the false claim (see box 1), which is highly unlikely to have been created accidentally as an honest mistake or misinterpretation. Our other assumption was that we had indeed found the original disseminator. If this assumption does not hold, we must consider the possibility that the “originator” we identified could be spreading misinformation unintentionally. Under these assumptions, we simply refer to “disinformation” in what follows.1

Step 1: Define the Geographic Scope

Narrow the scope of the investigation to identify clear variables for analysis. In this research, we focused specifically on Cambodia, Malaysia, Myanmar, the Philippines, Singapore, Taiwan, and Vietnam. These countries were found in a pre-study to have similar disinformation reaching their information environments—a requirement for our aim of studying the international spread of specific pieces of disinformation in this region. (This convergence may be due in part to the fact that these countries happen to share a significant Chinese diaspora and histories of geopolitical tensions with the PRC.)

Step 2: Narrow Down Scope of Analysis and Use Keyword Analysis

Search for and narrow down the disinformation narratives to be studied. Searching within the CoronaVirusFacts/DatosCoronaVirus Alliance Database, we identified Chinese-language disinformation relating to Taiwan or the PRC that spread in the targeted countries mentioned earlier. We also looked at which style of Chinese characters was used in the various posts we examined: traditional characters are used in Taiwan and Hong Kong, whereas simplified characters are used in mainland China. These facts can be used to speculate about the origins and/or the target audience of disinformation, although additional evidence would be needed to corroborate any firm conclusions. We then interviewed local journalists, activists, professors, and NGO staff based in these countries in order to ground our understanding of the disinformation in their local contexts.2 We asked these local experts to filter the disinformation based on their judgments of its reach, impact, and type. Accordingly, we selected six cases of disinformation for further investigation.

Box 1: Case Study, Using COVID-19 Disinformation Debunked in Taiwan

Focusing on one claim in particular helps to illustrate how we traced the spread of specific false claims about COVID-19 that circulated in Taiwan. This post (in traditional characters) claimed that “the Christian fellowship of doctors at the National Taiwan University Hospital (NTUH) said that the coronavirus is a combination of SARS and AIDS.” It was debunked by two fact-checking centers based in Taiwan, the Taiwan FactCheck Center and MyGoPen (see figure 1). They pointed to an announcement by a hospital spokesperson, who said that the statement did not originate from the hospital’s doctors.

 

Identify keywords of novelty and repetition. Using these posts, we then identified keywords or URLs for further analysis. The criteria we used for choosing the keywords included their novelty (how distinct they are compared to preceding content and commonly used phrases). Then, we looked for their repetition in the searches described below. We also needed some relatively high-frequency words to ensure the results matched our topic. For example, a keyword such as “Taipei University Hospital” is fairly novel but could match many health-related topics other than COVID-19; therefore, adding the relatively high-frequency keyword “COVID” narrows the results appropriately when COVID-19 is the topic. Clusters surrounding these keywords were also identified to expand the possibilities for analysis. The same method was used to identify keywords from another disinformation post on Facebook (see figure 2).

From these two items, we were able to identify the following keywords based on their novelty: “SARS+AIDS,” “autopsy” (屍檢), and “Zhongnan Hospital of Wuhan University” (武漢大學中南醫院). The following high-frequency keyword was identified to match our topic: “Novel Coronavirus Pneumonia” (新冠肺炎).3 These keywords were identified for the next step, which is to trace the dissemination of misleading content across platforms.
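
The keyword logic described above can be expressed as a simple filter: keep only posts that contain at least one novel keyword and at least one high-frequency topic keyword. The sketch below is illustrative only; the post structure and field names are assumptions, and in practice both traditional and simplified variants of each keyword would be included (see note 3).

```python
# Novel keywords: distinctive phrases unlikely to appear outside this claim
# (traditional and simplified variants included).
NOVEL_KEYWORDS = ["SARS+AIDS", "屍檢", "尸检", "武漢大學中南醫院", "武汉大学中南医院"]
# High-frequency keywords: anchor the results to the COVID-19 topic.
TOPIC_KEYWORDS = ["新冠肺炎", "COVID"]

def matches_claim(text: str) -> bool:
    """A post is a candidate if it contains at least one novel keyword
    and at least one topic keyword."""
    return (any(kw in text for kw in NOVEL_KEYWORDS)
            and any(kw in text for kw in TOPIC_KEYWORDS))

# Hypothetical collected posts (e.g., exported from a CrowdTangle or Weibo search).
posts = [
    {"id": "p1", "text": "危重新冠肺炎像SARS+AIDS……"},
    {"id": "p2", "text": "Unrelated hospital news about checkups."},
]
print([p["id"] for p in posts if matches_claim(p["text"])])
```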

Temporal Analysis: Trace Disinformation Back Across Platforms

For the next step, we tried to identify the origin of these posts. This process relied on two elements—the keywords and URLs we identified—to trace content back to its producers, amplifiers, and disseminators. Doing so helped to determine which public groups were influential on social media and to illustrate the dissemination pathways of misleading content via these actors and groups. In other words, we conducted a temporal analysis to track how the content evolved and was disseminated over time.

Online Tools for Social Media Analysis

We used the following online tools to trawl posts on their corresponding social media and internet platforms:

Original version of disinformation. In order to uncover the original (or earliest) version of this content, we searched for our keyword cluster on Sogou and Weibo Search to determine if these narratives originated from inside the PRC information ecosystem (such as on PRC-based social media and blogs), as, in our experience, the PRC is a common source of disinformation that reaches Taiwan. This search can be compared with the results from Facebook to determine the earliest instance of the disinformation, which can be identified as the one with the earliest time stamp. The results revealed that this narrative was originally published on an official WeChat page named “HC3i 中数新医” on March 10, 2020, under the title, “武汉一线医生详解新冠: 危重新冠肺炎像“SARS+艾滋病” (“a doctor in Wuhan said that the symptoms of the coronavirus looked like a combination of SARS and AIDS”), in simplified Chinese characters (see figure 3). This title included the novel keywords “SARS+AIDS.” Because WeChat pages are required to have a bank account in the PRC in order to register, this account and post likely originated in the PRC.

Mutated versions of disinformation. We then used CrowdTangle to search the keywords on Facebook and found two different versions of this post, both translated versions (see figure 4). One version was posted on May 9, 2020, and was a translation from simplified Chinese characters into the English language. The post was identified based on the repetition of novel keywords. The sentence in the English post, “*Information from the National Taiwan University Hospital (NTUH) doctor team,*” is also a replication of a sentence from another version of the disinformation, “*国立台湾大学医院(NTUH)医生团队提供的信息*,” written in simplified Chinese characters. The distinct sentence “Covid-19 is a combination of SARS and AIDS” appears in the original WeChat post in simplified characters, which stated, “Covid-19,是SARS + AIDS的组合.”

The other set of posts was translated from traditional Chinese characters into the Indonesian language and posted separately on April 18 and May 4, 2020. The sentence in this Indonesian post, “Info dari team dokter National Taiwan University Hospital (NTUH),” repeats a sentence from the original WeChat post, “台大醫團契訊息給各位參酌,” written in traditional Chinese characters. In terms of novelty, the sentence “Covid-19 seperti gabungan dari SARS+AIDS” is also identifiable with the original WeChat post, in particular with the keywords, “重症新冠肺像” (“SARS+AIDS”).
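
Once candidate posts have been collected from the different platforms, identifying the earliest observed instance reduces to a comparison of timestamps. The sketch below illustrates this with the dates described above; the field names and labels are assumptions, and the result is only the earliest instance among what was collected, not a guarantee that no earlier version exists.

```python
from datetime import datetime, timezone

# Hypothetical candidate posts gathered from WeChat/Weibo searches and
# CrowdTangle, normalized to timezone-aware timestamps.
candidates = [
    {"platform": "WeChat", "label": "originating page post",
     "timestamp": datetime(2020, 3, 10, tzinfo=timezone.utc)},
    {"platform": "Facebook", "label": "Indonesian-language translation",
     "timestamp": datetime(2020, 4, 18, tzinfo=timezone.utc)},
    {"platform": "Facebook", "label": "English-language translation",
     "timestamp": datetime(2020, 5, 9, tzinfo=timezone.utc)},
]

# The earliest observed timestamp is the best available candidate for the
# original version -- among what was collected; an earlier post may still exist.
earliest = min(candidates, key=lambda p: p["timestamp"])
print(earliest["platform"], earliest["label"], earliest["timestamp"].date())
```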

Identify Actors and Targeted Groups of Disinformation and Their Geolocations

Next, we collected online evidence, including the original and mutated versions of the disinformation as well as the names of the accounts, pages, and groups; their geolocations; and the archived URLs (see figure 5).

Languages and geolocation of relevant Facebook groups. The earliest version of the disinformation was posted on March 10, 2020, in simplified Chinese characters on WeChat—suggesting it was targeted at audiences in the PRC and/or the Chinese diaspora. It was translated into traditional characters and spread in Taiwan (see figure 2). A translated version in English was posted on May 9, 2020, by a Facebook user with a self-reported location in Singapore and disseminated in a Facebook group called “STAYING HEALTHY 120 YEARS,” which is reportedly based in Greenland (according to the administrator’s self-reported location).4 An Indonesian version was also disseminated on April 18, 2020, in a Facebook group called “HELP INDONESIA Ministry.” On May 4, 2020, another Indonesian version was posted in the “Info gempa & bencana Indonesia - Dutchsinse” Facebook group; both of these groups self-report that their administrators are based in Indonesia.

Actors and geolocations. The original version of the disinformation appears to have been produced by a WeChat page named “HC3i 中数新医.” The amplifiers and disseminators of the disinformation were the Facebook accounts “David Chew,” “Frans MA,” and “Roem Anwar.” Based on their self-reported profile information, the Facebook account “David Chew” is likely based in Singapore, and the accounts “Frans MA” and “Roem Anwar” are likely based in Indonesia (see figure 6).

Visualize Dissemination Pathways of Disinformation

Drawing on the complete table of the COVID-19 disinformation originator and disseminators in the report, we illustrated the dissemination pathways of the disinformation based on the temporal and spatial analysis of the actors’ activities. The infographic below shows the flow of disinformation from left to right (see figure 7). The red circles are accounts on PRC-based platforms and websites, such as WeChat and Weibo.5 The blue circles are Facebook accounts, while the blue rounded rectangles are Facebook fan pages or groups based in other countries.
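
One possible way to draw such a pathway diagram is with a directed graph, for example using the networkx and matplotlib libraries as sketched below. The node names are generic placeholders and the edges are a simplified, illustrative subset of the pathways described above rather than a faithful reproduction of figure 7.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Directed graph: edges run from earlier disseminators to later ones.
G = nx.DiGraph()
G.add_edges_from([
    ("WeChat page (PRC)", "Facebook account A"),
    ("WeChat page (PRC)", "Facebook account B"),
    ("Facebook account A", "Facebook group 1"),
    ("Facebook account B", "Facebook group 2"),
])

# Color PRC-based origins red and the remaining Facebook nodes blue,
# echoing the color coding used in figure 7.
colors = ["red" if "PRC" in n else "skyblue" for n in G.nodes]
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color=colors, font_size=8, arrows=True)
plt.savefig("dissemination_pathways.png", dpi=200, bbox_inches="tight")
```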

Putting It Together: Writing the Analysis

Analysis was the final step of our process. Looking at the evidence, we made judgments about the creator of the disinformation, their potential motivations, and its potential impact. Investigators must take care not to draw conclusions that are not supported by the evidence. The argument should be supported by detailed, replicable explanations of the steps taken and the evidence obtained, as we have shown above.

We summarize our evidence as follows. We have a specific, supposedly factual claim—that COVID-19 is SARS and AIDS combined—that was debunked by fact-checking organizations. We identified novel keywords related to this claim: “SARS+AIDS,” “autopsy,” and “Zhongnan Hospital of Wuhan University.” Using these keywords to search Facebook and PRC social media, we found the first instance of this disinformation on a WeChat account in the PRC. It later appeared on Facebook, including in translations into local languages, and was republished by accounts self-reporting locations in Greenland, Indonesia, Malaysia, the Philippines, Singapore, and Taiwan. Without further verification, we cannot know whether these are the true locations of the operators of these accounts—however, we can infer that the content was most likely intended to be shared with audiences in those countries. Some of these accounts shared the first observed translated versions of the original WeChat post. A mutated version of the original post, in traditional Chinese characters, changed the “source” of the disinformation from doctors at a Wuhan hospital to doctors at the National Taiwan University Hospital. This Taiwan version was also translated into English and Indonesian. It is important to note, as indicated in figure 7, that many of these posts did not result in significant sharing.

Finally, we discussed our interpretation of this evidence. Under the assumption that the origination of a false claim of such a directly factual nature is highly unlikely to be accidental, and the assumption that we had indeed found the originator, we concluded that the WeChat account shared this false information on purpose—that it is disinformation. If we later found an earlier version of this message, we would need to reevaluate this judgment. Inferring the intentionality of the disseminators is much more difficult, and in most cases we do not have sufficient evidence to judge one way or the other whether they shared the content with an intent to mislead. However, the translation into traditional characters mutated the supposed source of the false information into a local Taiwanese authority (a doctor at the National Taiwan University Hospital) that is more likely to be trusted by Taiwanese audiences, strongly suggesting intentional targeting. Even though this disinformation reached a number of different countries in different languages, given the low numbers of shares, it likely had a relatively small impact. Lacking further evidence, we can only speculate about the motives of the originator. The nature of the message appears designed to create panic in the targeted societies. Other potential motives could be financial or attention seeking; however, we have not gathered any evidence to suggest these possibilities.

About the Author

Harpre Ke is an analyst at Doublethink Lab based in Taiwan. He works with tech experts to develop user-friendly tools to investigate PRC information operations. His research includes open-source intelligence investigations, social network analysis, narrative analysis, political economy, and information operations studies.

Notes

1 The difference between “misinformation” and “disinformation” is important. Both refer to false information, but disinformation is spread intentionally while misinformation may be the result of unintentional sharing of false content.

2 We used our trusted network to verify that these individuals have the appropriate expertise in information operations and that they are trustworthy.

3 When conducting subsequent searches, both traditional and simplified versions of the keywords are used.

4 Although Facebook requires accounts to report their true location, it is possible for a malicious actor to misrepresent their location. Without further evidence one way or another, verification is not possible. From the reported locations we can infer the potential audiences for these messages.

5 WeChat pages are required to have a bank account in the PRC in order to register. For Weibo accounts, we use the self-reported location of the user.


Assessing Social Networks Beyond Quantification and Across Platforms

João Guilherme Bastos dos Santos

Understanding Information Flows

The term “infodemic” is commonly used to describe the spread of large volumes of false information. Just as epidemiologists study the spread of viruses across and between communities during a pandemic, researchers of infodemiology study the spread of false information across and between communities online and offline.

If we consider false information as something that spreads like a virus and contaminates people, then understanding the similarities between the spread of a virus and the circulation of false information can provide insights into how false information spreads in social networks, both online and offline.

The coronavirus, for example, arrived in many countries by plane and was spread at large gatherings, workplaces, or on public transit. In this analogy, different methods of communication, such as platforms and apps, function in a similar way, by allowing information to travel and reach diverse groups that are socially and economically distant from the source of the information. Just as viruses are spread through international, national, and local travel, information can spread on platforms with a global or national user base as well as through local or special-interest communities, encountering new audiences as it spreads. In isolation, these social networks would play a limited role in spreading viruses coming from abroad. Combined, they connect global dynamics to microscale realities, bringing international viruses into contact with individuals who never left their own neighborhoods.

In order to understand how information spreads, we must consider the network as a whole and not as independent environments operating in isolation on individual platforms. To date, research into online influence operations has tended to focus on single platforms, particularly Facebook and Twitter, with relatively little cross-platform research. This is not wholly surprising; cross-platform research involves much larger volumes of data, which can be hard to obtain, and requires an understanding not just of how individual platforms or apps operate but also of how they interact with others. This article will first provide examples of how features of individual platforms and apps affect the dynamics of information flows, before illustrating how some of these features affect the ways that information flows between platforms.

Combining Algorithmic and Nonalgorithmic Filters and Features

The first step in understanding how information travels through a network is building an understanding of the relevant features of individual platforms. There are too many platforms with too many features to be discussed here at length. However, this section attempts to illustrate some of the key features that affect information flows on different types of platforms. As researchers of influence operations in your own investigations, think carefully about which platforms you are choosing to research and how users connect with each other and share and receive information.

Mobile communication apps such as WhatsApp, Signal, or Telegram allow users to share information with their contacts without using algorithms to predict a user’s preferences and without filtering or prioritizing the content they receive. This means that regardless of how negative or uninterested your reaction to a piece of content has been, your contacts can send it to you multiple times, and your mobile phone will download another copy of it each time.

Groups are formed among individuals who know each other or have something in common; for example, school friends, work colleagues, or neighbors. These individuals will all have their own unique combinations of groups, creating a network that grows with each new user. Each user can forward information to different audiences that are not necessarily direct contacts of the individual who first shared the information. While these networks may appear small at first, they have the potential to reach a large number of users relatively quickly. Going viral on WhatsApp depends on people forwarding messages to their own contacts. An individual user’s relevance will depend on their contacts, the groups they participate in, how frequently they share information, and how frequently their content is forwarded by their contacts. Individual networks and dynamics therefore play a greater role in this scenario than the sheer number of people receiving the information.

For example, if a WhatsApp group is full to its limit of 256 people and each member is willing to forward a message to another full group, the message could reach up to about 65,500 people in the first round of forwarding and 16.7 million in the second. Telegram, by contrast, allows users to send messages to their existing contacts and find new ones by searching usernames. Telegram also allows much larger groups of up to 200,000 members and has channels where users can broadcast to unlimited audiences. Verification badges for official channels and automation features give Telegram distinct advantages over WhatsApp—and the two can be combined (for example, using Telegram to distribute links to WhatsApp groups, fostering new networks of WhatsApp groups). If we only try to understand each platform separately, we cannot fully understand how influence operations are really using these applications.
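
The forwarding arithmetic above can be made explicit with a few lines of Python, assuming the 256-member group limit that applied at the time and ignoring overlaps between groups (so the figures are upper bounds).

```python
GROUP_LIMIT = 256  # WhatsApp group size limit at the time of writing

reach = GROUP_LIMIT  # round 0: one full group receives the message
for round_number in (1, 2):
    # Each recipient forwards the message to another full group; overlaps
    # between groups are ignored, so these are upper bounds.
    reach *= GROUP_LIMIT
    print(f"Round {round_number}: up to {reach:,} recipients")
# Round 1: up to 65,536 recipients (the roughly 65,500 cited above)
# Round 2: up to 16,777,216 recipients (roughly 16.7 million)
```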

By contrast, social media platforms such as Facebook, Twitter, TikTok, or Instagram use algorithms to influence the content displayed on users’ timelines. Users can share content, but the algorithms will determine how it is prioritized in the feeds of other users and, therefore, how likely it is to be seen by them. Some platforms also offer the ability to use paid promotions to artificially boost content among specific audience segments. Paid promotions follow microtargeting dynamics, raising concerns about personal data use, behavioral profiling, and the ethical use of algorithms. The logic and impact of viral content here are defined by rules quite different from those that govern WhatsApp or Telegram.

There are also different ways in which these platforms interact with other applications, bringing strategic advantages for influence operations. TikTok, for example, makes it easy to download videos and send them as a message through WhatsApp, mirroring the replication features related to the messaging application. At the same time, TikTok’s algorithms also play a key role in recommending new content and personalizing individual users’ feeds, reaching a large number of profiles more willing to engage with the content. Therefore, the information flow will be affected by the combination of different logics, from algorithmic mediation for accessing content online to the download and spread of content following viral, non-algorithmic logics inside messaging apps.

Some platforms and apps are included in zero-rating agreements, where mobile network providers exempt specified apps, websites, or online services from a user’s data allowance. This allows people who may not otherwise be online to access the internet and introduces new potential audiences and participants in the consumption and distribution of information, allowing information—and false information—to flow beyond traditional online access barriers.

Understanding How Information Is Shared Between Platforms

The next stage in the investigation will be to look for evidence of information posted on one platform being shared or promoted on another. If this is the case, what are the features of these platforms that facilitate the sharing of content between them, and is it possible to identify how these features alter the dynamics of information flows?

Knowing that someone shared false information on an app without considering their centrality in relation to the network of people involved can be like detecting an infected person inside a bus without knowing where the bus is. Without knowing whether this bus operates on an international or intercity route, or how connected it is to wider transport networks, we can’t fully understand the relevance that this route has in the spread of a virus. Likewise, how far a piece of false information spreads will depend on the interconnectedness of the groups in which it is first shared. However, the nature of these groups can change over time. Highly connected groups can become relatively isolated, for example if key people leave and take their bridges to other groups or platforms with them, even if the amount of shared content remains consistent. Over time, individual platforms may change how their algorithms operate, or they may introduce or remove functionality. These changes, too, can affect how quickly information can spread between platforms, and they turn online platforms into part of a time-varying, complex system that is extremely hard to predict.
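
One way to quantify the “bridge” role described here is betweenness centrality, sketched below with networkx on an invented two-group network. The group memberships are hypothetical; the point is only that removing a single bridging user can disconnect otherwise well-connected communities.

```python
import networkx as nx

# Two hypothetical, internally well-connected groups joined by one "bridge" user.
G = nx.Graph()
group_a = ["a1", "a2", "a3", "bridge"]
group_b = ["b1", "b2", "b3", "bridge"]
for group in (group_a, group_b):
    for i, u in enumerate(group):
        for v in group[i + 1:]:
            G.add_edge(u, v)

# Betweenness centrality highlights users sitting on the shortest paths between
# otherwise separate communities -- the bridges.
centrality = nx.betweenness_centrality(G)
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3])

# Removing the bridge splits the network in two: a single departure can isolate
# a previously well-connected community, even if activity inside it continues.
G.remove_node("bridge")
print(nx.number_connected_components(G))  # 2
```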

The first example of how features of different platforms interact to affect the spread of information is illustrated by comparing the features of WhatsApp and Facebook.

On Facebook, users lose access to content if its original post is removed, either by the platform or the user. In contrast, on WhatsApp, the content is replicated every time the message is sent. This means that every recipient of a forwarded WhatsApp message has an individual copy of the content downloaded on their phone. Recipients can then upload this content to any online network using cross-platform sharing features. Once the content returns to Facebook through different profiles, it receives new IDs, and its flow through WhatsApp is hard to track because it moves through private messages on phones.

A single person receiving and storing a piece of content on WhatsApp can be more important for a harmful campaign’s capacity to survive than thousands of people liking it on Facebook, because users on Facebook depend on its availability online whereas on WhatsApp it can be uploaded multiple times to multiple platforms. However, Facebook users can download content on Facebook and share it on WhatsApp, combining both features. When these two platforms are used together, the speed and spread of shared information becomes much harder to predict than when they are used independently, making part of the process invisible and increasing its complexity.

Users can also post call-to-action campaigns to influence the popularity of content on other platforms. For example, if a YouTube algorithm takes into account watch time, likes, and dislikes, then activists can encourage WhatsApp or Telegram users to access distributed links and increase likes, views, and watch time, in an attempt to game the system. Once on the video platform, after watching the video they received on WhatsApp, users might receive recommendations for similar content or even forward the links back to other WhatsApp groups.

Just as knowing the logic of trending topics on Twitter does not tell us much about which strategies were employed to scale up hashtags, it is difficult to prove the extent to which the popularity of specific posts is the result of spontaneous and authentic behavior, algorithmic bias, or coordinated actions to influence the system.

We can follow the viral dynamics of a single piece of false information about the Brazilian Superior Electoral Court (see figure 8). This information, shared in ninety WhatsApp groups, falsely stated that the court was nullifying votes.1 The image on the top left shows that central nodes receive the information first and repeatedly; the information then spreads out to the extremities of the network. The information circulating on WhatsApp in this example was also found on Facebook (though those networks are not visible in figure 8), further demonstrating the interconnectedness of online communications.

As this example shows, a key dynamic in the spread of influence operations involves building segmented groups of supporters in which people work as multipliers as well as bridges to other groups. That is why personal data is at the center of new segmentation strategies, even more than content itself: the very same content delivered to the wrong segment might not have the desired effects.

Users can also be encouraged to join special- and central-interest WhatsApp groups via links shared on apps like Telegram, which has a larger group size and is more automation-friendly. Once inside these WhatsApp groups, users can work as bridges to connect the target group to the influence operation’s information flow.

The interconnectivity of apps and platforms has implications for research not just into influence operations but also into whether and how these operations change as a result of targeted interventions. Returning to the transport analogy, removing buses from a transport network cannot stop the spread of a virus: people will adapt to the new circumstances by using other forms of transport with similar or perhaps greater levels of risk. Similarly, new platform rules won’t stop users from sharing influence operations’ content; they will just adapt to the changes, with different or even greater levels of risk. This has obvious implications for interventions that simply remove or curtail the activities of specific features or platforms, rather than considering the network dynamics as a whole.

This also suggests that targeted interventions have the potential for greater impact than sweeping measures. Just as isolating specific trains where a virus is known to be present in the early stages of spread can protect a transport system more effectively than suspending all the trains in a town later on, suspending specific accounts in the early stages of an influence operation could matter more than trying to take action against hundreds of thousands of accounts in the later stages.

Conclusion

Other factors also have an impact on the information environment and should be considered by investigators in the course of their research. Financial incentives such as potential monetization, for example, can be employed to encourage the decentralized spread of false information by different channels. From clickbait or the sale of fake cures to a professional campaign with people paid to produce content, influence operations can combine a range of financial incentives for different actors to promote their narratives. Another factor that shapes the information environment is the use of personal data. Campaigns can use this information to identify potential supporters in strategic, behavioral, or demographic segments and target them with specific messages with the aim of changing network dynamics in their favor.

Ultimately, the dynamics of any particular influence operation can be determined in large part by the unique combinations of the connections and behavior of individual users. The same content shared in different networks by different actors can quickly scale up or be completely ignored. Far from exerting equal persuasive power in all groups, information goes viral when it is able to adapt to fulfill the needs of specific audiences. The ability of information to spread does not lie within the content itself but in the strategies and connectivity of the people who see it and share it.

Ignoring the dynamics of individual networks strengthens misunderstandings that so-called fake news strategies will be repeated with equal impact in multiple scenarios, scales, and countries. In reality, the dynamics of networks are much more variable, depending on the functionality of the platforms used, repertoires of online action, the connectivity of active users, and the ability of the message to find a receptive and supportive audience.

About the Author

João Guilherme Bastos dos Santos is a data analyst at Rooted in Trust Brazil (Internews), a researcher at the Brazilian National Institute of Science and Technology for Digital Democracy, and a member of Carnegie’s Partnership for Countering Influence Operations. He has a PhD in communications from Rio de Janeiro State University, and his doctoral visit was supervised by Stephen Coleman at the University of Leeds.

Notes

1 Data were gathered by Rio de Janeiro State University’s Technologies of Communication and Politics Research Group.


Assessing the Impact of Influence Operations Through the Breakout Scale

Ben Nimmo

One of the most difficult questions for investigators of influence operations is, “Did the operation make a difference?” Journalists, policymakers, and the public all want to know, understandably, whether any given influence operation was an existential threat to democracy or if it was a flop. Past assessments of influence operation outcomes have used indicators such as whether high-profile users or outlets amplified a given operation’s assets, how many people may have seen content from the operation, or how many people reacted to the operation’s posts. Each metric provides a window into the operation’s dynamics on individual platforms, but none gives a more comprehensive measure.

This essay sets out one method—called the Breakout Scale—for measuring influence operations’ potential impact, in a way that is comparable across platforms. This method can be used while an operation is ongoing on the basis of incomplete data. Of course, the outcome can, and indeed should, be recalculated if new data come to light. As such, the Breakout Scale is not intended to assess the wide-ranging impact of influence operations as a whole on democratic discourse but instead to measure and compare different active operations while the process of discovery is ongoing. The scale builds on a previous paper I published with the Brookings Institution in 2020 and includes insights from subsequent studies of newly reported influence operations, feedback from colleagues in the field, and a series of examples showing how this methodology can be applied to real-world influence operations.

The Breakout Scale assesses influence operations’ potential for impact by asking two questions: how many communities are exposed to the operation’s content, and how many platforms is it on? In this context, the term “communities” refers to like-minded individuals who consume news or other content from a common source and form a self-defined grouping, such as a Facebook group, a cluster of mutual follows on Twitter, or the readership of a local newspaper. The term “platforms” refers not only to social media but also to traditional media and the pronouncements of celebrities. In rare cases, real-world events might also serve as platforms—such as when, in a strategy resembling one known to be used by the Russian Internet Research Agency (IRA), U.S. President Joe Biden–themed toilet paper was distributed in Times Square in New York City.

I use the term “potential impact” deliberately. The Breakout Scale is primarily intended for use by investigators and open-source researchers as they uncover influence operations. By using it, researchers can situate newly observed operations in the context of other known operations and assess how urgent a threat a new operation poses to the environment in which it is operating. The scale therefore focuses on evidence that can be immediately observed and rapidly updated, such as social media posts, news articles, public statements, and real-world events. It is not designed to measure the actual impact on a polity of a particular operation; such a calculation would require post hoc analysis of much larger data sets. The Breakout Scale’s fundamental principle is that an operation will only impact an audience if it manages to land its content in front of them. Being seen by an audience is not enough for success, but not being seen by an audience is an indicator of failure.

The Breakout Scale is divided into six categories, inspired by the system used for tropical storms.

Six Categories of Influence Operations

At the lowest end of the scale, Category One efforts operate on one platform and stay in one community. One example is a network of Twitter bots that spam-posts a single message and retweets other accounts in the network but is not retweeted by other users. Hyperpartisan Facebook pages that spread content to their followers could be another example, provided that their content is not shared to other platforms or communities. Their followings can be large, with thousands of users, but if those users are already converted to the hyperpartisan cause, the message is not breaking out to find new converts.

Category Two operations come in two variants: they either appear on multiple platforms but fail to break out of the community where they were posted or they stay on one platform but reach multiple communities there.

An example of the first variant is the Russian operation Secondary Infektion, which posted forged documents and accompanying articles on a dizzying range of platforms and forums—more than 300, at last count—but typically failed to attract any engagement from real users. An example of the second type is an operation run on Instagram by the IRA in October 2019 in the United States. This operation landed its memes in front of multiple communities, including supporters of movements such as Black Lives Matter, LGBTQ rights, gun rights, and conservatives, but it stayed exclusively on Instagram.

Category Three comes when an operation is on multiple platforms and is being shared by multiple communities on each platform. For example, two major conspiracy theories in the United States—known as “Pizzagate” and “QAnon” respectively—both spent some time at Category Three, spreading through mutually sympathetic but distinct communities on fringe and mainstream platforms before being reported on by the mainstream media. The pro-Chinese operation Spamouflage reached Category Three in late 2020, when its Twitter and YouTube content began to break out among influencers in Hong Kong, Pakistan, the United Kingdom, and Venezuela. Category Three operations can represent a tipping point, as it may only be a matter of time before they are reported on by journalists, especially tech and social media correspondents.

Category Four operations break out of social media entirely and are uncritically reported by the mainstream media. To count as a breakout, this reporting need not endorse the operation’s claims: the crucial factor is that it brings those claims to the attention of new audiences, including ones that may not be on social media at all. This is why it is so critical for the mainstream media to exercise caution in how and when they report on content such as conspiracy theories and weaponized leaks: the desire to break the news should be balanced against the danger of gifting the operators a breakout moment.

In 2017, for example, the Los Angeles Times published an article about the backlash on Twitter to Starbucks’s decision to hire refugees by embedding two tweets; both were from IRA troll accounts, according to the list of IRA assets later provided by Twitter. The Indian Chronicles operation exposed by EU DisinfoLab repeatedly landed its articles in the Indian press agency ANI. These breakout moments are particularly important because they lend the influence operation the reach of the media outlet in question. If they are reported uncritically and taken at face value, then they also benefit from the media outlet’s credibility.

Category Five operations are amplified not only by media but also by celebrities—any high-profile individual or influencer with a major following. Pop stars, actors, and politicians, for example, can all be celebrities within the meaning of the Breakout Scale, since they can all introduce false stories to new audiences. The scale considers this a more potentially damaging breakout than mainstream media pickup for two reasons. When a political celebrity such as an electoral candidate amplifies an operation, they will likely trigger coverage from many media outlets at once, which has more potential impact than amplification by a single outlet. When a nonpolitical celebrity, such as a pop star or influencer, amplifies an operation, they bring it to an audience that may not have any interest in the news at all—in other words, the breakout is from the realm of news reporting into another subject category entirely.

For example, musician Roger Waters repeated the false claim that the White Helmets rescue group in Syria was a “fake organization” creating propaganda for terrorists, taking that disinformation campaign to a Category Five. Similarly, when UK opposition leader Jeremy Corbyn unveiled a batch of leaks planted online by Secondary Infektion before the UK general election, he lifted it from its usual Category Two straight to a Category Five.

At the highest level, Category Six operations cause or pose a credible risk of causing a direct effect in the real world or in legislation, policymaking, or diplomacy. The scale considers an operation to pose a credible risk of causing a real-world effect if it shows the intent and apparent ability to do so in a detailed and specific manner. For example, an operation that organizes protests for two mutually hostile parties at the same time and place would pose a credible risk; an operation that issues generic threats with no indication of the ability or intent to carry them out would not.

For example, when a member of parliament calls for an organization to be defunded or banned because of a false story, that is a Category Six breakout. When the Pakistani government summoned the Swiss ambassador because of an event run by the Indian Chronicles operation, that was a Category Six. More recently, the mob that stormed the U.S. Congress on January 6, 2021, after a sustained disinformation narrative of massive electoral fraud, was a major Category Six incident.
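
The six categories can be encoded as a simple decision function, which may help when comparing many moments at once. The sketch below is one possible reading of the scale rather than an official implementation, and it assumes the investigator has already made the underlying judgments (platform and community counts, media pickup, celebrity amplification, real-world risk).

```python
from dataclasses import dataclass

@dataclass
class Moment:
    """One observable moment of an operation, as judged by the investigator."""
    platforms: int                    # platforms where the content appeared
    communities: int                  # distinct communities engaging with it
    media_pickup: bool = False        # uncritical mainstream media coverage
    celebrity_pickup: bool = False    # amplified by a high-profile individual
    real_world_risk: bool = False     # caused or credibly risked a real-world effect

def breakout_category(m: Moment) -> int:
    """Map a moment onto the six Breakout Scale categories."""
    if m.real_world_risk:
        return 6
    if m.celebrity_pickup:
        return 5
    if m.media_pickup:
        return 4
    if m.platforms > 1 and m.communities > 1:
        return 3
    if m.platforms > 1 or m.communities > 1:
        return 2
    return 1

# A video posted on several platforms with no outside engagement is Category
# Two; the same operation's video retweeted by a foreign minister becomes a
# Category Five moment.
print(breakout_category(Moment(platforms=3, communities=1)))                         # 2
print(breakout_category(Moment(platforms=3, communities=1, celebrity_pickup=True)))  # 5
```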

Benefits and Limitations

By using these categories, researchers can situate newly observed operations in the context of other known operations and assess how urgent a threat the new operations pose to the environment in which they are operating. A Category One or Two operation may, for example, be better handled by continued observation and analysis to reach confident attribution before exposing it. A Category Three operation—at the tipping point before it breaks out of social media entirely—is a more urgent problem, where the most effective initial response may be to disclose it to platforms, but not immediately to the public, to avoid turning it into a Category Four. An operation that has already reached Categories Five or Six by the time it is discovered presents the most urgent challenge and is more likely to merit immediate exposure.

One nuance in the use of the Breakout Scale concerns the duration of the operations to which it is applied. The scale is most easily applied to individual moments, such as the emergence of a particular narrative or leak, but long-running operations likely consist of many distinct moments. For example, the Iranian attempt to pose as the Proud Boys in the week before the U.S. election lasted one day, while the Indian Chronicles lasted fifteen years.

Long-running campaigns are best understood as a sequence of individual moments, each of which can have its own categorization, in the same way that a windstorm can have gusts and lulls of different categories.

Thus, one Spamouflage video that I came across during my research was run on November 18, 2020, and was headlined, “Humanitariandisaster,theU S epidemiccontinuestodeteriorate.” (Spamouflage had quality control issues.) As was usual for Spamouflage, the video was posted on several platforms: YouTube, where the account posting it was soon taken down; Facebook, by a page with zero followers; and Twitter, where it was tweeted and retweeted by a cluster of accounts—most likely automated—that all featured cartoon cats as their profile pictures. These accounts amplified each other but did not receive amplification from other accounts. This was therefore a Category Two moment: multiple platforms, no breakout.

By contrast, a second Spamouflage video dealt with the outbreak of the coronavirus pandemic in the United States and questioned, “As the world’s largest superpower, what happened to the United States?” This video, too, was posted on Facebook, YouTube, and Twitter; it was then retweeted by the Venezuelan foreign minister to his 1.7 million followers, creating a Category Five moment.

As such, an appropriately nuanced description of Spamouflage would be that it consistently stayed at Category Two but managed occasional breakthroughs to Category Five.

Likewise, Secondary Infektion was almost always a Category Two because it posted forged documents on many platforms but usually failed to get them noticed. However, its leak of U.S.-UK trade documents was a major Category Five moment, amplified by a UK prime ministerial candidate on the campaign trail just weeks before an election. Therefore, similar to Spamouflage, Secondary Infektion was mainly a Category Two operation with one significant leap to Category Five.

This nuance points to another lesson: the importance of detailed analysis and codification of influence operations, even if they appear to be failing to gain traction when they are first discovered. Persistence has a quality of its own, and it is worth identifying the unique features of a persistent threat actor who consistently runs Category One or Two operations, in case, like Secondary Infektion, operations suddenly break into the higher categories. The exposure of Secondary Infektion’s Category Five–level interference in the UK election was made possible by earlier analyses that had identified its unique features when it was still a Category Two.

The Breakout Scale is meant to allow investigators to compare the potential impact of different operations as they uncover them. Investigators can use it to map their discoveries against the constellation of operations that are already known, and the scale can guide their decisions on how to prioritize investigations, when and where to publish their discoveries, and how to describe what they have found.

For example, as discussed earlier, it may be appropriate to keep a Category One or Two operation under observation to see if more can be learned, but a Category Five or Six will require urgent action. If investigators uncover leads to two different operations at the same time, they can measure both against the Breakout Scale and prioritize investigating the one in a higher category, as it has greater potential for impact. When investigators share their findings, they should include their assessment of where the operation sits on the Breakout Scale, enabling effective peer review and reducing the danger that they will inadvertently bump the operation into a higher category themselves by reporting it without appropriate nuance.
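
To make such comparisons concrete, the following minimal Python sketch—an illustration, not part of the Breakout Scale’s formal definition—treats each operation as a list of observed moments, each assigned a category from one to six, and prioritizes by the highest category observed. The moment lists are hypothetical stand-ins consistent with the descriptions above.

```python
# Illustrative sketch only: each operation is a list of observed "moments,"
# each labeled with a Breakout Scale category (1-6). Values are stand-ins.
spamouflage = [2, 2, 2, 5, 2, 2]        # mostly Category Two, one Five
secondary_infektion = [2, 2, 2, 2, 5]   # likewise

def summarize(moments):
    """Report the typical and peak categories of a long-running operation."""
    typical = max(set(moments), key=moments.count)  # most common category
    return {"typical": typical, "peak": max(moments)}

def prioritize(operations):
    """Order named operations so higher-breakout cases are investigated first."""
    return sorted(operations, key=lambda name: max(operations[name]), reverse=True)

print(summarize(spamouflage))  # {'typical': 2, 'peak': 5}
print(prioritize({"Spamouflage": spamouflage,
                  "Secondary Infektion": secondary_infektion}))
```

Recording a category per moment, rather than a single label per operation, is what makes a description like “mainly Category Two with occasional breakthroughs to Category Five” straightforward to compute and to peer review.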

As with any framework, the Breakout Scale comes with caveats. It is not designed to measure absolute impact across society as a whole but to allow investigators to assess and compare potential impact against observable criteria as they continue to investigate. The categorization of individual operations can change because investigators find new evidence, or because the operation achieves a new breakout, as Secondary Infektion did in 2019. The data on which a categorization is based will likely be incomplete, due to the nature of the investigative process. With those caveats, however, the Breakout Scale is intended to serve as one tool in an investigator’s array and to enable calculated prioritization, evidence-based impact analysis, and comparative study of different operations from around the internet and around the world.

About the Author

Ben Nimmo has been researching influence operations since 2014. A pioneer in open-source research, he co-founded the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and served as Graphika’s director of investigations from 2019 to 2021.

Acknowledgments

I am grateful to Alexandre Alaphilippe for an early chance to read the draft of the EU DisinfoLab’s report on the long-running operation dubbed “Indian Chronicles” and to Anastasia Zolotova for bringing the cluster of Spamouflage catbots on Twitter to my attention.

Return to top ↑

Acquiring Data

Dhiraj Murthy

There remains little guidance on how researchers studying influence operations can collect and process platform data effectively, legally, and ethically. Key challenges include developing an initial research question, finding appropriate data sources, parsing platforms’ Terms of Service, weighing other legal and ethical considerations, and accessing necessary technical resources and expertise. Each of these research tasks involves many gray areas, even when studying platforms (such as Twitter) where a large amount of influence operations research has already been done. Different researchers face a variety of situations and constraints, and these can evolve rapidly—for example, due to a platform’s Terms of Service revisions. While there is no single comprehensive, step-by-step guide, this article aims to provide influence operations researchers with enduring principles and points to some of the best resources currently available.

Defining Your Research Question

Some researchers believe that the best approach is to go out and collect data before their research questions are defined. However, prior thought and reflection usually pay great dividends. If researchers can develop initial research questions, however tentative these may be, then data collection can be more targeted and therefore more proactively designed to be effective, ethical, and legal. For example, if a research project is specifically interested in influencers on Twitter, it may be possible to implement inclusion criteria at the outset that restrict collected data to verified Twitter users, keeping average users out of data collection. Narrowing the data collection in this way would make it simpler to conduct follow-on analysis, while also reducing the legal and ethical burdens and risks of handling extraneous data on many irrelevant users.

Nevertheless, I advise my students that there are some specific cases where data should be collected prior to research question development, such as when time is of the essence. For example, collecting from a live-streaming search on Twitter during an event will often return more data than can be obtained retrospectively. That’s because Twitter’s application programming interface (API) excludes past tweets from hashtag- or keyword-based queries, unless the researcher has paid for premium developer services.
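
As a concrete illustration, here is a minimal Python sketch of live collection using the tweepy library (version 4 or later) and Twitter’s filtered stream; it assumes a bearer token with streaming access, and the rule shown also applies the verified-accounts-only inclusion criterion discussed above. Access tiers and endpoints change over time, so treat this as a pattern rather than a recipe.

```python
import csv
import tweepy

BEARER_TOKEN = "YOUR-BEARER-TOKEN"  # hypothetical placeholder

class EventCollector(tweepy.StreamingClient):
    """Writes matching tweets to disk as they arrive, since past tweets may
    not be retrievable later without premium access."""

    def __init__(self, bearer_token, outfile):
        super().__init__(bearer_token)
        self.writer = csv.writer(open(outfile, "w", newline="", encoding="utf-8"))
        self.writer.writerow(["tweet_id", "author_id", "created_at", "text"])

    def on_tweet(self, tweet):
        self.writer.writerow([tweet.id, tweet.author_id, tweet.created_at, tweet.text])

collector = EventCollector(BEARER_TOKEN, "event_tweets.csv")
# Restrict collection at the outset—for example, to verified, original tweets
# using a hypothetical event hashtag.
collector.add_rules(tweepy.StreamRule("#ExampleEvent is:verified -is:retweet"))
collector.filter(tweet_fields=["author_id", "created_at"])
```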

Ultimately, there is no one-size-fits-all approach for data collection. Rather, thoughtful reflection—earlier rather than later—makes for the best research practices and helps avoid the “garbage in, garbage out” problem.

Finding Data

When it comes time to gather data, consider whether you must collect new data yourself or if instead you can leverage data sets already collected by others. Academic data repositories, often hosted by university libraries, are now regularly used to create accessible ways to share social media data sets. Social media platforms also share data on influence operations. Perhaps the most well-known examples are Twitter’s Transparency Reports and its sharing of data on accounts that Twitter has taken down due to their ties to influence operations. These data sets provide a treasure trove of opportunities for researchers.

Additionally, open-access online repositories contain many data sets that were crowdsourced or scraped from platforms. It is worth searching GitHub, the Archive Team collections hosted on the Internet Archive, and the internet more broadly for such data sets. For example, in my research about extreme speech on Parler, I collected my own data while also incorporating data collected by others. Repositories will change over time, so check back repeatedly. You can use search terms such as “influence operations,” “disinformation,” and “misinformation.” Of course, it is always the researcher’s own responsibility to verify that the data being used were collected ethically, legally, and in compliance with platform terms.

Another method that I regularly use is to contact the authors of papers who have already collected data. Many journals list authors’ email addresses, and some academics are happy to share their data sets or even to collaborate. Collaboration on reports, white papers, or journal articles can be attractive to both nonacademic and academic researchers.

If the data you need is not already available, you will need to collect it yourself. I think many researchers, especially those without significant technical backgrounds, find this intimidating. However, there are accessible and quick data tools—some free, some paid—that don’t require the use of programming languages such as Python. At the time of writing, some sources to consider for collecting social media data are:

  • Netlytic (free for lowest usage tier; covers Twitter, YouTube, and RSS feeds),
  • Brandwatch (which incorporates the former Crimson Hexagon; high cost; covers Instagram, Facebook, Twitter, and Reddit),
  • NodeXL (low cost; covers Twitter, YouTube, and Wikipedia), and
  • DiscoverText (free trial; covers Twitter and RSS feeds).

All of these are powerful tools designed for nontechnical social media researchers. Some, like Netlytic, have detailed YouTube tutorials to walk you through the steps of data collection and analysis. There are also books and articles that describe how to use these tools, report results, and conduct analysis. Moreover, if you look for citations of the software through Google Scholar or another academic citation search engine, you will see how these tools have been successfully deployed to collect data.

Legal and Ethical Compliance

Collecting and reproducing platform data requires compliance with the platform’s Terms of Service. And it may involve other legal issues, such as the requirements of the European Union’s General Data Protection Regulation. Beyond formal legal obligations, researchers should also consider the ethics of collecting social media data—for example, when and how to handle personally identifiable information.

Unfortunately, legal and ethical principles can be hard to understand and apply, and they sometimes even seem contradictory. For example, Twitter’s display requirements state that developers should include full details of an account when quoting its tweets, but research ethics sometimes weigh in favor of anonymization to protect the account owner’s identity. Moreover, simply stripping out the username does not necessarily make it impossible to reidentify a user later, either via personally identifying information revealed in the tweet or by using a search engine to trace the actual content. These are just a few of the many gray areas that researchers face. The contours of these challenges vary from platform to platform, and they will change over time as platform policies, laws, and ethical standards evolve.
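
One common compromise, sketched below in Python, is to pseudonymize account handles with a keyed hash before storing or sharing data, so the same account always maps to the same code without the handle itself being retained. The column names and secret are hypothetical, and—as noted above—this alone does not rule out reidentification from the content of the posts.

```python
import hashlib
import hmac
import pandas as pd

SECRET_KEY = b"replace-with-a-project-specific-secret"  # hypothetical

def pseudonymize(username: str) -> str:
    """Return a keyed hash so the same handle maps to the same short code."""
    digest = hmac.new(SECRET_KEY, username.lower().encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

df = pd.DataFrame({"username": ["example_user"], "text": ["sample tweet"]})
df["user_code"] = df["username"].map(pseudonymize)
df = df.drop(columns=["username"])  # keep raw handles out of shared files
```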

Although definitive answers don’t exist for every problem and situation, there is a lot of collected wisdom that researchers can draw on. My advice is to turn to these sources early—before actually collecting any data. As a start, I would recommend reviewing the ethical standards for data collection developed by the Association of Internet Researchers (AOIR) in their “Internet Research: Ethical Guidelines 3.0.” For platform-specific ethics or user-centered insights, I would recommend consulting the following resources.

  • Journal articles that have surveyed ethical questions on relevant platforms, such as:
    • Katie Shilton and Sheridan Sayles, “‘We Aren’t All Going to Be on the Same Page About Ethics’: Ethical Practices and Challenges in Research on Digital and Social Media,” paper presented at the Forty-Ninth Hawaii International Conference on System Sciences, January 5–8, 2016, https://doi.org/10.1109/HICSS.2016.242;
    • Joanna Taylor and Claudia Pagliari, “Mining Social Media Data: How Are Research Sponsors and Researchers Addressing the Ethical Challenges?,” Research Ethics 14, no. 2 (2017): 1–39, https://journals.sagepub.com/doi/10.1177/1747016117738559;
    • Leanne Townsend and Claire Wallace, “The Ethics of Using Social Media Data in Research: A New Framework,” in Kandy Woodfield (ed.), The Ethics of Online Research (Advances in Research Ethics and Integrity, Vol. 2) (Bingley, West Yorkshire: Emerald Publishing Limited), 189–207, https://doi.org/10.1108/S2398-601820180000002008;
    • Helena Webb, Marina Jirotka, Bernd Carsten Stahl, William Housley, Adam Edwards, Matthew Williams, Rob Procter, Omer Rana, and Pete Burnap, “The Ethical Challenges of Publishing Twitter Data for Research Dissemination,” paper presented at WebSci ’17, Proceedings of the 2017 ACM on Web Science Conference, June 2017, 339–348, https://doi.org/10.1145/3091478.3091489; and
    • Matthew L. Williams, Pete Burnap, and Luke Sloan, “Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking Into Account Users’ Views, Online Context and Algorithmic Estimation,” Sociology 51, no. 6 (2017): 1149–1168, https://doi.org/10.1177/0038038517708140.
  • User-centered perspectives that aim to inform researchers of how users perceive ethics, such as:
    • Helen Kennedy, Doug Elgesem, and Cristina Miguel, “On Fairness: User Perspectives on Social Media Data Mining,” Convergence 23, no. 3 (June 28, 2015): 270–288, https://doi.org/10.1177/1354856515592507.

Even if the platform you are studying does not have established data collection practices, there are generally similarities between platforms where best practices can be discerned. In the domain of social media, for example, you can consult handbooks of social media research such as The SAGE Handbook of Social Media Research Methods and The SAGE Handbook of Social Media. Both have chapters that reflect upon best practices in legal issues, compliance, and ethics.

Reviews and Transparency

Regulations and norms on thorny issues such as web scraping, consent, and the involvement of minors vary tremendously based on many factors, including country and context. Outside reviews and methodological transparency can help researchers navigate these complex, interlocking compliance challenges.

Where possible, researchers and institutions would benefit from placing their studies under Institutional Review Board (IRB) review—even if they think IRB approval is not needed. Going through the steps of a so-called IRB exempt review for the collection of public data does add extra time to the research process. But it provides peace of mind to researchers and often assists in identifying any legal, ethical, or compliance issues.

If a researcher does not have access to a review structure, disclosing all methods and search queries used for data collection—including whether and how web scraping was deployed—is critical. This transparency enables the public to confirm that research methods were appropriate, contributing to an overall culture of accountability in the field.

Technical Expertise and Resources

Working with platform data presents several technical challenges. First is the need for high levels of technical ability, including programming, to use some (though not all) of these data sets. Particularly large data sets, such as those released by Twitter, require programming skills in languages such as Python to analyze effectively, and they often call for data science approaches and knowledge of more specialized tools and packages. This is a real challenge, as the learning curve is steep for organizations and individuals without a strong technical background.

Another challenge is the raw computing resources needed to process data sets that have millions of entries. Even smaller data sets, in the hundreds of thousands of entries, can be extremely resource intensive if your research goes beyond searching content to include higher order tasks like classifying or grouping content, perhaps using machine learning.

To illustrate, an algorithm that takes one second to process one post would take roughly a decade and a half to handle 500 million posts. I have moved tasks like this to my campus’s supercomputing infrastructure, because what takes a second on my computer takes a millisecond on a supercomputer with many cores, abundant memory, and powerful graphics processing units (GPUs). This makes analysis achievable in days rather than years, but again, it requires access to these resources and the technical knowledge to use them.
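
The arithmetic—and a first step toward parallelism on an ordinary multicore machine—can be sketched in a few lines of Python. The one-second-per-post figure and the classify function are placeholders, not benchmarks.

```python
from multiprocessing import Pool, cpu_count

POSTS = 500_000_000
SECONDS_PER_POST = 1.0  # illustrative assumption, not a measured benchmark

years = POSTS * SECONDS_PER_POST / (60 * 60 * 24 * 365)
print(f"Serial estimate: about {years:.1f} years")  # roughly 15.9 years

def classify(post: str) -> int:
    """Placeholder for an expensive per-post task, such as a classifier."""
    return len(post)

if __name__ == "__main__":
    posts = ["example post"] * 10_000
    # Splitting the work across N local cores (or supercomputer nodes)
    # divides the serial estimate roughly by N.
    with Pool(processes=cpu_count()) as pool:
        labels = pool.map(classify, posts, chunksize=500)
```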

Given the challenges of handling large data sets, it is critical to set a scope of work that can be realistically achieved with the computing resources and technical ability available. It can take multicore processors with large amounts of memory and many nodes on a supercomputer to get through the extremely large data sets that we see from the platforms—such as Twitter, Reddit, and YouTube—that are popular for influence operations research. Computational power is especially important for intensive tasks like network analysis. If in-house computing infrastructure is not available, you will need to buy, rent, or borrow infrastructure. Reaching out to a cloud services provider ahead of time for a cost estimate can help you plan and budget accurately. (Cloud service providers like Amazon Web Services and Oracle Cloud are popular choices.) Moreover, there are different technical learning curves that need to be overcome to deploy data and software on cloud computing services, whether they are at one’s own institution or elsewhere.

Summary

Collecting and processing platform data is often neither easy nor simple. However, an increasing number of resources exist to help researchers at all skill levels and budgets. Following a few basic principles can help you answer your research questions while adhering to various legal and ethical standards. The most important takeaways are to:

  • plan very early on and do not just jump in and collect data (unless data will be lost);
  • reflect on what computing resources are available while research questions are being developed and what budget constraints you may face;
  • consider using data collected by others to save time and resources;
  • think carefully about ethical, legal, and compliance issues;
  • consult established ethical practices, such as the AOIR ethical guidelines, and familiarize yourself with existing literature to discern some best practices for the particular platform you are collecting data from;
  • consult IRBs or similar units within organizations to ensure the highest levels of ethical standards in influence operations research (where available); and
  • be aware of the technical skills required to analyze data at scale.

About the Author

Dhiraj Murthy is a full professor of Media Studies in the Moody College of Communication and of Sociology at the University of Texas at Austin.

Return to top ↑

Data Visualizations

Carlotta Dotto

Network visualization tools have become indispensable resources for journalists and information experts who want to dig beneath the surface of social media conversations and research suspicious behavior online.

They can be used to visually map active communities on social media, reveal constellations of influential actors in an influence operation, and shed light on the most common tactics behind such operations; they can be used to understand how networks behave and look at what lies behind the dissemination of malicious information across social media platforms (see figure 9).

Network graphs have also become an increasingly common tool to share and promote academic work on social media. But what do they really represent, how should they be used, and how should researchers and journalists interpret them? In this chapter, I will explain the different uses of network visualizations and give a few tips on how to read them, drawing on a case study of how First Draft, a nonprofit organization focused on misinformation and disinformation, tracked an anti-Muslim influence operation that spread on Indian social media.

Overarching Principles in Network Analysis and Visualization

In graph theory, a network is a complex system of actors (called nodes) that are interconnected by relationships (called edges).

A relationship could mean different things—especially on social media platforms, which are fundamentally made up of connections. Network analysis therefore focuses on understanding the connections rather than the actors (see figure 10).

In a graph data set, each data point is a node, an edge, or an attribute. For example, two accounts on Twitter are the nodes of a network. One retweets the other, and this relationship is the edge. Attributes are the key properties of the network and can define the nature of the relationship: accounts’ names, creation dates, the number of times one account retweets another, computed scores such as centrality measures, and so on.

It can be a challenge to get the files for nodes and links in a usable shape. Not all data sets already include relationships, and even if they do, the relationships might not be the most interesting aspect to look at. Sometimes it’s up to the researcher to define edges between nodes.
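
For instance, a flat table of retweets can be turned into a graph data set with a few lines of Python using pandas and NetworkX. The column names here (“retweeter” and “retweeted”) are hypothetical; the point is that the nodes, edges, and attributes are defined by the researcher rather than given by the platform.

```python
import networkx as nx
import pandas as pd

# One row per observed retweet (hypothetical column names).
retweets = pd.DataFrame({
    "retweeter": ["acct_a", "acct_b", "acct_b", "acct_c"],
    "retweeted": ["acct_x", "acct_x", "acct_y", "acct_x"],
})

# Nodes are accounts; each directed edge means "retweeted"; the weight
# attribute counts how many times that relationship occurred.
edges = (retweets.groupby(["retweeter", "retweeted"])
                 .size().reset_index(name="weight"))
G = nx.from_pandas_edgelist(edges, source="retweeter", target="retweeted",
                            edge_attr="weight", create_using=nx.DiGraph)

# Node attributes (such as account creation dates) can be attached afterward.
nx.set_node_attributes(G, {"acct_x": "2021-05-13"}, name="created")
```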

Remember that networked data is everywhere—not only on social media or online—and the use and potential applications of network analysis tools can go far beyond the field of disinformation.

Take, for example, large-scale investigations of document caches such as the Panama Papers or Paradise Papers. Or, examine the graph models used to explore networks of donors to political parties. Network graphs can also be used in art, for example, to illustrate relationships between works in an art collection (as seen in Stefanie Posavec’s work) or to explore connections in European royal families (as in Nadieh Bremer’s beautiful project).

However, it is in the field of social network analysis and research on influence operations that these tools have offered the greatest scope for experimentation.

How to Interpret What You See

Once you have built a network graph, there are a number of elements that will define the structure of the visualization and the visual encoding of the network.

For example, degree centrality (or connectivity) describes the number of connections (edges) of a node, and it is conventionally illustrated by the size of the node—the more connections a node has, the bigger it is.

Weight is also one of the most important elements of a network graph: if an account retweets the same account multiple times, the weight of their relationship will be higher. Betweenness centrality, meanwhile, is a measure that identifies nodes acting as bridges between groups or clusters.

Another important attribute is direction, which helps explain the nature of a relationship. When one account retweets another, it creates what is called a directed connection. In directed networks, each node has an in-degree value (counting, for example, how many times an account is retweeted or mentioned by others) and an out-degree value (counting how many times it retweets or mentions others). Other connections have no direction—two accounts that are friends on Facebook, for example.

Network analysis tools use algorithms to calculate some of these measures. Filters, color gradients, node sizes, and edge shapes can be used to show particular attributes of the network. Position is also an important element of visual encoding: think of layout algorithms as the driving forces that shape the constellation of nodes, where gravity pulls together the nodes that are most connected.
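
The following minimal sketch, assuming the NetworkX and matplotlib libraries and a toy retweet network, computes the measures described above and draws a force-directed (“gravity”) layout in which node size encodes in-degree and edge width encodes weight.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Toy directed retweet network: an edge (a, b) means "a retweeted b."
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("acct_a", "acct_x", 3), ("acct_b", "acct_x", 1),
    ("acct_b", "acct_y", 2), ("acct_c", "acct_x", 1),
])

in_deg = dict(G.in_degree())     # how many accounts retweet this node
out_deg = dict(G.out_degree())   # how many accounts this node retweets
betweenness = nx.betweenness_centrality(G)  # nodes that bridge clusters

# Force-directed layout pulls well-connected nodes together; node size
# reflects in-degree and edge width reflects weight.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(
    G, pos,
    node_size=[300 + 400 * in_deg[n] for n in G.nodes()],
    width=[G[u][v]["weight"] for u, v in G.edges()],
)
plt.axis("off")
plt.savefig("network.png", dpi=200)
```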

Network analysis is therefore the activity of playing with filters, attributes, visual elements, and algorithms that will help you gain important insights. At the same time, always go back to the original question you wanted to investigate: For example, who is speaking about a specific topic? Which accounts are most influential?

An Anti-Muslim Case Study

As a practical example of this, along with the First Draft team I used network visualizations to analyze a network of Twitter accounts that was actively spreading anti-Muslim sentiment on Indian social media in May 2021.

The network was artificially pushing anti-Muslim messages, “using newly created accounts posting the same text and amplifying specific hashtags” (see figure 11). Among others, the hashtag #UnitedAgainstJehad trended on Indian social media, sparked by an open call to propel an anti-Muslim narrative to prominence.

We noted that “among the key narratives pushed using the hashtag was the idea that Muslims posed a threat to the Hindu way of life, and that they threatened India’s national security.” This was part of a “wider context: an active effort to spread harmful mis- and disinformation about the Muslim Indian community.”

We used network tools to trace the origins and tactics behind the campaign. In the few hours after the first time the hashtag was posted on May 12, 2021, “likes and shares poured in and by May 13, the hashtag had already appeared over 11,000 times, producing nearly 70,000 interactions on Twitter.”

The network diagram maps the accounts that tweeted the #UnitedAgainstJehad hashtag (see figure 12).

Each node represents an account, and each connection between two nodes is a retweet. The size of each node reflects how often the account was retweeted within the network, revealing the accounts that were most influential in making the hashtag trend.

Network analysis also helped disclose a so-called copypasta campaign: the content of the accounts’ most influential posts was copied and pasted hundreds of times by other accounts, and the small clusters of accounts next to the most retweeted accounts were likely engaged in copying and pasting the more influential accounts’ tweets.

Another use of network visualization was to highlight the role that old and new accounts alike played in promoting the hashtag. A cluster of accounts sharing a recent creation date is another widely accepted indicator of suspicious network activity.

The network graph shows accounts that posted the hashtag (see figure 13). This time, the size of the circles shows the volume of tweets posted by each account (node). Red nodes represent accounts that were created in May 2021. Some of the accounts in the data set were created on May 13, 2021—the day the hashtag reached its peak. This indicates that these accounts were likely created for the explicit purpose of artificially pushing the hashtag. The diagram shows that some of the accounts that were created in May 2021 were also among the accounts that tweeted the most.
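
A simple way to surface this indicator—sketched below with pandas and hypothetical column names—is to compare each account’s creation date with the hashtag’s peak day and measure what share of the posting accounts are that new.

```python
import pandas as pd

# One row per tweet containing the hashtag (hypothetical data).
tweets = pd.DataFrame({
    "account": ["a1", "a1", "a2", "a3"],
    "created_at": pd.to_datetime(["2019-02-01", "2019-02-01",
                                  "2021-05-13", "2021-05-13"]),
    "tweeted_at": pd.to_datetime(["2021-05-13"] * 4),
})

peak_day = tweets["tweeted_at"].dt.date.mode().iloc[0]        # busiest day
tweets["new_account"] = tweets["created_at"].dt.date == peak_day

share = (tweets.loc[tweets["new_account"], "account"].nunique()
         / tweets["account"].nunique())
print(f"{share:.0%} of accounts posting the hashtag were created on the peak day")
```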

Network visualizations helped examine the influence campaign and chart how trolls and far-right influencers were pushing viral, anti-Muslim propaganda. At the same time, the network illustrates a mix of old (in gray) and new (in red) accounts that actively participated in spreading the hashtag—demonstrating the complexity of actors and how an influence campaign is often the result of a heterogeneous combination of different types of accounts.

Network Visualizations: Not Just for Fireworks

As demonstrated, network analysis is a versatile tool full of uncharted potential. Other examples of rigorous analysis include the work of open-source intelligence expert Benjamin Strick, such as when he unveiled a pro-Chinese government information operation on Twitter and Facebook for Bellingcat, or the visualizations that researcher Erin Gallagher has shared on her Twitter profile depicting, among other things, the spread of viral propaganda by far-right and QAnon groups.

Network analysis is far from being a fixed science—the world’s best academics are still exploring its complex, multifaceted approaches, and there is still much to comprehend.

Yet, there are pitfalls to relying on this type of analysis. Due to their complicated nature, network visualizations can easily be misused, misrepresented, and misinterpreted—and their application requires a great deal of caution.

One of the main challenges when approaching this science is to understand what type of relationship you want to analyze and to correctly build a graph data set that represents that connection.

There are step-by-step guides to help you collect social media data and format it correctly so that it can be read and understood by network analysis tools.

Next, you need to determine how much data you want to collect and the time interval on which to focus the analysis. One of the most common mistakes in visualizing networks from social media is to display too much data—or too little. When a network is too large and densely interconnected (meaning that a large number of nodes corresponds to a far larger number of edges), it will be nearly impossible to draw any interesting conclusions from it.

Then, use filters and attributes to focus the analysis and present readers with a manageable volume of connections. At the same time, visual aids such as colors, sizes, and annotations can help highlight and guide the analysis in the right direction.

Even when a network visualization is well constructed, reading and interpreting it correctly can be challenging. Therefore, it is important to provide your audience with the tools and information they need to effectively grasp the message you are trying to communicate using the network map.

As impressive as they may seem, network visualizations are by no means conclusive graphs. They are first and foremost exploration and research tools that can make invisible connections visible. Beyond the beautiful aesthetics, it’s easy to end up with uninsightful graphs that say little beyond fireworks.

A Few Recommended Tools for Visualizing Networks

Several tools are available for sketching and analyzing the networks that you produce in the course of your investigations, and some allow you to gather the necessary data directly. Among the best free tools are:

  • Gephi, the most widely used tool for network analysis. It is an open-source program that allows you to visualize and explore graphs, and it doesn't require any programming knowledge. It can handle relatively large data sets—the actual size will depend on your infrastructure, but you should be able to go up to 100,000 nodes without a problem.
  • Neo4j, a powerful graph database management technology. It is well known for being used by the International Consortium of Investigative Journalists for investigations such as the Panama Papers.
  • Flourish, which includes network graphs in its data visualization options. This program is a good place to start to get a grip on the dynamics and shapes of the data involved. It even has a virtual reality network visualization template.

If you are familiar with coding, you might want to explore Python’s network analysis library NetworkX or the igraph package, which is available for both R and Python. If you are comfortable with JavaScript, libraries such as D3.js are well suited to graph drawing.

About the Author

Carlotta Dotto is a visual data journalist and trainer who specializes in data investigations, interactive storytelling, and data visualizations. For over two years, she worked for the nonprofit organization First Draft developing pioneering techniques to research and investigate disinformation campaigns, online extremism, and media manipulation. She now works as a visual editor at CNN and has written for several UK national and international newspapers.

Return to top ↑

Media Coverage of Influence Operations

Olga Robinson and Shayan Sardarizadeh

Spotting, investigating, and reporting on influence operations, sponsored by either foreign or domestic entities, is often a complex and lengthy task.

Busting a network of inauthentic accounts on a major platform is a rewarding investigative project for journalists and researchers. But there are common mistakes that can potentially lead those working on a project to draw the wrong conclusions.

Some of the common mistakes include misattributing the potential perpetrators, overestimating or underestimating the impact and size of an operation, and rushing to judgment or misreading the nature of an operation before all the evidence has been gathered.

Any of those errors could lead to a report that misleads readers or even ends up helping bad actors—far from the goal of shining a light on coordinated disinformation.

In this article, we review some of the common pitfalls investigators ought to be wary of, and then we offer some tips on what reporters should watch out for in research on influence operations.

Finally, we suggest how both researchers and major social media platforms can help journalists with better and more transparent reporting of influence operations.

Attribution

One of the most common mistakes journalists and researchers make is attributing an influence operation to an actor before collecting sufficient data.

Once a number of accounts have been identified and confirmed as part of a larger network, it is sometimes automatically assumed that the network’s identity, location, and motivation can also be confirmed by investigators.

This is a misconception that risks delegitimizing an otherwise solid investigation. It is very difficult to assess the location of an account or attribute an operation to an entity. Doing so often requires access to data beyond what is available to most journalists or researchers, and it is almost always a probabilistic judgment, not an absolute certainty.

Social media platforms are among the few actors with sufficient data to routinely make such attributions. In the case of sophisticated state-sponsored operations, this can also require collaboration with government agencies such as the U.S. Federal Bureau of Investigation. In some instances, collaboration between social media companies, cybersecurity entities, government agencies, and investigators can result in higher-confidence attributions.

Even then, drawing conclusions about an operation’s links to a state-backed actor can be tricky. For example, ahead of the 2019 general election in the UK, the social media platform Reddit attributed leaked confidential documents detailing UK-U.S. trade talks to a campaign “originating from Russia”; but the company’s statement and other research into the matter left open the crucial question of whether the Russian government was involved.

Given the complexity of the attribution process, any report claiming it can locate or attribute a network without providing concrete evidence should be treated with skepticism. Ideally, attributions should be expressed in terms of probability or confidence levels. A logical argument underpinning the attribution should be offered, along with any evidence gaps, assumptions, and alternative possibilities that were considered.

The safest way of reporting about a network is to present a detailed list of the accounts you have spotted to the relevant platform(s) and ask whether they are able to make an attribution after an internal investigation.

Impact and Size

It is also important to be transparent about the impact of an influence operation and how the conclusion about its supposed success—or lack thereof—has been reached.

Measuring the impact of an influence operation is one of the hardest tasks for disinformation researchers and journalists working on the beat.

It is a complex process, often involving multiple unknowns. And it is easy to get wrong.

For one thing, most practitioners can only make educated guesses—at best—about what the actual goal of an operation might be. Without this knowledge, it is very difficult to tell for sure whether a specific campaign has been a success.

Researchers and journalists may only have access to a part of a network and not its entirety, making reliable assessment of its impact more challenging to pin down.

In addition, an operation’s goals may change over time, which would inevitably affect how its potential impact can be measured.

Taking these challenges into account, it is important to report on operations responsibly by providing as much context and evidence to support your conclusions as possible.

For example, at the height of the coronavirus pandemic, we investigated a set of fake or hijacked accounts promoting pro-Chinese government messages about the crisis. The network consisted of more than 1,200 accounts that posted across platforms like Facebook, YouTube, and Twitter.

Despite posting content prolifically, the network appeared to have a relatively low impact. Many accounts had no more than several hundred followers or subscribers, and they were largely amplified by fellow accounts in the network rather than genuine users.

We mentioned some of these observations early in the article to make it clear to the reader that the relatively large number of accounts in the network and its cross-platform operation did not necessarily mean its impact had been significant.

The same principle applies to reporting and making an assessment of an operation’s size.

It is important to be transparent about how large or small a network is, regardless of its origin and potential links to a state actor or a well-known source of disinformation.

Ahead of the 2020 presidential election in the United States, Facebook shut down a network of two pages and thirteen accounts it attributed to “individuals associated with past activity by the Russian Internet Research Agency (IRA).” The network focused primarily on amplifying an outlet called PeaceData, which posed as an independent news website.

Although the PeaceData operation reportedly had links to the IRA, its size did not compare to the thousands of accounts the troll farm created across multiple social media platforms ahead of the 2016 election—a distinction that some early media reports about the operation did not make immediately clear to readers.

In the interest of accuracy, it is best to mention the size of the recently shut-down network early on and avoid overhyping it for the sake of a catchy headline.

Evidence

When you come across an explosive claim online, which appears to be backed up by evidence, it’s tempting to share and report on it as soon as possible.

But this is exactly the moment when it’s best to stop and think: Does this piece of evidence constitute actual, verifiable proof? Can there be another explanation for what you’re seeing?

In April 2020, the BBC’s specialist disinformation and social media correspondent Marianna Spring noticed claims on Twitter that the UK Department of Health and Social Care (DHSC) was running a network of Twitter accounts posing as National Health Service (NHS) staff.

Spring notes that the department dismissed the allegations as “categorically false,” and Twitter said it had not seen proof that would link a network of fake accounts to the government.

In addition, the accounts, which were allegedly fake pro-government “bots,” were seemingly deleted.

Apart from Twitter, the claims also appeared on Facebook, Spring found. But the only two pieces of evidence to support the allegations were a screenshot of one of the supposedly fake accounts in the network and an archived version of several of their tweets.

This account—called “NHS Susan”—claimed to be that of a junior doctor. Its profile picture used a photograph of a real NHS nurse who happens to have a different name. Its Twitter bio said she was “transitioning in 2020” and was “fighting COVID on behalf of all LGBTQ & non-binary people.”

According to Spring, this description, as well as the wording of some of the posts tweeted out by “Susan,” suggests that the account might have been satirical or deliberately provocative.

Later, Spring got in touch with the person who first made the widely shared claims on Twitter, but they did not provide any additional evidence or proof of how they had discovered the evidence of the alleged network. Two years on, there is still no verifiable evidence the accounts were run by the DHSC, or that a network of fake accounts existed at all.

The story highlights how important it is to take the time to unpack and analyze what is presented as concrete proof online. After all, reporting on a story that is not supported by reliable evidence may undermine the credibility of your work for years to come. And that’s a high price to pay.

Another high-profile example, highlighting the importance of erring on the side of caution before rushing to judgment, is the racist abuse of three England football players on social media following the team’s defeat against Italy in the final of the 2020 UEFA European Football Championship.

In the immediate aftermath of the match, some media reports suggested that most of the abusive posts came from anonymous foreign accounts based outside of the UK. This prompted calls to end online anonymity to reduce the level of abuse posted on major social media platforms and to adopt a more robust approach to guarding against foreign-based online campaigns.

But an investigation by Twitter a month after the incident found not only that the UK was “the largest country of origin” for the majority of the abusive content Twitter had removed but also that the vast majority of the accounts tweeting racist or abusive content were identifiable and not anonymous.

In the first few days after the match, 1,961 tweets had been removed, and only 2 percent of them had generated more than 1,000 impressions before removal. Considering that the UK has a population of nearly 67 million people and fewer than 19 million Twitter users, the racist abuse came from a small fraction of British Twitter users.

Following revelations about Russian interference in the 2016 U.S. election, it has become a frequent occurrence for some politicians or partisan sources around the world to pin either a disruptive political incident or any form of online campaign aimed at Western nations on a nefarious campaign coming directly from Moscow or Beijing, without any evidence to back up such claims.

Major conspiracy theories such as QAnon are sometimes presented by reporters and researchers as a possible foreign interference campaign emanating from Russia, despite no evidence of Russian origin found to date.

Such claims can undermine trust in future evidence-based investigations alleging interference from a foreign state or entity, including from Russia or China.

The best approach when it comes to influence operations is to always stick with evidence that stands up to scrutiny, and when such evidence is not available, reporters should refrain from making allegations based on conjecture.

What Influence Operations Researchers Can Do for Reporters

When reporting on a newly exposed influence operation, journalists are often looking for the same things as in any other story. Researchers can help reporters out by emphasizing the below elements.

First of all, why does this story matter and will it resonate with a general audience that does not have any specialist knowledge? It can be hard to answer these questions straightaway, so it helps when researchers provide key highlights of their investigation early on and give a clear explanation as to why their discovery is important to a general audience.

Transparency is another crucial factor. For any responsible reporter, it is important to understand how exactly a researcher came to a certain conclusion, whether their evidence holds up, and what their motivation is. Many media outlets, though not all, require journalists to report impartially and to avoid inserting personal opinions or political biases in their work. Researchers and academics would make a reporter’s job easier by being honest about their own motivations, political biases, sources of funding, and affiliations.

That transparency allows reporters to contextualize the information provided to audiences and to look for a variety of experts from different walks of life with a range of political, cultural, and social backgrounds, to avoid giving the impression that they are siding with a specific point of view.

If a researcher or online activist refuses to share their evidence, show how they work, or answer additional questions about their findings and what motivated them to start the investigation, it is far less likely their story will be picked up.

Once the story is being worked on, it’s helpful for researchers to be as responsive as possible to follow-up questions from the journalist.

Reporting on some influence operations may require quite a lot of background knowledge, which not all journalists will have. That is why it is good practice for researchers not to assume knowledge when communicating with reporters. Avoid obscure and highly technical terms and provide essential context where necessary.

While tech and disinformation reporters usually have some knowledge of coding, hacking, and computer science, they are journalists first and foremost and are not as technically proficient as investigators in research-focused institutions and academic environments generally are.

Investigating influence operations requires some degree of technical prowess that may be above and beyond the skill sets of many journalists. Therefore, presenting the data set or research findings with as much context and in the simplest language possible will help journalists understand the significance of the findings better as they attempt to put together a story pitch for an editor.

How Platforms Can Help Reporters

Another issue frequently raised by journalists investigating influence operations is the way such operations are both reported and taken down by major tech platforms. The level of data and detail provided varies from platform to platform, and each approach has its own upsides and downsides.

Meta, Facebook’s parent company, releases regular reports detailing takedowns of operations on the platforms it owns. Facebook is often the largest tech platform used by bad actors for what Meta refers to as “coordinated inauthentic behavior.”

While these reports often provide an abundance of information about the origins, size, and impact of the influence campaigns the company has spotted and removed, they include no more than a handful of examples of the content posted by the accounts involved in each operation.

This is in direct contrast to how Twitter approaches its takedowns. After every takedown, the company makes large data sets of tweets, profile information, and multimedia from accounts attributed to state actors available to relevant journalists, researchers, and academics.

Meta’s reasoning for not doing the same often includes privacy concerns and the fact that those operations are primarily removed for their behavior rather than their content.

But having access to content posted by removed influence operations allows reporters, researchers, and academics not only to study the posts to figure out the ways that bad actors manipulate audiences but also to be able to recognize evidence of wrongdoing in the future should another operation replicate content posted by one previously removed.

In fact, several influence campaigns removed by Facebook in recent years have posted memes previously used by the IRA, sometimes with slight alterations, which itself is a helpful clue for investigators looking to spot new influence campaigns.

Meta has recently announced it is working on an initiative to start sharing more data with researchers and academics about “malicious networks” and information operations, which would hopefully go some way to addressing the problems cited above.

However, Meta disbanded the team behind CrowdTangle last year. CrowdTangle, an analytics tool owned by the company, is widely used by academics, researchers, and journalists to spot viral content and misinformation trends, including influence operations, on Meta’s platforms.

The decision, according to the New York Times, was influenced by concern among senior Meta executives about critical news stories where CrowdTangle data had been used.

The company also paused access to the tool for new joiners in early 2022, citing “staffing constraints.”

Collaboration between tech companies, researchers, academics, and reporters is essential to help identify new trends in influence operations, potentially discover the motives behind them, and better inform the general public about the risks posed by such campaigns.

About the Authors

Olga Robinson is a journalist reporting on disinformation, conspiracy theories, and media manipulation for BBC Monitoring. She also observes the Russian media and focuses on the use of disinformation techniques within the country.

Shayan Sardarizadeh is a journalist reporting on disinformation, conspiracy theories, cults, and extremism for BBC Monitoring. He also conducts open-source investigations with a focus on content verification.

Return to top ↑

Encouraging Diversity of Staff

Camille Stewart Gloster

Research without the benefit of diverse perspectives is incomplete. That may be viewed as a bold statement, but research from one perspective is inherently limited. This is particularly true when studying influence operations, which thrive or wither based on the societal contexts in which information is moving. Even when research conclusions may be correct, the choices made and insights gained during the course of the study can often still be enriched by different viewpoints.

As such, diversity is essential. And while diversity of perspectives and disciplinary backgrounds is important in influence operations research, it is no substitute for diversity of ethnicity, gender, race, and other identity attributes.

Influence operations can have a disproportionate impact on specific communities or identity groups, especially those that have been historically marginalized and remain vulnerable, either to being misled or to the consequences of a given narrative’s spread. Therefore it is incumbent upon influence operations researchers to incorporate the perspectives of those groups (and researchers from those groups) to better identify, understand, and—when relevant—respond to influence operations. Lack of representation has the potential to cause a range of impacts including skewed choice of research topics and methods, incomplete or inaccurate framing of research agendas, and incomplete or inaccurate findings and recommendations. The ability for influence operations research to effectively influence governance, strategy, and mitigations hinges on a complete understanding of the threat and its impact on people. Broad categorizations of people that do not account for different or outsized impacts on particular subsets are unlikely to result in comprehensive understandings of the problem space.

Assessing the full impact of influence operations often requires cross-disciplinary analysis of culture, geography, linguistics, and social dynamics. Without an intimate understanding of the differing social perspectives, history, and lived experiences of people with different identities—something that best comes from representation of the affected communities—examinations of influence operations are incomplete. Diversity allows for deeper understanding, new insights, innovation, and challenging the status quo. In a world where social media can in some cases carry a message much further than propaganda of the past, the benefits of diversity are necessary to understand, and effectively combat, influence operations.

Russia’s weaponization of race in its influence operations ahead of the 2016 U.S. elections helps illustrate the importance of societal context—as is also evident from the difficulty that nondiverse teams encounter when trying to adequately understand and address the information threat. In 2016, the U.S. government did not anticipate or counteract Russia’s targeting of Black Americans in a timely manner. Since then, the recognition of this strategy has caused the U.S. government and global social media companies alike to account for the kind of chaos and division fueled by identity-based information operations. As Army Cyber Institute officer Maggie Smith outlines in a 2021 article on the importance of diversity in counter-disinformation,

“Regardless of origin, foreign manipulation and influence activities in the context of great-power competition are potent because they work within existing social fault lines to raise tensions on both sides of divisive public issues (e.g., gun control, racism, and abortion) and to reinforce peoples’ fears and prejudices.”

The racial inequity that underpins U.S. society has long been weaponized to influence democratic outcomes, radicalize citizens, and exacerbate division.

Influence operations often target divisive issues and may sometimes be intended to widen or exploit these divisions at a societal level rather than target specific audiences. However, many of the most divisive issues globally have outsized impacts on minority or marginalized communities. Where these communities are affected, research teams without an intimate understanding of them cannot appropriately scope research, fully grasp an operation’s impact, or evaluate the efficacy of proposed solutions. In the context of recent U.S. elections, Russian operatives have targeted both White and Black voters with different messages—trying to increase anger and distrust among White conservatives while working to lower Black voter turnout, using different accounts and narratives to target the two audiences. However, the impact on Black and other minority communities is outsized. Whereas White conservatives may have been affected primarily at the level of attitudes by Russian operations, successful efforts to “confuse, distract, and ultimately discourage” Black Americans from voting harm them more concretely by deepening their exclusion from democratic systems and thereby exacerbating inequities over time. Therefore, any effective anti- or counter-narratives must consider how Black, Latino, and other minority communities experience U.S. society.

Understanding how, and why, the Black community in the United States reacts to such narratives requires a deep, detailed understanding of the oppression faced by this community. This means understanding the mistrust of systemically racist governments and institutions, including police and politicians; the current-day socioeconomic manifestations of oppression; and the limitations on access to relevant information faced by large segments of the community. While all researchers can and should be aware of these issues, studies of influence operations that involve representatives from the affected community are—all else being equal—likely to be both more rigorous and more accurate than those which do not. Including those perspectives can be a complex endeavor, as communities are not a monolith. However, some inclusion of relevant community members provides a greater opportunity to be aware of, and understand, relevant dynamics.

While organizations, and specifically influence operation researchers, must do the work to create diverse teams that not only understand the context through personal experience but also bring an innovative perspective, we must all also acknowledge that every identity cannot be represented on every team. This is why cultivating teams that are conscious of diversity, challenged to think outside of their personal circumstances, and exposed to the cultures and lived experiences of others is also extremely important. We can all see how that consciousness is changing the global discourse about diversity, particularly (in many countries) around race, gender, and intersectional identities. Whether by bringing in diverse speakers or providing relevant training, there are opportunities for researchers to cultivate an inclusive environment that not only allows for innovation but is welcoming of new voices and colleagues.

Diversity should span race, gender identity, ethnicity, socioeconomic background, geography, religion, and other identity attributes, with prioritized focus on including underrepresented or marginalized groups given the local context—for example, in the United States, different races, ethnicities, and gender identities. Influence operations researchers must avoid the instinct to define diversity too broadly to mean simply individual differences among people, rather than focusing on historically excluded groups nationally and/or globally. This dilution of diversity often results in a meaningless effort to highlight individual interests or nuances without addressing the underlying structural issues. Intersectionality is also crucial to building diverse teams: if a team’s focus on gender means it has only recruited White women, it should reevaluate how race can be layered into its outreach and recruitment to ensure a truly inclusive focus on women.

Recognizing that influence operations research is enriched by diverse perspectives, it is incumbent upon research teams to seek out diversity. There are a number of barriers to entering this field, including but not limited to a lack of access and exposure, a lack of financial resources, a lack of network diversity, and geographic constraints. These barriers can make recruitment more difficult, but not impossible. There is a diverse group of practitioners in this field, or in relevant adjacent fields, whose expertise would lend itself to meaningfully contributing to national security, foreign policy, and cybersecurity. The industry is rich, albeit not abundant, with diverse talent and perspectives. Additionally, there are ways to diversify research inputs that do not require a team to represent every potentially relevant identity or background, such as contracting diverse reviewers and working with diverse and/or local partners.

As leaders building influence operations research teams, you should embrace diversity without fearing the necessary shifts in process and outcome. Recruitment may look different and require a bit more intentionality and work. Engagement with partner organizations and communities may need to look different in the long run; however, this additional work will be well worth the investment to gain deeper insights, better reflect communities in your work, and provide innovative solutions. Here are a few ways to incorporate diverse perspectives into influence operations research work and teams:

  • Build a diverse team. This may require additional investment in recruiting and a willingness to do the research and networking necessary to find the right participants. There are a lot of talented practitioners already working in this space. They may not be in your network, but that does not mean they do not exist. Leverage resources like NextGen NatSec and #ShareTheMicInCyber, and if you cannot find these resources, build them. I created these initiatives because I recognized that these communities, although present and talented, did not have sufficient visibility or a collective voice. That absence meant an assumption that there was a dearth of talent among these diverse communities; this could not be further from the truth. While initiatives like this may start small, efforts that include the voices of the community you seek to reach grow rapidly because the talent is in circles you have not yet encountered. Additionally, they serve as beacons for future entrants into the space, creating pipelines because of representation and exposure.

    Recruiters and hiring managers should understand the actual skills required to contribute to an influence operations team, including those that can be taught, and ensure they are not limiting themselves to past indicators of those skills when identifying potential candidates. These past indicators can include university affiliations, degrees, past work experience, and so on. There is still a great deal of elitist credentialism in this field, especially around university affiliations and internship placements. Many candidates who show potential and a strong work ethic continue to be overlooked. There are multiple pathways to developing the skills necessary to do a job, and confining recruitment to programs and profiles that have traditionally been successful limits the candidate pool and the opportunity to enrich the perspectives working on the issue.
  • Consider the circumstances of participants. When conducting interviews or focus groups, consider paying participants for their time and experience. This may help overcome some of the economic barriers to participation. If a practitioner is affiliated with an institution, earning a salary, or has been contracted for support and is therefore already appropriately compensated, this may not be a consideration. However, as you engage independent researchers, people in low- or middle-income countries, or people potentially putting themselves at risk by engaging, make compensation part of the engagement strategy. It may not only help you recruit more participants but also ensure you are not unduly burdening the people you are engaging.
  • Build diverse networks and partnerships. You will be able to lean on these networks to gain a better understanding and to source new talent and insights. Partner with organizations that operate in the communities relevant to your work. That may be a civil rights organization in the United States or a nongovernmental organization in another country with the relevant societal context and expertise.
  • Create mechanisms to leverage people who are not trained influence operations researchers. These people will be able to inject societal context, specific expertise, and diverse perspectives at different stages of the work. When I conduct research, I convene a diverse brain trust of experts with a variety of backgrounds, ethnicities, disciplines, and other characteristics to react to the work at different stages. I am always rewarded with insight and understanding previously beyond my reach, and I encounter innovative approaches to the work. This is also a great opportunity to identify potential candidates for additional investment and training to build the pipeline of talent. Depending on your specific research area and project, there will be other, creative ways to incorporate diverse perspectives—seek these out.

Organizations seeking to make meaningful impact in the area of influence operations should diversify the perspectives represented within research teams. A culture of inclusivity and openness contributes to improved outcomes. The most impactful organizations will invest in short- and long-term strategies to promote diversity in their work and the field at large. If influence operations research continues to suffer from a lack of representation, especially of historically marginalized groups, then research findings will remain incomplete. Persistent knowledge gaps would, in turn, make global digital public squares harder to responsibly govern—rendering societies more vulnerable to the whims of demagogues, foreign manipulation, and even physical threats like disease.

About the Author

Camille Stewart Gloster is an Atlantic Council DFRLab nonresident fellow and the global head of product security strategy at Google.


The Risk of Gender Bias in Conducting and Reporting on Influence Operations

Kristina Wilfore and Alcy Stiepock MacKay

Introduction

A growing body of research shows that women in politics are disproportionately targeted by gendered disinformation campaigns featuring fake stories, threats, and humiliating or sexually charged images. Gendered disinformation aims to undermine democratic systems and discourage women from entering leadership roles or participating in public life. Such attacks are pervasive globally, particularly against high-profile women electoral candidates and women journalists. The unique volume, content, and impact of this subset of disinformation make it an essential focus when investigating influence operations.

To research the harm to women leaders and democratic systems, it is essential to examine the intersection of disinformation and gender identity, ethnicity, race, and religious affiliation. Investigators have a critical role to play in studying gendered disinformation within larger influence operations. Research teams should seek to recruit subject matter experts on information pollution and its impact on gender. A research team with diverse expertise and backgrounds can help maintain the cultural understanding necessary for an analytic focus on the role of gender in disinformation campaigns.

Gendered Disinformation: Defining the Problem and Existing Research

Definitions

There are multiple working definitions of gendered disinformation. But ultimately, gendered disinformation refers to the intentional spread of false information, allegations, or character attacks against persons or groups based on their gender identity, or using gender as an angle of attack, as well as to the ecosystem in which this information is organized. This phenomenon is separate from gender-based violence (GBV), which involves targeting and abusing individuals based on their gender identity.

The concept of online GBV often gets confused with gendered disinformation, when in fact the two are similar but distinct phenomena. Online GBV consists of threats, abuse, hate speech, harassment, and smear campaigns on the internet, sometimes with the goal or potential of facilitating violence off-line. Online GBV and harassment involve targeting and abusing an individual based on their gender identity.

Lawyer Cynthia Khoo coined the umbrella term “technology-facilitated gender-based violence” (TFGBV), which covers a spectrum of activities and behaviors, including both online GBV and gendered disinformation, made possible by the central role that digital platforms play in perpetuating violence, abuse, or harassment against cis and trans women and girls. Khoo argues that digital platforms have simultaneously provided new and efficient mechanisms for abusive users to engage in TFGBV.

As researchers, we cannot focus solely on manifestations of online GBV. We should take a more holistic approach to investigations to improve our understanding of the role of technology and digital platforms in harassment and of the tactics of malign actors, whether individuals or organized campaigns.

The Organization and Spread of Gendered Disinformation

Gendered disinformation is the combination of three vectors of threats facing women leaders on social media: sexism and misogyny engendered by the so-called manosphere (“disparate, conflicting and overlapping men’s groups . . . shar[ing] a hatred of women and strong antifeminism”); online violence; and disinformation.1 Gendered disinformation campaigns are pervasive around the world, yet they are still mostly overlooked by traditional disinformation research. Mainstream research and investigations into disinformation campaigns often treat gender as an afterthought, if it is included at all. Yet, gendered disinformation campaigns specifically build on, and are rooted in, deeply set misogynistic frameworks that reinforce a reductive, binary view of gender. These campaigns portray people with supposedly masculine characteristics (such as outspokenness or combativeness) as fit for leadership, while casting those with stereotypically feminine temperaments (including sensitivity or a supposed tendency toward irrationality) as inherently unfit to lead. As part of this simplified conception of gender, women who do display perceived masculinized leadership qualities are often framed as unnatural and subversive and therefore also become undesirable candidates. These false perceptions are reinforced through a selection of narratives rooted in classic misogynistic stereotypes.

Gendered disinformation narratives portray women as untrustworthy, unqualified, unintelligent, and unlikable. One constant, underlying theme of gendered disinformation is that women in politics are too emotional or libidinous to hold office or participate in democratic politics. Sexualized attacks are a constant backdrop to disinformation aimed at discrediting women, with pervasive memes and graphics accusing women of “sleeping their way to the top” or being sexually promiscuous. Another widely circulated narrative of abuse is rooted in transphobia as well as misogyny. By suggesting that women leaders are “secretly transgender,” some attacks play into the trope of duplicitous women. This not only frames transgender individuals as inherently deceptive but also implies that “this deception is responsible for the power and influence that these women hold.” For example, New Zealand Prime Minister Jacinda Ardern was the target of online abuse insinuating she was transgender after a video claiming that a pleat in her dress was male genitalia circulated online. Women politicians in the United States—including Vice President Kamala Harris, Michigan Governor Gretchen Whitmer, former first lady Michelle Obama, and Congresswoman Alexandria Ocasio-Cortez—have all been attacked with this same narrative.

Social media acts as an amplifier for these misogynistic lines of attack, and, with such a widespread base of users across societies and countries, coordinated attacks can spread with ease. Approximately 42 percent of women legislators reported that sexualized images of them were spread online and that the offending posts were often shared through fake accounts and/or bots in a coordinated way. And 85 percent of women globally reported witnessing online violence against other women from May 2019 to May 2020.

Common tactics among the misogynistic social media influencers who drive these attacks include creating Instagram accounts to spread harmful images that mock women leaders for their appearance or editing images of male celebrities and politicians to make them look like women.2 Such tactics do not violate digital platform policies, yet they are a pervasive part of the gendered disinformation formula, exposing a failure of platform moderation.

When digital platforms do not enforce their own rules, malign actors can easily take advantage of weak self-regulation structures. Social media platforms do not consistently punish repeat offenders or remove posts that violate their terms of service. The curation of content based on engagement and attention, rather than quality, further impedes efforts to address gendered disinformation. For example, Meta, Facebook’s parent company, has policies against threats of violence, hate speech, violent and graphic content, nudity and sexual activity, cruel and insensitive content, manipulated media, deepfakes, fake accounts, and coordinated inauthentic behavior. Yet digital forensic analysis, and even unsophisticated searches, demonstrate that gendered disinformation violating such policies is still thriving on the platform.

Disinformation analysts point to inconsistent policy enforcement and a lack of sanctions for rule-breakers as some of the main problems contributing to an unsafe environment for women and an uneven playing field for women in politics. Opaque platform data that prevents independent oversight or research is also a problem, as is platforms’ reliance on external researchers to reactively flag violations. Overall, weak supervision and a lack of transparency in decisionmaking make the technical challenges of curtailing disinformation even more complex.

Research Gaps

There are many areas in which research and investigations could expand the current understanding of gendered disinformation and offer valuable insights on how to mitigate its harmful impacts.

Gendered disinformation network and narrative analysis. Most actors working on gendered disinformation approach these attacks as the result of misogyny and patriarchal social norms, and they focus on supporting women to find individual protection from online harms. This framing categorizes these attacks within the frameworks of GBV and violence against women in elections (VAWIE), and it focuses on the management of harm toward individual women.

While sexist attitudes are integral to violent extremism and political violence, for example, social norms alone do not explain how attacks against women in politics have been weaponized for political gain and cynically coordinated by illiberal actors who take advantage of algorithmic designs and business models that incentivize fake and outrageous content. More research into the intent and profitability of gendered disinformation campaigns for social media platforms is necessary, as is increased transparency around algorithmic preferences.

Geographic diversity. Overall, the existing research on gender and online harms, specifically on weaponized campaigns against women politicians on digital platforms, is centered almost exclusively on the United States and Europe. To truly understand the nature of gendered disinformation and its intersection with women leaders, more research is needed particularly where elections are contentious and where prodemocracy forces are challenging the balance of power. Disinformation researchers in Latin America, Southeast Asia, and Africa have been vocal about the disparity between the tools and research available for them to investigate disinformation and online harms and the resources available to those researching the same subjects in the United States and Europe. Now, evidence from the Wall Street Journal’s investigation, The Facebook Files, builds on what researchers and advocates in the Global South have been alleging: Facebook chronically underinvests in non-Western countries, leaving millions of users exposed to disinformation, hate speech, and violent content.

Analysis of offline impacts. Much of the existing research focuses on qualitative experiences, largely through interviews with impacted women leaders. Evidence of the impact of disinformation on electoral outcomes, political opinions, and trust in political institutions would further demonstrate the scope of the issue and the extent of the harm that gendered disinformation is doing to democratic systems. More work is also needed to understand disinformation as it interacts with misogyny and racism in countries and communities where verification and regulation are even more difficult and data voids exist, as is currently the case across the Spanish-speaking world.

Intersectionality. More broadly, more work is necessary to define and measure disinformation through a lens of gender and race, to understand how disinformation leverages false narratives rooted in traditional racism and misogyny. In doing this, greater examination of the degree and methods of coordination of disinformation narratives will be key to understanding their pervasive and impactful nature. All research needs to be rooted in an examination of how disinformation campaigns further traditional systems of oppression. While most of the existing research focuses on the experiences of women online, LGBTQ leaders are subject to increased online attacks of this sort as well. Although not yet widely researched, the online experiences of transgender individuals are a clear area for further examination.

Cultural competency of researchers. At present, there are too few women involved in the research and policy spaces addressing disinformation in politics. This lack of diversity is a major obstacle to understanding gendered disinformation. For example, of the twelve Social Media and Democracy Research Grants awarded by the Social Science Research Council in 2019, which allowed privileged access to Facebook data, only two grants went to women researchers, and no projects included plans to address the nexus between gender and disinformation. To cover a diversity of perspectives and experiences, it is important that more women, and specifically women of color, are among those focusing on disinformation and influence operations.

Inoculation with gender consideration. While there is a growing body of academic research evaluating methods for debunking and inoculating against misinformation and disinformation, virtually none of the existing research takes into consideration gender and racial bias. Communication strategists suggest that a debunking formula should discredit malign actors, call out their motives, correct specific facts, and, most importantly, offer a more proactive version of the truth.

However, debunking is only effective if it considers how individuals behave on social media, where algorithms dictate the spread of both information and narratives that trigger bias. When debunking disinformation on social media, the goal is not only to correct false information but also to prevent its spread. Yet disinformation that includes racial and gender bias—for example, comments relating to someone’s appearance or interpretations of their behavior—plays on inherently sexist attitudes rather than on facts that can be debunked. And repeating the attack line while debunking it often helps spread the false accusations or character attacks further.

Other considerations. In addition, essential areas for heightened research and examination include forms of online harassment and their connections across platforms, perceptions of women’s leadership and social media, the prevalence and nature of attacks and their intersection with technology, the impact of attacks and their consequences for society, authoritarianism and gendered disinformation, artificial intelligence and bias, the manosphere, and debunking formulas that account for gender bias.

The Impact of Evidence-Based Data

Within the arena of women, peace, and security, which increasingly dovetails with misinformation and disinformation studies, there are a number of qualitative, personal narratives being documented. Yet, the quantitative data on the issue has substantial room for development. While policymakers report being more open to changes in the status quo regarding gender stereotypes, governments continue to deprioritize the gender-led data necessary for countering gendered disinformation. One possible pathway toward addressing this deficit, originally suggested by the International Foundation for Electoral Systems, involves breaking the gendered dimensions of disinformation data into five components corresponding to potential intervention points: actor, message, mode of dissemination, interpreters, and risk. It is clear that further developing disinformation research is key to informing the policy dimensions of digital technology reform.
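For investigators who want to operationalize that five-component framing in their own data collection, the following is a minimal, hypothetical sketch in Python of how incident records might be structured. The class name, field names, and example values are illustrative assumptions, not a schema published by the International Foundation for Electoral Systems.

    # Hypothetical sketch: structuring gendered disinformation incidents along the
    # five intervention points named above (actor, message, mode of dissemination,
    # interpreters, risk). Field names and example values are assumptions for
    # illustration only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GenderedDisinfoIncident:
        actor: str                   # who originates or coordinates the attack
        message: str                 # the narrative or claim being pushed
        mode_of_dissemination: str   # channel and method of spread
        interpreters: List[str] = field(default_factory=list)  # audiences receiving or amplifying it
        risk: str = "unassessed"     # assessed harm to the target or to democratic processes

    # Example record for a hypothetical case.
    incident = GenderedDisinfoIncident(
        actor="anonymous meme-page network",
        message="sexualized claim that a candidate 'slept her way to the top'",
        mode_of_dissemination="reposts by coordinated, bot-like accounts",
        interpreters=["partisan media outlets", "local community groups"],
        risk="reduced trust in the candidate ahead of voting",
    )
    print(incident)

Structuring records this way keeps each potential intervention point visible, which can make gaps in the data, such as missing risk assessments, easier to spot and aggregate quantitatively.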

Conclusion

All over the world, women leaders, particularly women of color, have been experiencing relentless, overwhelming volumes of online abuse, threats, and vicious gendered disinformation campaigns framing them as untrustworthy, unqualified, too emotional, or overly sexual. Effective counterdisinformation tactics, if correctly implemented, have the potential to be gender-transformative by shifting the traditional stereotypes that currently shape these narratives. However, unless research on information pollution is properly gender-informed, it risks compounding the harm by allowing gendered disinformation to continue on social media platforms, unregulated and largely under the radar. To mitigate the harm to women leaders and democratic systems, it is essential that further research examine the relationships among disinformation, gender, and intersectionality.

These types of online abuse against women are growing, amplified by a lack of both oversight and transparency, by algorithmic preferences on social media platforms that reward fake, outrageous, and obscene stories, and by a business model that profits from engagement. Ultimately, the use of traditional gender stereotypes in gendered disinformation campaigns works to support and uphold entrenched power structures of heteronormativity and patriarchy. To stop these trends in their tracks, investigators must properly analyze the role of gender, and actively recruit diverse teams to do so, before more harm is caused.

About the Authors

Kristina Wilfore is an international expert in democracy and election integrity, co-founder of #ShePersisted, and adjunct professor with The George Washington University’s Elliott School of International Affairs.

Alcy Stiepock MacKay is an affiliate coordinator for Emerge America. She is a graduate of George Washington University with a B.A. in Political Science and Women’s, Gender and Sexuality Studies. She previously served as a program associate for #ShePersisted and as an intern for EMILY’s List.

Notes

1 The definition of “abuse” is modeled from the Center for Democracy and Technology and includes any direct or indirect threats of any kind, content that promotes violence against the individual based on any part of their identity (gender, race, ethnicity, religion, age), or content which attempts to demean and belittle the individual based on any part of their identity (including insults and slurs directed at the individual). Dhanaraj Thakur and DeVan L. Hankerson, Facts and Their Discontents: A Research Agenda for Online Disinformation, Race, and Gender (Washington, D.C.: Center for Democracy and Technology, 2021), 24.

2 GQR, unpublished report, 2021.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.