Influence operations are a source of growing public concern, yet it remains difficult to evaluate their impact with any precision. Current understanding has largely been informed by data from social media companies and from the operatives themselves, and both sources can exaggerate efficacy. Those behind a campaign have an understandable interest in claiming effectiveness, while social media companies often share statistics such as reach or engagement that do not directly reflect the effect on the audience.
According to the confessions of Andrés Sepúlveda, a now-imprisoned Colombian influence operative, campaigns across Latin America used fake social media accounts to create the illusion of popular support or dissent, installed spyware to gain blackmail material, defaced websites, and spread disinformation. Sepúlveda claimed that his team was paid through a chain of intermediaries to work on presidential elections in Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, and Venezuela. Their goal was to help the client candidate win by sowing discord and discrediting opponents. His measurements of success, however, included the rate at which rumors he planted were picked up and spread. Given the many factors that determine how people decide to cast their votes, this metric does not say much about actual influence.
More recently, following the 2016 U.S. elections and the UK’s Brexit referendum, former staff of the disgraced firm Cambridge Analytica insisted that their data-harvesting efforts swayed votes but have yet to provide measurements to back up their claims.
To understand and counter influence operations, analysts must uncover who is behind a campaign and identify their motives and goals. Only once they establish the desired effects can they attempt to examine whether or how those effects were achieved. In most instances, much of this information is not available. Nor is it easy to isolate the activities of a single campaign and determine whether they were the sole, or even the greatest, factor influencing an audience's behavior. Evidence of activity is not the same as proof of effect.
It may also turn out that focusing on single campaigns is not the best way to analyze the effects of such activity. Instead, stepping back to take a more systemic view of the information environment and of patterns in user engagement over time could prove more revealing. Regardless of approach, assessing the effects of influence activity requires a baseline to measure against, a rationale to connect an action to a specific change, and a means for tracking that change within an audience.
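To make those three requirements concrete, the toy sketch below (in Python) compares hypothetical engagement counts recorded after a suspected campaign against a pre-campaign baseline. Everything in it is illustrative: the numbers are invented, and a shift above the baseline is only the starting point of an assessment, not evidence that the campaign caused the change.

```python
# Minimal sketch: compare post-campaign engagement against a pre-campaign baseline.
# All figures below are hypothetical; a real analysis would need far richer data,
# controls for confounders, and a causal design rather than a raw comparison.
from statistics import mean, stdev

# Hypothetical daily counts of engagement with a targeted topic (e.g., shares of
# a planted rumor), split at the date the suspected campaign is believed to have begun.
pre_campaign = [120, 135, 118, 142, 130, 125, 138, 129, 133, 127]   # baseline period
post_campaign = [150, 165, 172, 160, 158, 175, 168, 180, 162, 170]  # observation period

baseline_mean = mean(pre_campaign)
baseline_sd = stdev(pre_campaign)

# A crude "how unusual is this?" score for each post-campaign day: how many
# baseline standard deviations it sits above the baseline mean.
z_scores = [(x - baseline_mean) / baseline_sd for x in post_campaign]

print(f"Baseline mean engagement: {baseline_mean:.1f} (sd {baseline_sd:.1f})")
print(f"Post-campaign mean engagement: {mean(post_campaign):.1f}")
print("Days more than 2 sd above baseline:",
      sum(1 for z in z_scores if z > 2), "of", len(post_campaign))
```

Even in this simplified form, the sketch highlights the gap described above: detecting a deviation from a baseline is relatively easy; attributing it to a specific operation rather than to news events or organic trends is the hard part.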
To make progress on these questions and begin translating the answers into policy, many stakeholders will have to cooperate. The burgeoning research community focused on influence operations can gather data on how and by whom such operations are conducted and help to standardize terminology and metrics of efficacy. This will only be possible if social media platforms provide data (with appropriate privacy and proprietary safeguards). Collaboration across academic disciplines—from computer science to psychology and linguistics—will add essential insights. For their part, governments and civil society organizations should provide parameters for acceptable measures to counter influence operations, which platforms would be asked or ordered to implement.
Understanding the complexity and magnitude of influence operations may seem like a daunting goal. But the stakes are too high not to try. The alternative will be further waves of alarm that prompt uninformed responses and leave every corner of society at risk.