Over the past decade, experts from government, tech, academia, and civil society have tried to address the scourge of foreign information manipulation and propaganda, typically under the shorthand of “disinformation.” Particularly since Moscow’s 2014 invasion of Crimea and 2016 interference in U.S. elections, a wellspring of transatlantic institutions has marshalled resources to expose, pre-bunk, debunk, and deplatform these campaigns. This endeavor has had some success in inoculating the global public and in exposing propaganda peddlers, yet the tide doesn’t appear to be turning. Falsehoods continue to spread, those who generate and amplify them are developing new tricks, media interventions are often tepid and ineffectual, and regulatory fixes prove slow or unworkable.
Meanwhile, convenings on countering disinformation typically share a common feature: an often exasperating semantic debate about terminology. “Dis-,” “mis-,” or “mal-” information, “computational” or “participatory” propaganda, “influence operations,” “coordinated inauthentic behavior,” “perception hacking,” and other such monikers abound. But the question usually boils down to a straightforward one: What problem are we actually trying to solve?
The answer is relatively simple: falsehood is spreading, both inadvertently and deliberately, at an unprecedented pace and scale. It would be easy to attribute such foundational disputes to the relative infancy of the discipline or to the rapidly evolving media and technological environment. But doing so would overlook the fact that the definition of “information” itself was once (and in some arenas, remains) hotly contested—and for familiar reasons.
In the 1940s and 1950s, Claude Shannon, the godfather of information theory, set out to solve problems around the transmission and compression of information, first at MIT and later at Bell Labs. He succeeded by distilling information down to its fundamental units—“as if to prove that the concepts they have been talking and talking around have at last been captured by a number,” his biographers wrote. The definition he ultimately settled on became a science and, in time, the basis of modern computing: information as a basic, binary choice between ones and zeroes. We now call this measurement a “bit,” the rapid transmission and storage of which has come to mediate our daily lives.
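To make concrete what being “captured by a number” means, the now-standard formulation of Shannon’s measure (offered here as an illustrative sketch, not a quotation from Shannon or his biographers) gives the information produced by a source emitting symbols $x$ with probabilities $p(x)$ as

$$H(X) = -\sum_{x} p(x)\,\log_{2} p(x).$$

A single fair choice between two equally likely alternatives therefore yields $H = \log_{2} 2 = 1$ bit, the elemental number to which Shannon reduced the concept.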
Shannon’s definition of information did not go undisputed. Some of his contemporaries assailed it as overly narrow, too focused on engineering problems, and too dismissive of information’s corollaries, such as meaning, interpretation, and context. But alternative definitions still lacked Shannon’s competitive advantage: a model to quantify and mathematically demonstrate the theory. “Rather than deal with the fact that the exchange of information among humans involves a certain amount of subjectivity, proponents of Shannon information theory chose to ignore this essential element,” claims media ecologist Robert K. Logan.
Arguably the most successful concurrent theory was Norbert Wiener’s cybernetics, which explored control and feedback loops in communication, particularly between human and machine, in complex environments. As the Industrial Age slowly yielded to automation, this “theory of everything” offered both explanatory power and innovative insights into thermostats, the human nervous system, and countless systems in between. It gradually expanded into so many adjacent fields of study that by the 1980s it had largely collapsed as a unified concept. Shannon perhaps foresaw such an outcome, having voiced skepticism toward what he called a “scientific bandwagon” that came to accompany information theory: “The basic results of the subject are aimed in a very specific direction, [one] not necessarily relevant to such fields as psychology, economics, and other social sciences,” he wrote, warning that “[i]deas set in motion might become so diffuse as to lose all meaning.”
The divergent fates of Shannon’s and Wiener’s theories are emblematic of a paradox: the entirety of the Information Age is predicated on information’s narrowest technical definition, yet grappling with information’s foundational utility—communication—requires forays into numerous less calculable disciplines.
The paradox is familiar territory to those now seeking to confront disinformation. An arsenal of investigative tools can be applied to the supply side of false narratives to break them down into their constituent pieces, tabulate and measure their spread through likes and shares, and categorize clusters of users in vivid detail. This methodology has done wonders to explain and resolve key aspects of the problem (albeit within very limited sample sets). Yet prevailing notions about countering disinformation tend to lose steam when they encounter the demand side—the behavioral economics, political psychology, cognitive science, and other drivers of human susceptibility to untruth. Policymakers, activists, and innovators find their foundational paradigms, which are reductive by necessity, colliding with systems that are complex by nature.
Shannon’s experience in this regard is instructive. The problems experts focus on tend to assume the characteristics of the solutions or measurements at their disposal. By insisting on a narrow definition, Shannon “ran up against a human habit much older than him: our tendency to reimagine the universe in the image of our tools,” his biographers recount. “We made clocks, and found the world to be clockwork; steam engines, and found the world to be a machine processing heat; information networks—switching circuits and data transmission and half a million miles of submarine cable connecting the continents—and found the world in their image, too.” By extension, it is worth questioning whether the platform back-ends, machine-learning algorithms, social listening tools, and media literacy campaigns we have built have lent us a similarly skewed or insufficient paradigm.
This dilemma is why disinformation can accurately be described as a “wicked problem”: one deeply entangled with other systemic issues, where causal relationships are poorly understood, and where interventions to correct one harmful aspect create unwanted ripple effects elsewhere. There is an inherent irony in the fact that the core answers to such problems are usually apparent and relatively simple. This irony lends itself to what scholars call reductive tendency, a process whereby people simplify complex systems into easily digestible parts. This distillation can offer benefits, such as quicker decisionmaking, but is often inaccurate and overlooks the complexities of the problem.
This inclination is natural. Like Shannon, disinformation experts are likely to find the measurable and quantifiable aspects of how false narratives are transmitted more alluring and readily applicable to their work than the still-fuzzy understandings of how narratives are perceived and acted upon by human beings, individually and collectively. Consequently, a cohesive counter-disinformation agenda seems to arise only insofar as it can be unmoored from the sociological, economic, and political contexts that enable disinformation to thrive in the first place.
It is this increasing tendency that makes Neville Bolt, a professor of strategic communications at King’s College London, wary that disinformation as a policy issue might follow the same well-worn path that terrorism did post-9/11: becoming divorced from geographic, cultural, and situational contexts and refracted primarily through ideological and militarized prisms. Stakeholders should resist the pull toward such a self-reinforcing and institutionalized “disinformationism,” he argued in a recent conversation. David Omand, former head of the UK signals intelligence service and also at King’s, previously lodged a similar warning: even as our capacity to interrogate data from social media presents unprecedented research possibilities, “[w]e are currently much better at counting examples of online human behaviour than critically explaining why they are and what it might mean.”
The resulting policy formulations can be either ineffective or counterproductive. These are, as Russian security expert Mark Galeotti recently wrote, political responses that are driven by the need to be seen doing something. “Governments typically love ‘mythbusting’ organizations that seek out and counter ostensible falsehoods, not least because they lend themselves to the bureaucratic mindset: you can plot metrics and claim success through activity rather than impact,” he noted.
This bias toward action also risks inadvertently mirroring the very cynicism it is designed to mitigate: the notion that suasion is inherently wieldable; that illicit actors are merely outperforming truth-tellers; and that transparent, factual counterprogramming is the antidote. It would indeed be ironic to subscribe to the same flawed philosophy as Russia, the putative leader in global disinformation peddling. Scholar Eliot Borenstein, who penned the authoritative account of Moscow’s conspiratorial view of information, describes a world where “words are taken to be purely performative, their constative value (assessed according to truth or falsehood) becoming less a matter of cognition than of affect and allegiance.” So to some audiences, those most concertedly engaging with the propaganda can slowly become indistinguishable from those engaging in it.
This approach also assumes intentionality lies at the heart of every swing of public sentiment. This is a fallacy, says Bellingcat founder Eliot Higgins:
The biggest failure in countering disinformation is the idea that it’s the result of outside actors influencing communities, when it’s really about the communities that form organically, and how we respond to that. It’s a lot simpler to blame Russia and factcheck than address the fundamental social issues that lead to this, especially when a lot of it is caused by real betrayals of the public trust by the media and governments.
In other words, disinformation proliferates as a natural byproduct of underlying societal factors as much as from concerted bad actors.
Even beyond environmental conditions, researchers at Cambridge University recently concluded that humans routinely find ignorance and self-deception to have “greater subjective utility than an accurate understanding of the world.” They suggest that the focus on the production, acquisition, and distribution of information in markets and other social processes is misplaced, and that what is commonly referred to as the marketplace of ideas is in actuality a “market[place] for rationalizations, a social structure in which agents compete to produce justifications of widely desired beliefs in exchange for money and social rewards such as attention and status.” The authors note the crucial distinction between “beliefs that people merely happen to hold, and are thus open to revising, and beliefs that people want to hold, which generate a demand for rationalizations.” The idea that, if merely resourced sufficiently, democratically motivated coalitions can reorient the former and satisfy the latter may be a worthy aspiration. However, the hubris necessary to change hearts and minds at scale can easily assume the appearance of the very colonialism it aims to refute.
A myopic focus on the purveyors of disinformation and the mechanics of its spread might only offer what philosopher Karl Popper described as “an explanation of a social phenomenon [that] consists in the discovery of the men or groups who are interested in [its] occurrence . . . and who have planned and conspired to bring it about.” In the meantime, Western governments’ scramble to counter disinformation with carefully crafted messaging—however altruistic and democratic in spirit—might be more constructively redirected toward alleviating the underlying conditions that make perfidy so often preferable to reality.