When the World Health Organization declared an “infodemic” in the midst of the COVID-19 pandemic, it signaled that the spread of health disinformation had become a global concern. Countries have taken different approaches to addressing disinformation in this and other contexts. Some, like Singapore, have enacted formal legislation; others, such as Argentina, have prosecuted individuals for disseminating fake news as a “crime against public order.” But mostly, there has been increased pressure on internet companies, particularly social media platforms, to monitor, identify, and filter “untruthful” content circulating on their networks. The willingness of these companies to accommodate the new demands constitutes a paradigm shift.

This paradigm shift has been broadly welcomed around the world and has become an important focus for civil society and academia. Many of the solutions proposed by policymakers and platforms, however, are quick fixes: removing or blocking harmful content from heads of state, labeling statements by public officials, or prohibiting content that contradicts official sources of information. Whatever the effectiveness (or lack thereof) of these measures, they entail a fundamental break with existing standards and mark a shift in how states assess the value of free speech and the free flow of information and ideas for democratic self-governance.

Defining what constitutes disinformation and how to prevent its spread is complicated and requires special consideration. Disinformation is ill-defined and differs from other targets of content moderation, such as hate speech, threats, or fraudulent activity. First, disinformation seemingly introduces a new social harm. Second, it encompasses different types of falsehoods, and therefore differently defined social harms, some legal and others illegal, such as libel, slander, fraud, and propaganda. Third, moderating disinformation assumes one can draw a clear distinction between truthful and untruthful information, and that there is a unique source against which truthfulness can be tested.

Policymakers increasingly regard disinformation as an existential challenge to democratic governance. The European Union has argued that disinformation is a threat to democracy and European values. Across the Atlantic, statements from U.S. President Joe Biden’s administration on disinformation and its impact on the COVID-19 vaccine campaign reinforce this idea. This rhetoric challenges international human rights standards, which broadly protect free speech, including the dissemination of false information in public discourse. Under the American Convention on Human Rights, for example, states are specifically obligated to protect against private restrictions of freedom of expression that may result in indirect censorship. The new consensus against disinformation, which conditions free-speech protections on truth, not only challenges free-speech norms; it also empowers private companies to arbitrate truth while denying that power to others in society. This shift should not be taken lightly.

The treatment of falsehoods, and whether they constitute a social harm that requires state action, differs significantly from one country to the next. Abundant jurisprudence defines libel and slander, and most democracies have also identified specific circumstances in which society or the state may punish individuals for lying (witnesses in court proceedings or public officials, for example). These concepts arrived at their current form only after long debate. Accordingly, states have already determined which falsehoods constitute threats to their democracies and which must be tolerated as a condition of self-government. Reopening these categories in bulk under the term disinformation, whether through regulation or moderation, would discard the democratic deliberations that produced current definitions of legal speech.

Finally, unlike other areas of content moderation, efforts to counter disinformation assume that there is a single authoritative source against which all information can be assessed for truth. This assumption is particularly problematic when it comes to political disinformation, which is likely why most jurisdictions distrust the state to regulate false or misleading political discourse. But subjecting speech to a single-source “truth” test can also be problematic when it comes to regulating more objective topics, such as science. The COVID-19 pandemic provides a useful illustration of the problem. As scientists learn more about the virus, some of their core assumptions and the guidance built on them have changed. For instance, scientific consensus now holds that the virus is transmitted primarily through the air, contrary to scientific views at the beginning of the outbreak. Science thrives when peers can build on and correct each other’s mistakes. Confronted with differing expert opinions from different countries, schools of thought, and scientific institutions, whom should internet companies hold as the ultimate authority to validate the truth? In other words, is it possible for them to craft legitimate rules regulating disinformation without veering into censorship?

The disinformation dilemma speaks to the cultural, political, and legal strengths and weaknesses of each democracy. At the heart of the issue is a crisis of legitimacy among traditional knowledge producers. As legal scholar Jack Balkin writes, “A public sphere doesn’t work properly without trusted and trustworthy institutions guided by professional and public-regarding norms.” He argues that social media companies need to earn and develop that legitimacy, while acknowledging that the same standard applies to the institutions that have traditionally maintained the public sphere and that now struggle with questions of disinformation.