How to Tackle Europe’s Digital Democracy Challenges

The EU is poised to release new policies to bolster democracy in a digital age. Their success will depend on how well Europe tackles domestic challenges to democracy.

Published on October 15, 2020

European Commission President Ursula von der Leyen has made it clear that tackling online disinformation and reining in internet platforms are at the top of her digital agenda. Before the end of 2020, the commission plans to release two major policies—the European Democracy Action Plan (EDAP) and the Digital Services Act (DSA)—laying out clear principles for how it will respond to rampant disinformation, election interference, and broader concerns about a lack of accountability and transparency by online platforms. These policies are being presented as part of a broader push to promote democratic resilience and mitigate extreme speech in Europe.

How should European officials and citizens alike think about these oncoming changes? And what will the impact of these proposed policy fixes be?

The EU’s Coming Proposals to Cure Its Digital Headaches

Over the past few years, EU member states have been witnessing and weathering manifold digital challenges. Foreign-sponsored disinformation is rife throughout the bloc, a problem that the coronavirus pandemic has exacerbated. In a June 2020 report, the commission acknowledged, “the coronavirus pandemic has been accompanied by a massive wave of false or misleading information, including attempts by foreign actors to influence EU citizens and debates.” Alongside disinformation, European right-wing fringe groups promoting hate speech and violence are finding new ways to mobilize and increase their relevance via online platforms. And new authoritarian-minded populist leaders, particularly in Eastern Europe, are deftly wielding digital strategies to their advantage, incorporating menacing surveillance and censorship techniques.

What, then, are the aims of European reform proposals like the EDAP and the DSA? The EDAP will focus on three broad areas: improving the integrity of elections, including by regulating online political advertising; enhancing media freedom and pluralism; and charting out strategies to counter disinformation in the EU.

In a similar vein, the DSA will aim to establish “clear, updated, and binding rules” to counter online hate speech and regulate “opaque political advertising.” The DSA will thereby provide a common framework for how global tech platforms with a digital gatekeeping role may operate in Europe. Its most difficult task will be striking a balance between protecting freedom of expression and safeguarding “European and democratic values” from harmful speech.

The Effectiveness of the EU’s New Approach

Two major questions loom. First, will these policies prove effective? Will they counteract disinformation, electoral manipulation, and the impunity that digital platforms currently enjoy? Will they help turn the tide on illiberal forces that are exploiting online platforms to strain European democracy? Second, are they focusing on the right issues? Do online disinformation and the lack of transparency from tech platforms represent the most critical digital threats to European democracy?

The outcome of Europe’s quest to bring order to its digital landscape will be shaped by four major factors: (1) how well European regulators avoid unintended consequences, (2) how ably Europeans root out their own domestic sources of disinformation and hate speech, (3) how readily Europeans recognize that their digital ills are enabled and amplified by corresponding real-world problems, and (4) how much Europeans own up to and hold themselves accountable for advancing the use and spread of digital surveillance technologies.

Beware the Law of Unintended Consequences

First, the commission should be careful not to create new problems when attempting to solve existing ones. The DSA in particular will likely impose new legal responsibilities on platforms regarding their content, such as potentially requiring companies to filter or take down certain categories of harmful speech. As of now, under the EU’s e-Commerce Directive, platforms are shielded from liability for harmful content their users may post, such as hate speech or incitement to violence (in the United States, such protections are codified in Section 230 of the Communications Decency Act). Increasingly, European governments are contemplating stripping away intermediary liability protections to place a greater onus on platforms to police their content. But the outcome may be worse: such regulations could tip the scales in the opposite direction and incentivize overcompliance by platforms wary of violating the new rules. Thus, in a bid to stop disinformation, the commission may instead facilitate heightened suppression of online discourse.

The experience of Germany’s Network Enforcement Act (also known as the NetzDG law) stands as a warning. The law came into force in 2017 and requires that platforms take down “manifestly illegal” content within twenty-four hours. Failure to comply can result in millions of euros in fines. As David Kaye, former UN special rapporteur on freedom of expression, writes in his book Speech Police, the law creates two overlapping problems. First, it delegates full discretion to companies to decide what qualifies as hate speech, making corporations “judge and jury not only of their own standards but also of German law.” Second, companies are incentivized to take down content and suspend accounts, particularly because the law does not include any “countervailing” reasons to keep legitimate content up. The law has done real damage to online speech in Germany, and its adoption as a potential model for Europe is troubling. While platforms have done a poor job mitigating extreme or violent speech, that doesn’t mean the right solution is to remove intermediary liability protections. Better alternatives exist.

One idea that is gaining traction is the concept of a “quid pro quo benefit,” described at length in a 2019 report from the University of Chicago’s Stigler Center. The authors recommend that governments treat intermediary liability protection as a free speech subsidy that should be conditioned on whether platforms adhere to specified public interest requirements. For example, governments could mandate that social media companies refrain from using their algorithms to steer viewers toward extreme or violent content, or regulators could require that companies provide much greater transparency about how content-shaping algorithmic systems work and how they subsequently influence users’ online experiences. Under this proposal, companies that adhere to such guidelines would retain liability protection, while those in breach would face legal sanction. Such a bargain could shield platforms from liability but demand in return that they become more responsive to public policy concerns.

Europe Is Rife With Domestic Disinformation

Second, the commission should broaden its digital focus beyond preventing foreign disinformation or election interference (particularly from Russia). While Russia’s activities are undoubtedly disruptive, they are far from the only digital challenge facing EU member states. Hate speech emanating from domestic political parties and heightened online polarization leading to violence, for example, also represent pressing threats requiring meaningful action.

A look at 2019 data collected by the Digital Society Project reinforces this point. The survey data in question examines the extent of foreign government disinformation, hate speech by homegrown political parties, and offline violence sparked by social media. The data derives from expert-coded surveys completed by multiple scholars and researchers with deep knowledge of particular countries and political institutions. While the survey measures these experts’ perceptions of how prevalent these societal problems are (it does not measure quantitative tallies of specific incidents), the survey’s methods and findings are widely followed.

The data shows that domestic online threats are more pervasive than foreign interference efforts. The relationships between perceived levels of foreign government–sponsored disinformation, how much online hate speech is disseminated by domestic political parties, and how social media is used to organize offline violence are of particular interest. To analyze the relationships between these variables in the European context, I looked at the project’s data across the EU’s twenty-seven member states plus the UK.
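
For readers who want to replicate this kind of comparison, the snippet below sketches one possible approach in Python, assuming the relevant 2019 point estimates have been exported to a CSV file. The file name (dsp_2019.csv) and column names (country, foreign_disinfo, party_hate_speech, offline_violence) are hypothetical placeholders, not the project’s actual variable names.

import pandas as pd

# EU member states (as of 2020) plus the UK
EU27_PLUS_UK = {
    "Austria", "Belgium", "Bulgaria", "Croatia", "Cyprus", "Czechia",
    "Denmark", "Estonia", "Finland", "France", "Germany", "Greece",
    "Hungary", "Ireland", "Italy", "Latvia", "Lithuania", "Luxembourg",
    "Malta", "Netherlands", "Poland", "Portugal", "Romania", "Slovakia",
    "Slovenia", "Spain", "Sweden", "United Kingdom",
}

# Hypothetical export of the project's 2019 point estimates
df = pd.read_csv("dsp_2019.csv")
eu = df[df["country"].isin(EU27_PLUS_UK)]

# Scores sit on a roughly -5 to 5 scale where 0 approximates the global
# mean (see the notes at the end of this article), so a positive score
# marks a country as above the mean for that particular problem.
for col in ["foreign_disinfo", "party_hate_speech", "offline_violence"]:
    above = (eu[col] > 0).sum()
    print(f"{col}: {above} of {len(eu)} countries above the global mean")

Counting positive scores in this way yields above-the-mean tallies like those discussed below.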

Figure 1 compares the levels of disinformation from foreign governments to the levels of hate speech by domestic political parties in 2019, while figure 2 compares the scale of disinformation from foreign governments to that of offline violence initiated on social media.1

As both graphs illustrate, experts perceived hate speech peddled by homegrown political parties and offline violence fueled by social media to be much more prevalent in most EU countries than disinformation from foreign governments. Figure 1 shows that in seventeen EU countries the perceived levels of hate speech emanating from domestic political parties measured above the global mean, whereas only nine countries scored above the mean on the perceived prevalence of foreign state-sponsored disinformation. In other words, the bulk of European countries were seen to be facing unusually high levels of party-sponsored hate speech, while fewer seemed to be facing outsized levels of disinformation from foreign governments.

Several countries particularly stand out. Survey results for Italy, the UK, and Hungary display the highest perceived levels of hate speech from domestic political parties (and each of them, by contrast, possesses much lower perceived levels of foreign disinformation). Only the survey results for Denmark, Spain, Greece, and Estonia reported perceptions of higher levels of foreign disinformation than hate speech from domestic political parties.

Likewise, the data in figure 2 indicate that twelve EU countries score above the mean on the perceived prevalence of social media incitement to offline violence. Once again, more countries were seen as experiencing heightened levels of such incitement to violence (twelve) than were seen as facing heightened levels of disinformation from foreign governments (nine).

What both sets of data reinforce is that Europe’s digital ecosystem is plagued by a variety of ills reflecting deeper societal and political divisions. Increasingly, tech platforms like Facebook are acknowledging this trend. Nathaniel Gleicher, Facebook’s head of security policy, revealed in a recent call that half of the ten campaigns the firm had removed globally in September and October emanated from domestic sources and catered to domestic audiences. He said, “Domestic actors understand the political actions in their country the best and have a strong motivation to want to change that discussion.” While foreign disinformation and influence operations may play a role in normalizing extreme speech and violence, the data make clear that the largest online threats generally seem to come from within a country’s own borders.

When All You Have Is a Digital Hammer...

Third, problems that proliferate digitally aren’t always best solved through digital means. A particularly relevant example relates to Russian electoral interference in the UK. In July, the UK Parliament’s Intelligence and Security Committee released a long-awaited report about Russia’s promotion of disinformation and attempted political manipulation in the UK. One of its key findings was that Russia’s interference succeeded in part because of bureaucratic disorganization in the UK.

The report describes how government agencies treated the Russian threat like a “hot potato” to be pawned off on another agency. Intelligence services have ample capacity but a deep reluctance to involve themselves in domestic affairs, while the UK’s Electoral Commission and the Department for Digital, Culture, Media and Sport possess organizational responsibilities but lack appropriate capabilities to counter Russian actions. This bureaucratic inertia has allowed Russia’s agents to operate with significant impunity and interfere with the UK’s democratic processes.

The parliamentary report also notes Russia’s “expansive patronage networks” in what it calls “Londongrad.” This moniker refers to the vast pools of Russian capital linked to Russian oligarchs and President Vladimir Putin’s state apparatus that have flooded the UK. These opaque but lucrative windfalls, and the strings potentially attached to them, not only enabled Russia to carry out systematic disinformation operations in the UK but also suggest that future preventative measures may only limit the damage rather than block it altogether.

The broader lesson for Europe, particularly for countries on its eastern flank already prone to Russian influence and corruption, is that foreign interference and online disinformation are not just social media problems. Their expansion stems from a variety of sources, including old-fashioned bureaucratic disorganization and the potency of dirty money in business and politics.

Europe’s Embrace of Digital Surveillance Is Disturbing

Fourth, if the commission is genuinely committed to advancing a democratic digital agenda, then it needs to take a much more serious look at reining in digital surveillance at home and curbing the export of such technologies abroad. Advanced surveillance measures in Europe are proliferating rapidly without adequate safeguards or consistent rules of use. I have previously written about the expansion of advanced surveillance tools in countries like France, Italy, Spain, and Denmark. As Morgan Meaker has written for the online news outlet Coda Story, cities like Marseille have implemented extensive surveillance programs involving the deployment of thousands of cameras “using ‘filters’ that can detect people, vehicles, certain clothing colors or objects moving in specific directions or at certain speeds.”

These trends show no sign of abating, and regulatory frameworks have fallen increasingly behind. While experts anticipate that the EU will come out with a comprehensive legislative proposal governing the use of artificial intelligence (AI) in 2021, it is not clear how much this proposal will specifically impact digital surveillance. It is also worth noting that many of the surveillance measures in question do not fall squarely in the AI category, further limiting the applicability of the forthcoming policy.

Not only has the EU lagged behind on regulating the use of surveillance within its member states, but it has done an even poorer job overseeing the export of advanced surveillance technologies overseas, particularly to authoritarian and illiberal governments. Amnesty International released a report in September outlining how several European companies—including France’s Morpho (now Idemia), Sweden’s Axis Communications, and the Netherlands’ Noldus Information Technology—have actively exported digital surveillance technology to China. The authors write, “these technologies included facial and emotion recognition software, and are now used by Chinese public security bureaus, criminal law enforcement agencies, and/or government-related research institutes, including in the region of Xinjiang.” The legitimacy of the EDAP’s intended purpose of stemming the tide of illiberalism at home and abroad will be jeopardized if the EU fails to confront the export of powerful tools of repression to autocratic regimes.

Conclusion

The commission should be commended for recognizing that Europe’s democratic resilience is increasingly linked to the health of its digital communities. Because these technologies are evolving so quickly, regulators have to be careful not to devise new rules in response to yesterday’s problems. A framework will only be effective if it is comprehensive enough to address a wide array of digital challenges and flexible enough to provide high-level direction without constraining individual approaches. The picture in Europe is sobering, but the commission has a unique opportunity to strike a better balance between unleashing technological innovation and building a sound democratic future. Ultimately, the success of policies like the EDAP and the DSA hinges not only on whether they can cut down on polarizing online rhetoric without doing grave damage to free speech but also on whether the European Commission will consider the broader range of digital tech issues challenging the cornerstones of Europe’s democracies.

Notes

1 The survey queried experts on the following questions: 1) How routinely do foreign governments and their agents use social media to disseminate misleading viewpoints or false information to influence domestic politics in this country? 2) How often do people use social media to organize offline violence? and 3) How often do major political parties use hate speech as part of their rhetoric? Higher scores on the survey indicate that the respondents perceived greater prevalence of a specific variable (foreign government disinformation, hate speech from homegrown political parties, or social media–fueled offline violence). The project’s measurement model aggregates ratings provided by country experts—taking disagreements and measurement errors into account. The data captures perceptions of country experts but does not measure actual numbers of incidents occurring in a given year.

The point estimates are the median values of the measurement model’s distributions for each country and year pairing. The scale of the measurement model variable is typically between -5 and 5, with 0 approximately representing the mean for all country and year pairings in the sample. A country displaying a negative score is therefore performing below the mean for that particular variable (that is, displaying lower levels of foreign disinformation, hate speech by domestic political parties, or social media–fueled violence). By contrast, countries displaying positive scores are performing above the mean for the given variable (meaning that they show a higher prevalence of undesirable outcomes linked to foreign disinformation, hate speech by domestic political parties, or social media–fueled violence).
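
As a minimal illustration of how these point estimates read in practice, the following Python snippet classifies a score relative to the global mean; the example values are invented, not actual country scores.

def prevalence(score: float) -> str:
    """Classify a point estimate relative to the global mean (~0)."""
    if score > 0:
        return "above the global mean (higher prevalence of the problem)"
    if score < 0:
        return "below the global mean (lower prevalence of the problem)"
    return "approximately at the global mean"

print(prevalence(1.8))   # invented example: elevated party-sponsored hate speech
print(prevalence(-0.7))  # invented example: below-average foreign disinformation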