This paper is the first of a three-part series looking into the future of the European Union’s (EU) disinformation policy.
This series was commissioned by the European External Action Service’s (EEAS) Strategic Communications Division and prepared independently by James Pamment of the Partnership for Countering Influence Operations (PCIO) at the Carnegie Endowment for International Peace. Over one hundred experts, practitioners, and scholars participated in five days of workshops, made written submissions, and/or completed surveys that fed into these papers. The resulting publications are the sole responsibility of the author and do not reflect the position of the EEAS or any individual workshop participant.
The first paper, “Taking Back the Initiative,” focuses on future threats and the extent to which current EU disinformation policy instruments can meet the challenge. With the coronavirus pandemic erupting during the drafting of these papers, the overview of current instruments has been supplemented with discussion of lessons learned from the ongoing experience of this crisis. This first paper also outlines the overall policy recommendations detailed in the three papers.
The second paper, “Crafting an EU Disinformation Framework,” establishes terminology and a framework around which EU institutions can organize their disinformation policy. The paper begins with a discussion of terminology and then outlines the ABCDE (actors, behavior, content, degree, effect) framework for analyzing influence operations. This supports further analysis of areas of institutional responsibility, including ownership of different aspects of the disinformation policy area.
The third paper, “Developing Policy Interventions for the 2020s,” outlines three areas of intervention necessary for developing an EU disinformation policy capable of meeting future threats. The first is work that deters actors from producing and distributing disinformation. The second consists of nonregulatory interventions, which focus primarily on policies that can be enacted informally with stakeholders. The third covers regulatory interventions, including legislative responses based upon an auditing regime.
Lutz Güllner
Understanding the challenge that disinformation constitutes for our societies is a prerequisite for the development of proper policies and strategies to effectively address it.
This has become particularly clear during the COVID-19 crisis: we have seen that disinformation can do real damage; in some cases, it can even kill. Disinformation knows no digital or physical borders, as has proven true time and again; it poses a global threat to open and democratic societies.
The COVID-19 infodemic has highlighted the complex environment in which false information spreads: one of the lessons learned from this crisis is the need to clearly differentiate between the various forms of false or misleading content and to calibrate appropriate responses. To this end, it is important to distinguish between illegal and legal content, as well as whether there is a clear intention to mislead. The latter in particular is the key element distinguishing misinformation, that is, the organic and unintentional spread of falsehoods, from disinformation—intentional and coordinated manipulations designed to exploit tensions in our societies, harm us, and interfere in our public debates.
Another category includes influence operations by third countries. Disinformation is often part of such influence operations, in combination with other tactics of manipulative interference. For example, pro-Kremlin disinformation actors have spread conspiracy theories and orchestrated disinformation campaigns, sowing confusion and targeting the EU, its member states, and its neighbors by alleging a lack of solidarity and an internal crisis within the EU. China has used the crisis to promote its image, present its state system as better equipped to tackle the pandemic than democracies, and deflect blame over the handling of the virus.
Everyone has a role to play in tackling disinformation. Governments cannot and should not be the only ones to tackle disinformation and influence operations. Researchers, independent journalists, and fact-checkers in particular have been at the forefront of monitoring, analyzing, and reporting on disinformation, and they have provided valuable insights into how it spreads. A calibrated response is needed from all parts of society, depending on the degree of harm, the intent, the form of dissemination, the actors involved, and their origins.
This series of papers addresses these issues. It helps us to better understand the nature of the threat and the scope of the challenge at hand. The insights of these publications will feed into the EU’s ongoing work on disinformation.
Lutz Güllner is the head of strategic communications at the European External Action Service.
Amid the coronavirus pandemic, Europe and the West are grappling with a host of thorny dilemmas posed by disinformation and foreign influence operations. While these problems predate the viral outbreak, the public health crisis has certainly exacerbated them. Brussels has taken some steps to meet this set of challenges, some of which are already paying dividends.
But there is more that Europe can do to make its response more effective. Specifically, the EU should formulate shared terminology for combating disinformation, assertively deter adversaries who are spreading disinformation and conducting influence operations, craft sensible nonregulatory interventions to protect online users, and establish an independent, transparent auditing regime for certain online platform functions.
Europe and the West are targets of disinformation, influence operations, and foreign interference. And the responses of most Western countries have been piecemeal and slow, hampered by legal restraints and bureaucracy and lacking in real political understanding of the problem and evidence of its impact.
Adversaries include states, organizations, and individuals. They have developed well-established techniques and have laid the groundwork in terms of building networks, disseminating narratives, and tapping into local issues to gain unwitting grassroots supporters for current and future campaigns. This puts the EU and its member states at a disadvantage when it comes to countering these malicious activities.
The following factors give adversary actors a significant advantage. Some pertain to the nature of the disinformation activities they pursue. These adversary actors often employ low-cost, low-risk, and high-reward tactics. They are first movers and use marginal technological advantages, meaning that their activities can be fully under way before they are noticed. Moreover, they are less restricted by legal, ethical, and bureaucratic constraints, and the broad range of illegitimate influence tools and techniques available to them makes it difficult to identify and counteract the full extent of their campaigns.
Adversaries also benefit from the limitations and drawbacks of the EU’s approach to date. Targeted countries can often muster only limited political will to acknowledge disinformation and particularly to attribute, punish, and deter adversary actors. In addition, it is easier to craft mitigation plans that focus on a single event, like an election campaign, rather than ones that focus on ongoing public discourse.
Moreover, relevant government actors often have limited technical capabilities to monitor, identify, analyze, attribute, and share information about disinformation. Even digital platforms themselves are limited in their capabilities to identify problematic cross-platform behavior. To make matters worse, traditional media often leverage aspects of these influence campaigns in their coverage, unwittingly amplifying central disinformation narratives. Finally, the evidence of harm caused by disinformation and influence operations is patchy, and sufficient evidence for the effectiveness of countermeasures is lacking.
In Europe, experts view Russia as the dominant hostile actor currently spreading disinformation. However, the political consensus to attribute these activities to Russia, which was strong in the aftermath of the annexation of Crimea and the 2016 U.S. presidential election, has waned.1 Experts regard Russia as having achieved widespread penetration of its narratives in multiple countries across Europe and elsewhere in the world.
There is a sense that Russia's deployment of a wide range of narratives implies it is not sure what will or will not work, and that the low cost of such operations affords it a strategy of trial and error. Russia's scattershot approach to disinformation entails running multiple campaigns simultaneously, with some achieving notoriety while the vast majority fail to attract attention from the target audiences or gain traction. Experts express concerns about the extent to which learning from previous successes and failures can increase the efficacy, and therefore the impact, of future Russian disinformation activities.
Experts also raise concerns about influence operations run by China. Many view China as Russia’s superior in terms of its potential capabilities and intent to spread disinformation and develop influence campaigns, as well as to coordinate them with broader forms of soft power. Not all its disinformation activities appear particularly sophisticated at present, but experts express much interest in how it might develop and test techniques at home before expanding their reach abroad. However, Western countries have, to date, attributed few influence operations to China. This is due to political concerns rather than a lack of attributable activities.
Traditionally known for its cyber capabilities, Iran also poses an increasing disinformation challenge. Smaller authoritarian and semi-authoritarian states such as Iran may increasingly see disinformation and interference, in conjunction with other hybrid activities, as low-cost, high-reward opportunities to achieve limited political goals in EU member states.2 They will likely aim such activities predominantly at their own populations, including through mainstream political parties and their supporters. Disinformation seeded domestically can then be drawn upon as a source for foreign interference. Experts also express increasing concerns that EU member states themselves are becoming a source of misinformation and disinformation.
The three papers in this series establish a five-part framework for analyzing influence operations known as the ABCDE framework (actors, behavior, content, degree, effect).
The second paper in this series discusses the utility and application of the ABCDE framework in greater depth. The framework is useful because it provides a standardized means of collecting, analyzing, and presenting information about disinformation.
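To illustrate how the framework can standardize case records, here is a minimal sketch in Python. The class, field names, and example values are hypothetical choices made for this illustration, not an official EU schema.

```python
from dataclasses import dataclass

@dataclass
class ABCDECase:
    """One influence-operation case described along the five ABCDE
    dimensions (actors, behavior, content, degree, effect)."""
    actors: list[str]    # who is behind the operation (state, proxy, unknown)
    behavior: list[str]  # techniques used (coordinated accounts, forged documents)
    content: str         # the narrative or claim being pushed
    degree: str          # the distribution and reach achieved
    effect: str          # the observed or plausible harm

# Hypothetical example based on the pro-Kremlin narratives described above.
case = ABCDECase(
    actors=["pro-Kremlin outlet"],
    behavior=["coordinated cross-platform amplification"],
    content="The EU has shown no solidarity with its member states.",
    degree="picked up by several national-language fringe outlets",
    effect="reduced trust in EU institutions",
)
print(case)
```

Recording every case against the same five dimensions is what would make information collected by different institutions comparable.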
The EU has taken some actions to counter disinformation and is grappling with how to counter its adversaries in the information space. But its current policy on disinformation is characterized by a lack of terminological clarity, unclear and untested legal foundations, a weak evidence base, an unreliable political mandate, and a variety of instruments that have developed in an organic rather than a systematic manner. The limited successes the EU has achieved so far—in terms of the creation of instruments such as the Code of Practice on Disinformation, the Action Plan Against Disinformation, the East StratCom Task Force, and the Rapid Alert System—have been hard earned.
One should not underestimate the challenges posed by the different approaches of member states and EU institutions to the disinformation problem; for example, many member states do not recognize the problem, do not publicly attribute particular malign activities to the offending adversaries, or are under political pressure to limit support to EU-level activities to counter disinformation. Within EU institutions, significant ownership and coordination challenges abound. In this context, the creation of these instruments should be considered a bureaucratic success. However, adversary states can turn this inconsistent approach to their advantage, potentially weakening the processes that are in place and undermining the EU’s countervailing system as a whole.
Disinformation is a term that defines a policy area but lacks a firm legal basis, though some principles enshrined in human rights–related legislation are relevant. The European Convention on Human Rights (ECHR) and the International Covenant on Civil and Political Rights (ICCPR) enshrine the rights to freedom of expression, to privacy, and to political participation.3 The EU and its member states have the positive obligation to enable an environment where freedom of expression can be enjoyed. Many argue that disinformation and public debate driven by artificial intelligence undermine this fundamental right.
Article 10 of the ECHR and Article 19 of the ICCPR make provisions for freedom of expression that protect the right to think and express oneself without interference as well as offer a degree of protection to those who spread disinformation on the same grounds.4 Article 8 of the ECHR and Article 17 of the ICCPR protect the right to privacy, which may be relevant in areas such as the use of personal data to target groups and individuals with disinformation.5 Finally, Article 25 of the ICCPR and Article 21 of the Universal Declaration of Human Rights codify the right to participate in public affairs and elections, which aspects of disinformation may diminish.6 In all these cases, these fundamental rights are relatively unexplored in relation to disinformation.
Some scholarly work has explored whether state-backed information operations may breach international law related to the principle of nonintervention. Others have noted that influence operations that threaten the use of force likely breach Article 2(4) of the United Nations Charter, which explicitly prohibits the threat or use of force.7 The more general principle of nonintervention, which “provides a state with the right to be sovereign within its own territory, free from external interference,” requires that states do not coerce one another in their relations.8 This principle applies most obviously to political expression, particularly in the context of elections. The targeting of critical infrastructure by influence operations, including election systems if they are protected as such by individual member states, should be considered intervention.9 However, political campaigns—and public debate—are unlikely to be covered.
Some have argued that the principle of nonintervention also protects the right to self-determination and freedom of thought and, therefore, the right to “the conditions that enable the people to form authority and will and to make free choices.”10 An influence operation can thus be coercive if it “substitutes the authentic process of self-determination with an artificially constructed process in order to generate particular attitudes and results aligned to the intervenor’s will.”11 The right to freedom of thought offers a potential human rights focus on influence operations that could be grounded in a better understanding of manipulation techniques rather than content alone.
Additional applications of fundamental freedoms among EU member states specifically include content-related restrictions on expression under private and criminal law, such as defamation, insult, and incitement to hatred or violence. Member states also regulate their respective election frameworks, including rules on political campaigning, campaign finance, political parties, and political advertising. Their relevant bodies of national security legislation are also of relevance in relation to foreign interference.
There are two specific divisions within the European External Action Service (EEAS) tasked with assuming various strategic communications responsibilities relevant to disinformation.12 The Communications Policy and Public Diplomacy side leads outreach to EU and external audiences on EU foreign affairs, security and defense, and external action, developing political communications on behalf of the high representative for foreign affairs and security policy. It provides guidance, training, and strategic support to EU delegations and missions/operations. The division also manages communications campaigns, internal communication, social media accounts, and digital platforms as well as public and cultural diplomacy. It does not have a formal role related to disinformation but rather fulfills advocacy and engagement functions for political and cultural EU objectives, including support to all digital media campaigns.
The Task Forces and Information Analysis side focuses on the Western Balkans and Europe’s eastern and southern neighborhoods. Its main role is to develop and implement proactive communications activities and campaigns, including political advocacy and initiatives in public and cultural diplomacy for these regions. It provides analytical support for evidence-based communications and policies and has a specific mandate to address disinformation and foreign manipulative interference in the information space through the task forces (see below).13 It is responsible for implementation of the EU’s Action Plan Against Disinformation and the Rapid Alert System (see below) and for the development of future policy in this field. It also has the mandate to support independent media and civil society in the two neighborhoods and the Western Balkans.
The EU first addressed disinformation as a matter of priority for security reasons. Following its annexation of Crimea in March 2014, Russia demonstrated disinformation to be a key method of hybrid warfare. In response to representations from a small group of concerned member states, the European Council “stressed the need to challenge Russia’s ongoing disinformation campaigns” in March 2015.14
This push resulted in the creation of the East StratCom Task Force within the EEAS’s Strategic Communications Division. The task force was established to effectively communicate and promote EU policies toward the eastern neighborhood; strengthen the overall media environment in the eastern neighborhood and in member states, including by supporting media freedom and strengthening independent media; and improve the EU’s capacity to forecast, address, and respond to disinformation activities by Russia.15
Many observers hoped that the East StratCom Task Force would find evidence of how Russian state-sponsored disinformation infiltrated Western media debates and would support civil society in pushing back against it.16 The task force produces a weekly review of pro-Kremlin disinformation targeting the West as a flagship product on the EUvsDisinfo web platform, and its database features over 8,000 examples of disinformation.17 Following three years of funding from the European Parliament, its team has now grown to sixteen staff members with extensive (but presently outsourced) capabilities in the areas of media monitoring and strategic communications. This funding source expires at the end of 2020 and is not renewable.
The EU has also sought to collaborate with private companies to help stem the tide of hostile disinformation. In September and October 2018, it launched a Code of Practice on Disinformation together with roadmaps for implementation from partners in the private sector. Running for a twelve-month trial period (which covered the European Parliament elections in May 2019), the code was an experiment in voluntary self-regulation by the tech industry.
Signatories made commitments in five areas: online advertisements, political advertising, integrity of services, transparency for consumers, and transparency for researchers.18 Private sector partners published reports detailing their actions to mitigate disinformation. However, the signatories self-reported their progress, and the information was not verified by an external body. The lessons from this process will feed into further EU policy developments in this area. A recent Carnegie paper details some of the most important lessons from the code process.19
In December 2018, the European Commission launched its Action Plan Against Disinformation, which remains a key pillar of EU policy, granting mandates to several operational instruments. This measure placed disinformation within the context of hybrid threats and highlighted the role of strategic communications by the EEAS “as a priority field for further work.”20 The action plan emphasized four areas of work: improving the capabilities of EU institutions to detect, analyze, and expose disinformation; strengthening coordinated and joint responses to disinformation; mobilizing the private sector to tackle disinformation; and raising awareness and improving societal resilience.21 It proposed maintaining the mandate of the East StratCom Task Force and reviewing the mandates of the Western Balkans and South Task Forces.22 The action plan recommended an expansion of their resources and capabilities, as well as the creation of a Rapid Alert System to strengthen coordination among EU institutions, member states, and other relevant international networks. It also proposed initiatives in the areas of strategic communications, media literacy, and high-quality journalism.
The EEAS launched the Rapid Alert System in March 2019 to enable common situational awareness related to disinformation spread across EU member states, as well as the development of common responses.23 The system consists of a rudimentary platform for information sharing, as well as a network of points of contact in the various EU member states. The Rapid Alert System is intended to connect to existing real-time monitoring capabilities inside and outside of the EU, such as the Emergency Response Coordination Centre and the EEAS Situation Room, as well as the G7 Rapid Response Mechanism and the North Atlantic Treaty Organization (NATO), though this goal has been only partially realized. The system is therefore, in theory at least, an important platform for information sharing from an international perspective.
So far, only a relatively small number of highly engaged EU member states share information through the Rapid Alert System. Major differences in how member states view the threat of disinformation are reflected in their use of the platform. In particular, a lack of trust between member states has led to low levels of information sharing and engagement. One successful aspect appears to be the networks and relationships formed among small coalitions of like-minded actors. Regular meetings have been held since early 2019, but the system's alert function had not yet been triggered as of June 2020.
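The Rapid Alert System's internal data format is not public, so the following is purely an illustrative sketch of what a minimal shared alert record might contain; every field name and value here is an assumption.

```python
import json
from datetime import datetime, timezone

# Hypothetical alert record for illustration only; the Rapid Alert
# System's actual format and fields are not public.
alert = {
    "issued_by": "member-state point of contact",
    "issued_at": datetime.now(timezone.utc).isoformat(),
    "category": "disinformation",  # see the terminology proposed below
    "summary": "Coordinated narrative alleging EU inaction during the pandemic",
    "observed_platforms": ["social media", "fringe news sites"],
    "suggested_response": "share situational awareness; no public attribution yet",
}

print(json.dumps(alert, indent=2))
```

A common record structure of this kind is also what would allow alerts to be passed on to partner networks such as the G7 Rapid Response Mechanism and NATO.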
EU-affiliated election observation missions also have a role to play. In October 2019, the European Council issued a document titled “Council Conclusions on Democracy,” which observed new challenges to democracy emerging around the world.24 These include the undermining of democratic processes and institutions, low levels of trust, shrinking democratic space for civil society, increased violations of human rights and fundamental freedoms, and manipulation using online technologies. This last point includes issues of disinformation, hate speech, privacy, and campaign funding. The European Council made commitments to strengthening the EU’s democracy-building capabilities around the world, including promoting instruments created to mitigate the effects of online interference during elections.
As a first step, election observation missions of the EU and its member states have been developing a methodology to monitor online political campaigns. In the case of EU missions, this methodology has been road-tested in elections in Peru, Sri Lanka, and Tunisia, and it will become a standard part of all future missions. In addition, it will create a basis for EU support to strengthen research, monitoring, and oversight capacities in third-country academic circles and civil society.
In addition to the aforementioned measures, the European Commission is also developing two major new policies. First, it is preparing the 2020–2024 European Democracy Action Plan,25 which includes specific commitments to project EU values worldwide.26 This will likely include significant policy commitments at the intersection of disinformation, electoral protection, digital technologies, and public-private partnerships. In this regard, it will set out next steps for building on the Code of Practice and the Action Plan Against Disinformation. Second, building on existing e-commerce rules, the EU is preparing a Digital Services Act.27 Among other things, this measure will set out regulatory powers for the EU over digital platforms, which are likely to include powers of regulation and auditing relating to online disinformation.
Disinformation has been an ongoing threat to the EU during the coronavirus pandemic. The ability of EU institutions to handle this challenge provides a valuable snapshot of the strengths and weaknesses of their current policy instruments. This paper does not analyze the overall EU response to the pandemic but instead concentrates on the lessons relevant to the future of EU disinformation policy. Throughout this period, the EU has continued to grapple with disinformation threats that have some distinctive characteristics, as highlighted by multiple sources.28
The EU institutions were committed to exposing the threat of disinformation during the early phases of the pandemic. However, a lack of coherent policy—for example, in terms of supporting member states such as Italy that were exposed to the virus early on—contributed to an environment in which disinformation could spread more readily. Lacking clear strategic communications about their own policies at that point, the EU institutions focused on exposing disinformation spread from Russia and misinformation spread by individuals. This focus allowed the disinformation policy area to achieve perhaps its highest-ever profile within the EU institutions, but at the same time it risked politicizing and placing undue pressure on the instruments for countering disinformation.
As of early June 2020, the EUvsDisinfo web platform had identified around 570 cases of coronavirus-related disinformation emanating from pro-Kremlin media.29 Simultaneously, in light of the increased political and public interest in disinformation, the EEAS Strategic Communications and Information Analysis Division produced a series of special reports on coronavirus disinformation, published on the EUvsDisinfo platform. These summarize the research available from multiple international sources and serve as snapshots of findings from the expert community rather than independent research conducted by the EEAS.30
Subsequent leaks of these reports, and the decision to produce separate public and internal versions, demonstrate the ad hoc nature of the processes by which these documents were created. The details of the most serious leaks, which relate to alleged Chinese pressure on the EU to change the third report, are not discussed here.31
Some immediate recommendations for the future of EU disinformation policy, tailored to the various relevant actors, are outlined below.
An approach for the next stage of EU policy on disinformation—particularly with respect to the European Democracy Action Plan—is outlined below. It defines a way ahead and lays out the main options that should be considered to meet future threats. It rests on four pillars, which build on the principles described above and create a coherent stance fit for the current and near-future threat landscape.
The four pillars are: formulating shared terminology for combating disinformation; deterring adversaries who spread disinformation and conduct influence operations; crafting nonregulatory interventions to protect online users; and establishing an independent, transparent auditing regime for certain online platform functions.
The EU should first revise the terminology used to support disinformation policy and analysis to make it easier to distinguish between different aspects of the problem. Disinformation is currently used as a catchall term that does not help the EU institutions define different areas of problematic behavior. It muddles the actions of individuals inadvertently sharing incorrect information with the hybrid influence campaigns of hostile states.
A first step is therefore to create a rigorous framework designed to define the scope of the challenge and assign responsibilities. The second paper in this series proposes terminology and a framework capable of systematizing the EU institutions’ disinformation work, which is briefly outlined below.
Four terms should become authoritative definitions preferred by all EU institutions in their engagements with stakeholders: misinformation, disinformation, influence operations, and foreign interference. The benefit of differentiating terms is to align democratic concerns—such as freedom of expression, data transparency, and privacy—with the concerns of national and European security, including around elections. To resolve these tensions, the paper recommends using terminology that reflects institutional ownership of various policy areas.
Misinformation should be defined as the distribution of verifiably false content without an intent to mislead or cause harm. Countermeasures should be developed from the perspective of home affairs and the health of public debate, and they should fall under the responsibility of the European Commission’s Directorate-General for Justice and Consumers (DG JUST); the Directorate-General for Communications Networks, Content, and Technology (DG CNECT); the Directorate-General for Education and Culture (DG EAC); the Directorate-General for Communication (DG COMM); and the Joint Research Centre (JRC).
By contrast, disinformation should be defined as the creation, presentation, and dissemination of verifiably false content for economic gain or to intentionally deceive the public, which may cause public harm. Countermeasures should cover security considerations, strategic communications for countering disinformation, oversight of digital platforms, and research collaboration, and they should fall under the responsibility of the above institutions as well as the Directorate-General for Neighbourhood and Enlargement Negotiations (DG NEAR) and the EEAS.
Meanwhile, influence operations should be defined as coordinated efforts to influence a target audience using a range of illegitimate and deceptive means, in support of the objectives of an adversary. The development of countermeasures should be the responsibility of the EEAS, in conjunction with other institutions listed above, in line with its responsibilities for external relations, foreign interference, threat monitoring and analysis, resilience building in the EU’s neighborhoods, and third-country election monitoring.
Finally, foreign interference should be defined as coercive, deceptive, and/or nontransparent efforts by a foreign state actor or its agents to disrupt the free formation and expression of political will, during elections, for example. The development of countermeasures should be the responsibility of the EEAS, in conjunction with the other institutions listed above, and these countermeasures should emphasize the distinct nature of responses targeting foreign state actors. Integrating such responses with intelligence, hybrid, and cyber capabilities would also be required for countering influence operations and foreign interference.
This terminology is escalatory. Foreign interference can involve several influence operations. Influence operations can include many examples of disinformation. Disinformation can cause or be derived from misinformation. Institutional ownership should be developed on this understanding; for example, the EEAS would be responsible for countering disinformation spread by pro-Kremlin sources on the grounds that such disinformation is part of influence operations and a tool of foreign interference.
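As a rough illustration of how this escalatory terminology could map onto institutional ownership, the sketch below encodes the four categories in order of escalation and routes each to the institutions named above. The enum, the ordering, and the routing function are illustrative constructions, not an existing EU system.

```python
from enum import IntEnum

class ThreatCategory(IntEnum):
    """The four proposed terms, ordered by escalation: each level can
    contain or give rise to instances of the levels below it."""
    MISINFORMATION = 1        # false content, no intent to mislead or harm
    DISINFORMATION = 2        # intentional deception or economic gain
    INFLUENCE_OPERATION = 3   # coordinated, deceptive adversary campaign
    FOREIGN_INTERFERENCE = 4  # coercive effort by a foreign state actor

# Lead responsibility for countermeasures, per the assignments above
# (the EEAS acts in conjunction with the other institutions listed).
OWNERS = {
    ThreatCategory.MISINFORMATION: ["DG JUST", "DG CNECT", "DG EAC", "DG COMM", "JRC"],
    ThreatCategory.DISINFORMATION: ["DG JUST", "DG CNECT", "DG EAC", "DG COMM", "JRC", "DG NEAR", "EEAS"],
    ThreatCategory.INFLUENCE_OPERATION: ["EEAS"],
    ThreatCategory.FOREIGN_INTERFERENCE: ["EEAS"],
}

def escalate(found: ThreatCategory, context: ThreatCategory) -> ThreatCategory:
    """Classify a case at the highest applicable level: disinformation that
    forms part of an influence operation is handled as the latter."""
    return max(found, context)

# Example: pro-Kremlin disinformation spread as part of an influence
# operation is escalated and therefore owned by the EEAS.
level = escalate(ThreatCategory.DISINFORMATION, ThreatCategory.INFLUENCE_OPERATION)
print(level.name, "->", OWNERS[level])
```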
The benefit of this approach is that misinformation and disinformation are treated primarily as problems of democracy to be dealt with by improving the health of public debate, while influence operations and foreign interference are treated as security concerns in the context of attempting to influence the calculus of adversary actors. This approach also acknowledges that actor-specific knowledge is a necessary foundation of the disinformation debate. The third paper in the series, “Developing Policy Interventions for the 2020s,” outlines the main options for action, which are summarized below.
Among these options are a range of regulatory and nonregulatory measures that should be developed into a coherent, comprehensive EU disinformation policy. Nonregulatory measures are particularly important, as digital platforms have multiple tools at their disposal for modifying user behavior. Building on the results of the Code of Practice on Disinformation, the EU should work with and better support platforms in establishing guidelines on best practices that set clear standards of responsible platform behavior aligned with fundamental freedoms.
The paper details several examples where EU guidance on areas such as terms of service, terminology, promoting/demoting content, political parties, and research collaboration could be beneficial to current and emerging digital platforms, member states, and the international community.
The European Commission will likely favor regulatory interventions, particularly in areas where the voluntary code of practice fell short of delivering the desired results. The paper on policy interventions outlines an overall approach to regulation that places the onus on digital platforms to fulfill a duty of care, while enabling independent verification of their results. It suggests a differentiated approach to data access, on the grounds that data transparency should be viewed as a means of improving policymaking, not as an end in itself.34
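To give a sense of what independent verification of self-reported results could look like in practice, here is a deliberately simple sketch: an auditor re-checks a platform's claimed takedown rate against an independently drawn sample. The figures, the metric, and the tolerance threshold are all assumptions for illustration, not part of any proposed regulation.

```python
# Illustrative only: all figures and the tolerance threshold are assumed.
platform_reported_rate = 0.92  # platform claims 92% of flagged items were removed

# (item_id, was_removed) pairs drawn independently by the auditor.
audit_sample = [
    ("item-001", True),
    ("item-002", True),
    ("item-003", False),
    ("item-004", True),
    ("item-005", False),
]

observed_rate = sum(removed for _, removed in audit_sample) / len(audit_sample)

TOLERANCE = 0.10  # divergence the auditor accepts before escalating
if abs(observed_rate - platform_reported_rate) > TOLERANCE:
    print(f"Discrepancy: reported {platform_reported_rate:.0%}, observed {observed_rate:.0%}")
else:
    print("Self-reported figures are consistent with the audit sample.")
```

The design point is that verification operates on sampled outcomes rather than raw user data, consistent with a differentiated approach to data access.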
The specific tasks that this approach comprises are detailed in the third paper.
The overall package of policy recommendations presented here is designed with a view not just to solving European problems but to creating a default global disinformation policy that democracies can adapt to local conditions. The recommendations promote evidence-based policymaking based on a logical alignment between terminology, interventions, and empirical data. They balance concerns about fundamental freedoms with security concerns and distinguish between the types of problems that disinformation entails and the types of actors it involves. The interventions are designed to reframe the issue in terms of influencing the calculus of adversary actors so that they no longer perceive disinformation and interference as a beneficial course of action.
1 In a speech in January 2020, Vice President of the European Commission for Values and Transparency Věra Jourová named Russia and China as “specific external actors . . . that are actively using disinformation and related interference tactics to undermine European democracy.” See Věra Jourová, “Opening Speech of Vice-President Věra Jourová at the Conference ‘Disinfo Horizon: Responding to Future Threats,’” European Commission, January 30, 2020, https://ec.europa.eu/commission/presscorner/detail/en/speech_20_160.
2 See the annotated list of foreign influence efforts in Diego A. Martin and Jacob N. Shapiro, “Trends in Online Foreign Influence Efforts,” Empirical Studies of Conflict Project, July 8, 2019, https://scholar.princeton.edu/sites/default/files/jns/files/trends_in_foreign_influence_efforts_2019jul08_0.pdf.
3 Judit Bayer, Natalija Bitiukova, Petra Bárd, Judit Szakács, Alberto Alemmano, and Erik Uszkiewicz, Disinformation and Propaganda: Impact on the Functioning of the Rule of Law in the EU and Its Member States (Brussels: European Union, 2019), 70, 74, https://www.europarl.europa.eu/RegData/etudes/STUD/2019/608864/IPOL_STU(2019)608864_EN.pdf; and Michael Meyer Resende, Marek Mracka, and Rafael Goldzweig, “EU EOMS Core Team Guidelines for Observing Online Campaign (2.0),” European Union Election Observation, June 3, 2019, 4–7.
4 Bayer et al., Disinformation and Propaganda, 74; and Resende, Mracka, and Goldzweig, “EU EOMS Core Team Guidelines for Observing Online Campaign (2.0),” 5–6.
5 Bayer et al., Disinformation and Propaganda, 74; and Resende, Mracka, and Goldzweig, “EU EOMS Core Team Guidelines for Observing Online Campaign (2.0),” 7.
6 Bayer et al., Disinformation and Propaganda, 74; and Resende, Mracka, and Goldzweig, “EU EOMS Core Team Guidelines for Observing Online Campaign (2.0),” 6–7.
7 Duncan Hollis, “The Influence of War; The War for Influence,” Temple International and Comparative Law Journal 32, no. 1 (Spring 2018): 39.
8 Duncan Hollis, “Why States Need an International Law for Information Operations,” Lewis and Clark Law Review 11, no. 4 (2007): 1050; Hollis, “The Influence of War; The War for Influence,” 40; and Nicholas Tsagourias, “Electoral Cyber Interference, Self-Determination and the Principle of Non-Intervention in Cyberspace,” European Journal of International Law EJIL:Talk! (blog), August 26, 2019, https://www.ejiltalk.org/electoral-cyber-interference-self-determination-and-the-principle-of-non-intervention-in-cyberspace.
9 Nicholas Tsagourias, “Electoral Cyber Interference, Self-Determination and the Principle of Non-Intervention in Cyberspace.”
10 Ibid.
11 Ibid.
12 “Strategic Communications,” European External Action Service, https://eeas.europa.eu/headquarters/headquarters-homepage/100/strategic-communications_en.
13 The concept of foreign manipulative interference has been used on occasion in official documentation without being defined. The second report in this series offers a detailed terminological discussion.
14 “European Council Conclusions on External Relations (19 March 2015),” European Council, March 19, 2015, http://www.consilium.europa.eu/en/meetings/european-council/2015/03/19-20/.
15 High Representative of the Union for Foreign Affairs and Security Policy, “Action Plan Against Disinformation,” December 5, 2018, 4, https://ec.europa.eu/commission/sites/beta-political/files/eu-communication-disinformation-euco-05122018_en.pdf.
16 For a discussion of the East StratCom Task Force’s launch period policies, see Corneliu Bjola and James Pamment, “Revisiting Containment Strategy in the Digital Age,” Global Affairs 2, no. 2 (May 2016): 131–142, https://doi.org/10.1080/23340460.2016.1182244.
17 “EUvsDisinfo,” European Union East StratCom Task Force, https://euvsdisinfo.eu.
18 “EU Code of Practice on Disinformation,” European Commission, September 26, 2018, https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=54454.
19 James Pamment, “EU Code of Practice on Disinformation: Briefing Note for the New European Commission,” Carnegie Endowment for International Peace, March 3, 2020, https://carnegieendowment.org/2020/03/03/eu-code-of-practice-on-disinformation-briefing-note-for-new-european-commission-pub-81187.
20 High Representative of the Union for Foreign Affairs and Security Policy, “Action Plan Against Disinformation,” 1.
21 Ibid., 5.
22 Ibid.
23 “Factsheet: Rapid Alert System,” European External Action Service, March 2019, https://eeas.europa.eu/sites/eeas/files/ras_factsheet_march_2019_0.pdf.
24 “Democracy: EU Adopts Conclusions,” European Council, October 14, 2019, https://www.consilium.europa.eu/en/press/press-releases/2019/10/14/democracy-eu-adopts-conclusions.
25 “Human Rights and Democracy: Striving for Equality Around the World,” European Commission, March 25, 2020, https://ec.europa.eu/commission/presscorner/detail/en/ip_20_492.
26 “Legislative Train Schedule: A New Push for European Democracy,” European Parliament, https://www.europarl.europa.eu/legislative-train/theme-a-new-push-for-european-democracy/file-european-democracy-action-plan.
27 “New EU Rules on E-commerce,” European Commission, last updated March 8, 2020, https://ec.europa.eu/digital-single-market/en/new-eu-rules-e-commerce.
28 “EEAS Special Report: Disinformation on the Coronavirus—Short Assessment of the Information Environment,” EUvsDisinfo, March 19, 2020, https://euvsdisinfo.eu/eeas-special-report-disinformation-on-the-coronavirus-short-assessment-of-the-information-environment; “EEAS Special Report Update: Short Assessment of Narratives and Disinformation Around the COVID-19 Pandemic,” EUvsDisinfo, April 1, 2020, https://euvsdisinfo.eu/eeas-special-report-update-short-assessment-of-narratives-and-disinformation-around-the-covid-19-pandemic; “EEAS Special Report Update: Short Assessment of Narratives and Disinformation Around the COVID-19/Coronavirus Pandemic (Updated 2–22 April),” EUvsDisinfo, April 24, 2020, https://euvsdisinfo.eu/eeas-special-report-update-2-22-april; “Coronavirus Disease (COVID-19) Advice for the Public: Myth Busters,” World Health Organization, https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters; Jeff Kao and Mia Shuang Li, “How China Built a Twitter Propaganda Machine Then Let It Loose on Coronavirus,” ProPublica, March 26, 2020, https://www.propublica.org/article/how-china-built-a-twitter-propaganda-machine-then-let-it-loose-on-coronavirus; Laurence Dodds, “China Floods Facebook With Undeclared Coronavirus Propaganda Ads Blaming Trump,” Telegraph, April 5, 2020, https://www.telegraph.co.uk/technology/2020/04/05/china-floods-facebook-instagram-undeclared-coronavirus-propaganda; and Betsy Woodruff Swan, “State Report: Russian, Chinese and Iranian Disinformation Narratives Echo One Another,” Politico, April 21, 2020, https://www.politico.com/news/2020/04/21/russia-china-iran-disinformation-coronavirus-state-department-193107.
29 “Coronavirus: Stay Up To Date,” EUvsDisinfo, https://euvsdisinfo.eu/disinformation-cases/?disinfo_keywords%5B%5D=106935&date=.
30 “EEAS Special Report: Disinformation on the Coronavirus—Short Assessment of the Information Environment,” EUvsDisinfo; “EEAS Special Report Update: Short Assessment of Narratives and Disinformation Around the COVID-19 Pandemic,” EUvsDisinfo; and “EEAS Special Report Update: Short Assessment of Narratives and Disinformation Around the COVID-19/Coronavirus Pandemic (Updated 2-22 April),” EUvsDisinfo.
31 Alberto Nardelli, “The EU Was Accused of Watering Down a Report About Chinese Coronavirus Disinformation. In Response, It Has Attacked Leaks and Whistleblowers,” BuzzFeed News, May 5, 2020, https://www.buzzfeed.com/albertonardelli/eu-china-coronavirus-disinformation-report.
32 Josep Borrell Fontelles, “EEAS Special Report on the Narratives and Disinformation Around the COVID-19/Coronavirus Pandemic: Opening Statement by Josep Borrell Fontelles, High Representative and Vice-President of the EC,” European Parliament, April 30, 2020, https://multimedia.europarl.europa.eu/en/eeas-special-report-on-the-narratives-and-disinformation-around-the-covid-19coronavirus-pandemic_I190055-V_v; and Hannah Ritchie, “EU Chief Denies Disinformation Report Was Watered Down for China,” CNN, May 1, 2020, https://edition.cnn.com/2020/05/01/europe/eu-ursula-von-der-leyen-amanpour-china-intl/index.html.
33 Samuel Stolton, “EU Rapid Alert System Used Amid Coronavirus Disinformation Campaign,” Euractiv.com, March 4, 2020, https://www.euractiv.com/section/digital/news/eu-alert-triggered-after-coronavirus-disinformation-campaign.
34 Mark MacCarthy, “Transparency Requirements for Digital Social Media Platforms: Recommendations for Policy Makers and Industry,” Transatlantic Working Group on Content Moderation Online and Freedom of Expression, February 12, 2020, 1, 10, https://www.ivir.nl/publicaties/download/Transparency_MacCarthy_Feb_2020.pdf.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.