Artificial intelligence (AI) is expected to play a major role in shaping global competitiveness and productivity over the next couple of decades, granting early adopters significant societal, economic, and strategic advantages. As the pace of AI innovation and development picks up—underpinned by advancements in big data and high-performance computing—the United States and China are both in the driver’s seat. Europe, meanwhile, despite having certain advantages such as a strong industrial base and leading AI research and talent, is punching far below its weight. This underperformance stems largely from the fragmentation of the EU’s digital market, difficulties in attracting human capital and external investment, and a lack of commercial competitiveness.

Fortunately, in recent years, European leaders have recognized the importance of not lagging behind on AI and have sought to raise their ambitions. Leaders such as German Chancellor Angela Merkel and French President Emmanuel Macron have stressed the need for Europe to become a leading global player on AI, and the new European Commission has made AI a top priority for the next five years. By declaring AI a major strategic priority, several member states and EU institutions are taking steps to advance the continent’s ambitions for AI leadership. This includes rolling out devoted national and EU-level AI strategy documents, boosting research and innovation, and exploring new regulatory approaches for managing the development and use of AI.

Central to the EU’s efforts is the notion of AI that is “made in Europe,” that pays attention to ethical and human-centric considerations, and that is in line with core human rights values and democratic principles. Given the need to address the societal, ethical, and regulatory challenges posed by AI, the EU’s stated added value is in leveraging its robust regulatory and market power—the so-called “Brussels effect”—into a competitive edge under the banner of “trustworthy AI.” Designed to alleviate potential harm as well as to permit accountability and oversight, this vision for AI-enabled technologies could set Europe apart from its global competitors. It can also serve as a key component of increasing the EU’s digital sovereignty by ensuring that European users have more choice and control.

Yet normative principles and regulation alone are not enough for the EU to become a global AI leader. What is also required is a reevaluation of European competitiveness in this field in a way that leverages its comparative advantages and preserves its interests in a world where technology is increasingly emerging as a key driver of great-power rivalry. Amid concerns that Europe is losing ground to the United States and China, EU member states should acknowledge that the amount of resources required to keep up with the latest AI developments cannot be met by going it alone. There is a clear rationale for a stronger EU-level role and for a more coherent European-wide approach to AI that complements member states’ own actions.

This paper takes stock of whether existing EU and national strategies and funding initiatives are sufficient for Europe to be able to seize the opportunities afforded by AI. It argues that for its AI ecosystem to thrive, Europe needs to find a way to protect its research base, encourage governments to be early adopters, foster its startup ecosystem, expand international links, and develop AI technologies as well as leverage their use efficiently. More importantly, to be able to take these steps, Europe needs to catch up on digitizing its economies and complete the establishment of the digital single market once and for all.


The EU and its member states can pursue a host of steps to bolster Europe’s approach to AI. Specifically, they can:

  • complete the EU digital single market;
  • balance technological sovereignty with global supply chains;
  • lead on standard setting and regulations;
  • secure citizens’ trust in AI applications;
  • promote a vibrant Europe-wide AI ecosystem;
  • align the national AI strategies of EU member states;
  • safeguard dual-use technologies;
  • ensure close EU-UK cooperation on AI;
  • enhance transatlantic dialogue on AI;
  • engage global stakeholders on ethical AI; and
  • consider AI a facet of European strategic autonomy.


Artificial intelligence (AI) is expected to transform economies and impact virtually every aspect of human life over the next couple of decades.1 This disruptive potential has triggered rapidly growing investments in AI research and development (R&D) as well as speedy uptake in the public and private sectors worldwide. By 2030, AI could contribute as much as $13 trillion to the global economy, a figure that approaches the current annual economic output of China, the world’s second-largest economy.2 Moreover, as AI applications are expanding to a wide range of sectors, early adopters will be well positioned to reap significant economic and strategic benefits. While the hyped notion of an AI arms race is too simplistic to capture the complex dynamics of the global digital ecosystem, its continued use also risks further exacerbating global competition in this key strategic domain. The combination of potentially large economic, societal, and military dividends has propelled countries to join this so-called race and swiftly and effectively apply AI in as many sectors as possible.

The United States and China are ahead in the AI competition, although countries such as Israel, Russia, Singapore, and South Korea are investing heavily in AI as a strategic priority.3 Some European countries—such as France, Germany, Sweden, and the United Kingdom (UK)—are also at the forefront of the field, but they cannot necessarily compete globally alone. Europe as a whole is punching far below its weight. This reality stems largely from the fragmentation of the EU’s digital market, difficulties in attracting human capital and investment from outside Europe, and a lack of commercial competitiveness. While its policymakers frequently acknowledge the risks of lagging behind, Europe has some advantages, such as its strong industrial base and leading AI research and talent, that it can leverage to better compete globally.

Erik Brattberg
Erik Brattberg was director of the Europe Program and a fellow at the Carnegie Endowment for International Peace in Washington. He is an expert on European politics and security and transatlantic relations.

European governments and EU institutions are taking steps to upgrade the continent’s ambitions for AI leadership. Countries are rolling out dedicated AI national strategies and digital approaches. The European Commission has emerged as a key driver and agenda setter for a more coherent approach to AI in Europe.4 Its plan, set out in the white paper on AI released in February 2020, is to further boost the EU’s research and innovation as well as its technological and industrial capabilities. The EU argues its added value is in leveraging its strong regulatory and market power into a competitive edge under the unique selling proposition of “trustworthy AI” that is “made in Europe.”5 This approach is characterized by transparency, diversity, and fairness, and it is designed to alleviate potential harm as well as to permit accountability and oversight, thus safeguarding social and environmental well-being. Central to the EU’s efforts in the digital space is also a strong desire to be more self-sufficient. Commissioner for Internal Market and Services Thierry Breton, in laying out his vision for the EU, said: “we have to work on our technological sovereignty.”6

One particular area where the EU has the potential to be instrumental is in shaping the global normative agenda on a “human-centric” approach to AI.7 The aim is to set up a framework for an ethics-driven and trustworthy development of AI technology and applications in line with European values as well as to prepare the groundwork for a global alliance in this domain. Although it has been criticized for focusing too much on legal and ethical guidelines, the EU sees itself as having a first-mover advantage as a regulatory powerhouse when it comes to ethical AI by setting the stage for global standards of design and usability and for ensuring legal clarity in AI-based applications. As shown by the General Data Protection Regulation (GDPR), the EU’s strategic edge primarily resides in its market, normative, and regulatory powers—what has been described as the “Brussels effect”—though the digital single market is admittedly still a work in progress.8 However, while the European Commission’s digital sovereignty agenda may help advance certain AI developments in Europe, it is equally essential that the EU work closely with like-minded global partners on setting joint AI standards and regulations.

Yet regulations alone are not sufficient. The window of opportunity for consolidating a distinctive European approach to AI on the international stage is closing fast. What is needed is a reevaluation of European competitiveness in this field. Furthermore, Europe must take stock, critically and with foresight, of the options at its disposal to shape AI in a way that leverages its comparative advantages and preserves its interests, especially in a world where technology is increasingly emerging as a key aspect of great-power competition, particularly between the United States and China. Fortunately, European leaders seem to recognize that more action is needed and have recently taken steps to upgrade Europe’s digital ecosystem, though it is still too early to see what these actions will amount to. France, Germany, and other EU member states have subsidized and tried to create or favor national champions in the tech and telecommunications sector for decades, usually with little impact.

This paper examines the national strategies of major European countries and significant EU initiatives in AI, with special attention to Europe’s leadership potential and capacity to combine its technological and industrial strengths with its agenda-setting and regulatory power. Though some municipal authorities in Europe are discussing proposals to regulate or limit the use of facial recognition and other AI-related technologies,9 the focus here is at the national and EU levels rather than the sub-national one. The paper starts by assessing Europe’s role in the global AI competition. It compares the national AI strategies of various European actors (including Czechia, Estonia, Finland, France, Germany, Sweden, and the United Kingdom) and discusses government efforts to shore up Europe’s leadership position on AI. It then outlines and evaluates the EU’s distinctive approach and recent AI initiatives and goes on to explore strategic considerations facing European policymakers in a more competitive global environment. Finally, the paper puts forward recommendations toward a more coherent European AI strategy.

Europe’s Place in the Global AI Competition

The global competition in AI is fierce, between companies as well as states. Countries’ competitiveness can be measured in terms of market share, investment, and innovation prowess, as well as the strength of regulatory and ethical frameworks.

Market Share

When it comes to the global AI startup ecosystem, a study from 2018 noted that the top three players (measured in terms of number of AI startups) are the United States with 1,393 startups (40 percent), China with 383 startups (11 percent), and Israel with 362 startups (11 percent).10 Four European countries are among the top ten (the UK is in fourth place, France is in seventh, Germany is in eighth, and Sweden is in tenth). Collectively, though, Europe is second only to the United States, with 769 AI startups (22 percent of the global total). This shows that, while individual European countries may not be globally competitive, Europe has the potential to be a major player in AI if it can strengthen its digital single market, though Brexit will have long-term consequences for such efforts.

Raluca Csernatoni
Raluca Csernatoni is a fellow at Carnegie Europe, where she specializes on European security and defense, as well as emerging disruptive technologies.

Europe also lags in AI-related patent applications, though its filings for technologies related to the Internet of Things and the Fourth Industrial Revolution grew by 54 percent in 2016, driven mainly by increases in the UK, Germany, and France.11 The United States holds the most AI patent applications, receives the majority (66 percent) of AI-related private investment globally, and is home to the world’s highest-valued digital players (Google, Apple, Facebook, IBM, Microsoft, and Amazon).12 It therefore has a superior foundation for developing and implementing AI applications. But between 2013 and 2017, the number of patents in deep learning and AI published in China grew at a much faster rate than those in the United States.13 In 2017, China obtained 641 patents related to AI, compared with 130 in the United States.14

Moreover, Europe’s information technology assets are scattered across different countries. This is precisely why the establishment of the digital single market is considered so crucial for the EU’s global competitiveness. Fully completed, this would make the EU one of the largest and most valuable digital markets in the world. But the union needs to remove the remaining barriers to cross-border data flows and 5G networks, two tasks it is currently lagging behind on.15 The most recent factsheet from the European Commission from February 2019 stated that the European Parliament, the Council of the EU, and the European Commission had agreed on twenty-eight of the thirty legislative initiatives initially presented as part of the digital single market strategy launched in 2015.16 There are pending legislative initiatives that the European Parliament and the Council of the EU still need to agree on. The most important ones include the regulation of the cross-border portability of online content (geoblocking),17 the regulation of the free flow of nonpersonal data in the EU,18 and a regulation establishing a European High Performance Computing Joint Undertaking, which will pool EU and national funding to develop a pan-European infrastructure for supercomputing and to support related research and innovation activities.19

The European Commission has also taken nonlegislative steps to advance the digital single market strategy, including the Digital Education Action Plan20 (which includes eleven actions to support the development of technology use and digital competence in education); the High Level Expert Group on AI (see below); and the Fintech Action Plan21 (which includes initiatives to establish a more competitive and innovative financial market). The recent white paper on AI also proposes creating a “lighthouse center of research, innovation and expertise”;22 encouraging the flourishing of the European AI research community; and building world-class testing and experimentation sites across Europe. The continent is already home to one of the world’s largest nonprofit contract research institutes for software technology based on AI methods, the German Research Center for Artificial Intelligence.23 Founded in 1988, it has become Germany’s leading research center on innovative AI commercial software technology. On the same day the white paper was released, the European Commission also proposed a data strategy to promote a single data market and a European alternative for cloud-based services. While efforts to strengthen the digital single market are progressing, Brexit means a key European country is taking a different direction. London is Europe’s most important AI hub, home to 1,000 AI companies, thirty-five tech hubs, and reputed research centers such as the Alan Turing Institute,24 so the loss of the UK could slow Europe’s collective progress on AI.

Translating basic research into applied research and innovation in the civil and defense sectors in Europe is another issue to be addressed.25 Continued advancement in AI requires collaboration between industry, academia, and government as well as the industrywide development of solutions. These are things that Europe is traditionally quite good at and that, combined with its strength in basic research, could provide a strategic advantage. European companies are also ahead of Chinese and American ones in the adoption of robotic process automation.26 Moreover, there are plenty of AI companies in Europe, so establishing compliance regimes for them, as part of the European Commission’s ethical guidelines, could help spread EU norms on AI development and use.

Finally, European AI researchers enjoy excellent scientific standing, though as many AI journal and conference papers are published per year in China as in Europe.27 To some extent, the strength of cross-border academic research teams in Europe creates opportunities for cross-pollination with industry and government actors. Moreover, distributed but cooperative research clusters across Europe have the potential to reinforce such cross-pollination, and smaller initiatives are already laying the groundwork (including, for example, the Nordic Artificial Intelligence Institute,28 the Benelux Association for Artificial Intelligence,29 and the Robot Technology Transfer Network or ROBOTT-NET).30 There have been attempts to forge pan-European research networks such as the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE),31 supported by the European Association for Artificial Intelligence, and the AI4EU consortium supported by the European Commission’s Horizon 2020 program.32 As identified by the recent white paper on AI, opportunities for further cooperation between member states to capitalize on each other’s strengths, as well as the advantages of cross-border and sectoral networks of excellence, could help Europe better exploit its existing potential on AI.

External Investment and Innovation Prowess

When it comes to investment, the United States and China are also ahead of Europe. The United States is by far the leader in AI-related investment and venture capital. Whereas its academic institutions conduct the majority of basic research, the private sector is very active in applying research done across the country and elsewhere. These academic and private-sector players are also exceptionally good at attracting top global talent.33 Meanwhile, Chinese AI startups benefit from close ties with the government, which give them access to huge amounts of public-sector funding and early-adopter institutions. For example, in the domain of facial recognition, the company CloudWalk received a $301 million grant from the Guangzhou Municipal Government in 2017,34 while Megvii raised $460 million in a funding round led by the central government’s venture capital fund.35

Despite a steady increase since 2008, Europe still lags behind the United States and China in private investment in AI. In 2016, Europe devoted only 2.4–3.2 billion euros to AI investment, whereas Asia invested 6.5–9.7 billion euros and North America invested 12.1–18.6 billion euros.36 Private equity and venture capital firms have accounted for 75 percent of AI-related deal volume in Europe in the last ten years.37 Though reported amounts are difficult to compare because analysts use different definitions of AI, the UK, France, and Germany attracted the lion’s share of investment in AI companies over the past decade, evidence of a highly uneven European landscape.38 Moreover, while most AI-related investments in Europe are from European sources, many of the continent’s most successful digital companies—such as Skype (Estonia/Sweden),39 Shazam (UK),40 and Momondo (UK),41 among others—have been acquired by American or Chinese tech giants, making Europe a net importer of digital services despite its significant levels of innovation and many digital startups. The foreign-investment screening framework that the EU agreed to in April 2019, which seeks to protect critical technologies, could restrict Chinese investments into European AI startups.42

Innovation is paramount for the EU’s ability to remain competitive. Launched by the European Commission in October 2010, the Innovation Union—one of the seven flagship initiatives of the Europe 2020 strategy—aims to improve conditions and access to finance for research and innovation.43 As far as R&D is concerned, Europe spends 0.8 percentage points of gross domestic product (GDP) less every year than the United States and 1.5 percentage points of GDP less than Japan.44 Its performance in innovation nonetheless continues to improve.45 According to the 2019 edition of the European Innovation Scoreboard, the performance of twenty-four EU countries improved compared to the previous year, and the progress of lower-performing countries has accelerated compared to higher-performing countries. Since 2011, the EU’s average innovation performance has increased by nearly nine percentage points, and it even surpassed that of the United States for the first time in 2019. Within the EU, innovation performance has increased in twenty-five countries since 2011, with Sweden being the EU’s innovation leader in 2019, followed by Finland, Denmark, and the Netherlands.46 Lithuania, Greece, Latvia, Malta, the UK, Estonia, and the Netherlands are the fastest-growing innovators.47

Venesa Rugova
Venesa Rugova is a research assistant in the Europe program at the Carnegie Endowment for International Peace.

But China is catching up fast with a growth rate for innovation performance three times that of the EU, while Canada, Australia, and Japan all perform better than the EU.48 In China, industrial investments in R&D grew by 20 percent between 2017 and 2018, compared to 8 percent in the EU and 9 percent in the United States.49 Moreover, the EU is home to only 33 unicorns, startups valued at over $1 billion, compared to 151 in the United States and 83 in China.50 Europe also has fewer young leading innovators. Europe’s private research, development, and innovation investments are lagging behind, representing 1.3 percent of EU GDP compared to 1.6 percent in China, 2 percent in the United States, 2.6 percent in Japan, and 3.3 percent in South Korea.51 R&D intensity—that is, gross expenditures on R&D as a percentage of GDP—can also provide insights into what innovation objectives governments are pursuing. In 2018, the United States, Japan, and South Korea had the largest increases in R&D intensity. In the EU, gross R&D expenditures exceeded 2 percent of GDP for the first time, mainly due to positive trends in Germany, the UK, and Poland.52

Overall, in the context of an accelerating global drive for innovation, the EU must reinforce and implement its research, development, and innovation policy; maintain excellence in key enabling and dual-use technologies; and develop a European approach for critical technology infrastructure across the continent to sustain efforts to scale up and diffuse the use of technology.

Regulatory and Ethical Frameworks

When it comes to regulatory and ethical approaches, the EU fares better than the United States and China. Although the U.S. approach to AI has lacked an overall governmental strategy until recently, emphasizing the benefits of private sector innovation while keeping the government out of the way is not a new phenomenon for the country. Former president Barack Obama’s administration released two reports on AI: “The Future of Artificial Intelligence” and “AI, Automation and the Economy.”53 It also released three landmark studies on big data in 2014, 2015, and 2016.54 More recently, the administration of President Donald Trump has begun to shape its own approach with the launch of the American AI Initiative.55 The relevant executive order instructs the U.S. government to develop a plan to extend the United States’ lead in AI. The document aims to boost investment in AI research, standard setting, workforce training, and outreach to allies. However, while making clear that AI is a major priority, these policies have left some questions of implementation and the allocation of resources unaddressed. Congress has recently discussed new proposals to fund a national AI strategy.56

Moreover, the executive order and subsequent official documents have a heavy focus on protecting the United States’ AI advantage, and if this motive is accompanied by restrictions on cross-border flows of data, capital, talent, and know-how, it may hurt the country in the long run. Likewise, the Trump administration’s recent visa restrictions on foreign tech workers in the United States could also undermine an important source of U.S. competitiveness. In February 2020, the White House reportedly aimed to double the AI research budget: the 2021 budget proposal would increase nondefense AI R&D funding to nearly $2 billion and that for quantum computing to about $860 million over the next two years.57 In January 2020, the administration unveiled ten AI principles for federal government agencies to consider when drafting rules and laws for the use of AI in the private sector.58

These principles are informed by three main goals: to ensure public engagement; limit regulatory overreach; and promote trustworthy AI that is fair, transparent, and safe. As Lynne Parker, the deputy chief technology officer at the White House Office of Science and Technology Policy, explained, the principles are broadly defined to allow each government agency to create more specific regulations tailored to its sector.59 The White House is attempting to address concerns over the ethics and oversight of AI without hampering innovation. The White House also launched a series of public consultations before devising a plan on how to implement the principles.60 Although the principles serve to guide federal agencies on how to devise rules on the use of AI, the administration makes it clear that the key concern is limiting regulatory overreach and avoiding overregulation, a concern it has voiced to its European allies in regard to the European Commission’s AI white paper.61

China’s ambition is to become the world leader in AI as part of the government’s Made in China 2025 initiative. Beijing believes that AI technology is key to future global military and economic competition.62 In July 2017, China’s State Council published the “Next Generation Artificial Intelligence Development Plan,”63 which sets clear targets: to reach the same level in AI as the United States by 2020, to become the world’s premier AI innovation center by 2030, and to build a domestic AI industry worth 150 billion renminbi ($22.2 billion) by 2020 and 400 billion renminbi ($59.1 billion) by 2025.64

The China Electronics Standardization Institute, under the Ministry of Industry and Information Technology, is one of the key players in the country, having launched in January 2018 the “Artificial Intelligence Standardization White Paper,” which outlines the country’s national AI standardization framework and plan for AI capability development.65 Since the release of the white paper, the institute has been actively engaged in developing relevant international standards, including as an active member of the subcommittee of the International Organization for Standardization, which develops international standards for the AI industry.66 Chinese AI products are becoming harder to export as Western scrutiny about data privacy standards and security risks grows (as seen in the recent concerns about Huawei and 5G). Overall, however, China’s AI strategy has been called the most comprehensive and most ambitious in the world.67

The country is also seeking to promote international partnerships on AI, as part of its efforts to set norms in this field and export its government-surveillance practices to other countries. As part of its Digital Silk Road, Beijing is investing heavily in digital infrastructure and in other countries to spread its approach to AI, which Beijing hopes will move these countries closer to its own governance model and make them more dependent on China. A related worrisome development is China’s testing and use of AI for censorship, repression, and extensive surveillance through initiatives like its pilot social credit systems.68 On top of this, Beijing is paying significant attention to the role of AI in national security in the belief that integrating AI into military technologies can allow China to overtake the United States in military supremacy.69

The EU’s white paper on AI and any future legislation are likely to influence the global regulatory debate. Here the EU’s experience with data privacy regulations such as the GDPR can serve as an example of the Brussels effect by establishing global norms for emerging technologies. The GDPR seeks to incentivize companies to find innovative solutions for processing data within its legal remit. The principle of accountability enshrined in the GDPR aims to foster data accuracy; it implies increasing trust in the source of such data and the reliability of results based on this data. Ultimately, the EU’s focus on protecting user privacy could be an asset if it enables European innovators to build AI applications that are more consumer friendly. Since regulating AI also requires other regulations such as data protection and content moderation as preconditions, the EU is rather well positioned on this front.

Europeans could be more readily accepting of AI technologies that respect fundamental rights and consumer rights, and the use of such technologies might rapidly emerge as global standards materialize, granting Europe a first-mover advantage. This is especially the case when it comes to what the white paper identifies as high-risk AI applications (such as uses related to healthcare, transport, energy, and the public sector). Moreover, efforts to build trust (via the GDPR or future AI legislation) are important, but since the implications of advanced AI are not yet clear, neither are the policy measures that will be needed to properly address risks. Depending on how it is interpreted and enforced,70 the GDPR could become a unique comparative advantage for the EU when it comes to making good on its ambitions to become a leader in “trustworthy AI.”71

AI has the potential to have significantly negative impacts on societies globally. Concerns about technologically driven or exacerbated unemployment are quite real in many countries, and rising technology-induced displacement of jobs could contribute to increased support for populist parties and movements, which could destabilize democracies around the world. The EU’s focus on responsible and safe AI early on could give it an edge when it comes to setting ethical and regulatory standards. The EU’s approach would not regulate the technology as such, nor its applications, but rather would help guide how the applications are developed and deployed. By reporting, monitoring, and analyzing the progress of AI, the EU could position itself to define quality, build alliances with like-minded partners, and lead multilateral initiatives.

In sum, Europe is behind the United States and China on many of the key dimensions of AI competitiveness but ahead on regulatory approaches. A viable European strategy for competing would therefore include redoubled efforts to catch up when it comes to AI research and innovation, investment, data, and adoption, while simultaneously seeking to leverage Europe’s first-mover advantage on establishing regulatory frameworks pertaining to AI development and use.

National European Efforts on AI

European policymakers at the EU and national levels appear to recognize the importance of AI. For example, in November 2018, Chancellor Angela Merkel approved Germany’s 3 billion euro plan for AI, stating that “[t]oday, Germany cannot claim to be among the world leaders in artificial intelligence. Our aspiration is to make ‘Made in Germany’ a trademark also in artificial intelligence, and to ensure that Germany takes its place as one of the leading [AI] countries in the world.”72 Similarly, French President Emmanuel Macron has emphasized the importance of AI for France and the EU. In March 2018, he said: “I think artificial intelligence will disrupt all the different business models and it’s the next disruption to come. So I want to be part of it. Otherwise I will just be subjected to this disruption without creating jobs in this country.”73 Other European leaders have also increasingly expressed interest in AI and the need for European competitiveness in this field. These comments demonstrate a rising sense of urgency across European capitals around the importance of AI and for Europe to play a leading role in the digital age.

They also reflect a growing realization that more digital sovereignty is a prerequisite for great-power status and that such status is becoming even more strategically important as new and emerging technologies such as AI could concentrate economic power in extraordinary ways. Compared to previous industrial revolutions that harnessed the powers of steam, electricity, and computers, the Fourth Industrial Revolution is unique primarily due to the unprecedented scale, fast convergence, and yet-to-be-discovered impact of emerging technological breakthroughs (including AI and especially the subfield of machine learning). From this perspective, economic and political power will evolve in major ways, potentially shifting power toward a few powerful digital monopolies and tech giants while also enabling new and disruptive companies to quickly rise.

To manage this wave of transformations, several European countries—notably Czechia, Estonia, Finland, France, Germany, Sweden, and the UK—have developed AI strategies, with others planning to do so in the near future (see table 1). To varying degrees, these strategies outline specific actions, commit significant amounts of money to AI development, and seek to uphold European values and advance AI in an ethical manner that clearly benefits society.74 The following analysis of national approaches to AI encompasses four dimensions: government investment, private-sector innovation, the public-private AI ecosystem, and regulations and ethics.

Table 1. National Approaches to AI in Europe
Date Country AI Deliverable/Action Plan
December 18, 2017 Finland Publication of a national AI strategy
March 6, 2018 UK Launch of “Sector Deal for AI” report
March 2018 Italy Publication of “AI at the Service of Citizens” white paper
March 29, 2018 France Publication of a national AI Strategy (the Villani report)
May 16, 2018 Sweden Publication of a national AI strategy
November 16, 2018 Germany Publication of a national AI strategy
March 2019 Spain Release of a national research, development, and innovation strategy in AI
March 14, 2019 Denmark Publication of a national AI strategy
March 14, 2019 Lithuania Publication of “Lithuanian Artificial Intelligence Strategy: A Vision of the Future”
March 18, 2019 Belgium Launch of the “AI4Belgium Initiative strategy”
May 6, 2019 Czechia Publication of a national AI strategy
May 24, 2019 Luxembourg Publication of a national AI strategy
June 11, 2019 Portugal Publication of the national AI strategy “AI Portugal 2030”
June 26, 2019 Austria Publication of “Artificial Intelligence Mission Austria 2030”
July 25, 2019 Estonia Publication of a short-term national AI strategy for 2019–2021
August 21, 2019 Poland Launch of the Artificial Intelligence Development Policy for 2019–2027
October 3, 2019 Malta Release of “A Strategy and Vision for Artificial Intelligence in Malta 2030”
October 9, 2019 Netherlands Publication of a “Strategic Action Plan for AI”
January 14, 2020 Norway Publication of a national strategy on AI

France

France’s approach to AI was first delineated in a 2018 government-commissioned report (the Villani report) called “For a Meaningful Artificial Intelligence: Toward a French and European Strategy.”75 Noteworthy here is the use of the word “European” in the title. This strategic document crystallizes a comprehensive and forward-looking approach to AI that emphasizes more public research, resources, training, transfers, and innovation in four strategic sectors (healthcare, the environment, transportation and mobility, and defense and security).76 The report calls for making AI research and development interdisciplinary and inclusive by involving social scientists, eliminating potential biases in algorithms, exploring the complementarity of humans and AI in human-machine interactions, and promoting gender equality in scientific and technical sectors.77

The strategy also recognizes the necessity of treating the European data ecosystem as a common good, in which public authorities should introduce “new ways of producing, sharing and governing data.”78 It stresses the need to prevent the brain drain of France’s leading experts in the field, to make AI understandable to society at large, to strengthen R&D in AI technologies in meaningful and ethical ways, and to increase the gender balance and diversity in the field. Macron has announced 1.5 billion euros of public funding for AI by 2022. He has also emphasized leading “the European way” in this field, though it is unclear what precisely he means by this and whether it means pursuing a middle ground between the approaches of the United States and China.79

France’s AI strategy focuses on four major challenges: reinforcing the ecosystem to attract the best talent, developing an open data policy particularly in sectors where the country is already competitive, creating a regulatory and financial framework that favors emerging AI businesses, and developing AI regulations with respect to ethics and acceptable standards for citizens.80 On the one hand, the centralized nature of France’s political system can enable government agencies to set the parameters for AI applications in certain areas. On the other hand, centralized control of innovation can in the long run impede progress, as AI requires a broad spectrum of R&D and applications across different fields. Additionally, though it is well known for having a strong skills base in science, technology, engineering, and mathematics (STEM), France has fewer academic institutions and researchers directly involved in AI research than the UK and Germany.

To meet the need for increased cooperation between industry actors and universities, France’s planned centers for AI excellence may help to bring together researchers, developers, and users to ensure that scientific progress translates into industrial applications. This could also make the French R&D landscape more attractive to top international talent. The Station F campus in Paris was created to enable AI startups to receive advice from thirty nearby public institutions on legal issues and the impact of new AI technologies in France. France’s push on AI is already showing signs of paying off. According to one report, France had raised $1.2 billion in investment for AI startups by the end of 2019,81 making it Europe’s frontrunner in AI funding ahead of the UK.82

The French strategy emphasizes ethical considerations related to AI (such as the implications of self-driving cars, facial/image recognition, and privacy) as well as inclusivity and diversity (such as a goal of reaching a 40 percent share of female students in “digital subject areas” by 2020)—areas that are deemed important in terms of sustainable growth and future societal resilience.83 The French strategy is detailed: it outlines concrete steps for making the country more attractive to research, talent, and industry actors; for improving transparency and AI cooperation between different actors; and for integrating moral and ethical issues. It is one of the more ambitious European AI strategies: it includes more details than Germany’s, a timeline (unlike the UK’s strategy), and specific policies to achieve its goals.

The French AI strategy stands out for its government-led, top-down approach. This illustrates how much the government sees AI as strategically important. Yet it is unclear if this approach will create a sustainable ecosystem in which private sector–driven development and public-sector uptake will flourish. France’s focus on creating a European data ecosystem and Macron’s advocacy for a stronger EU could be positive for the continent’s AI landscape. An additional impetus for him is to position France as a destination for technology companies after Brexit. Companies like Google, Facebook, Uber, IBM, Samsung, and Microsoft have already opened or announced the creation of AI research centers in Paris.84

The UK

The UK’s approach to AI saw several major developments in 2018. These included the creation of new institutional structures such as the Office for AI and the Centre for Data Ethics and Innovation; the release of a new AI Sector Deal policy paper to spearhead cooperation between various governmental agencies and institutions, private companies, and academic centers; a large package of investment; and the publication of the House of Lords AI Select Committee’s report on the country’s ambition to set the agenda on ethical AI.85 In April 2018, the government announced an investment of nearly 1 billion pounds for its “AI Sector Deal,” made up of 603 million pounds in new government, industry, and academic spending, and up to 342 million pounds in previously announced state funding.86 However, the document has been criticized for not setting a clear timeline for these investments.87

The “AI Sector Deal” paper focuses on the five foundations of the UK’s industrial strategy: ideas, people, infrastructure, the business environment, and places. It also sets out a vision to respond to challenges and opportunities presented by AI based on: making the country a global center for AI by investing in R&D, skills, and regulatory innovation; supporting sectors to boost productivity through AI and data analytics; leading the world in the safe and ethical use of data and strengthening digital capabilities by establishing a Centre for Data Ethics and Innovation; and helping people develop the skills needed for the jobs of the future.88 The policy paper was a response to a report on the country’s industry and its potential to remain at the forefront of AI development and use and retain its world-leading status.89

These governmental initiatives aim to build a more coherent national narrative on AI and to rationalize existing thinly dispersed and uncoordinated institutional initiatives across various technological domains such as AI, autonomous systems, and robotics so as to meet the UK’s stated ambition to become a world leader in these sectors.90 While the country’s AI research is influential globally and while London has the highest concentration of AI startups in Europe as well as a strong ability to attract international investment in startups, the commercialization of research has traditionally been a weakness for the UK.91 It remains to be seen whether the “AI Sector Deal” will help fix this shortcoming by spurring cooperation between the private sector and academia. While the UK strategy does a fine job of identifying national strengths and outlining areas where targeted investments are needed to address weaknesses, the absence of a clear timeline for any of these investments is notable.

Additionally, with the possibility looming that the country may not reach a post-Brexit relationship agreement with the EU, it is unclear how the UK will continue to attract talent and investment from Europe, and the “AI Sector Deal” does not address this eventuality. At the same time, the UK accounts for nearly one-fifth of AI researchers in the EU; it is behind only the Netherlands in terms of the quality of AI research papers produced; it has a tradition of leading the EU in collaborating with third countries; and nearly 40 percent of European AI firms that have received at least $1 million in funding are based in the country.92 Nevertheless, with France eager to create a European data ecosystem and with Brexit potentially causing international technology companies to move from London to the rest of Europe, the UK may lose out not only on EU research funding but also on access to the European data pool. Similarly, collaboration between UK universities and their European counterparts could suffer as they lose out on lucrative EU-funded partnerships and grants. While the “AI Sector Deal” is well thought-out and has broad support from industry, academia, and the government, it is unclear if it will be able to prevent or mitigate the potentially damaging ripple effects Brexit will have on the country’s AI landscape.

Germany

Significantly shorter than its French equivalent, Germany’s AI strategy sets out twelve areas to address by 2025.93 These include: making Germany and Europe a leader in AI research to excel in future innovation, setting up innovation competitions and European innovation clusters, improving AI-related technology transfers to the economy and the Mittelstand (Germany’s small and medium-sized enterprises), establishing incentives for investors and founders of AI startups, promoting digital skills and AI-related education, and improving talent attraction and retention. From 2019 to 2025, the country aims to spend 500 million euros annually in support of these goals, which is more money over a longer timeframe than France’s AI funding.94

The strategy emphasizes having government administrators employ AI to offer better and more efficient services for citizens; making data available and facilitating its use; adjusting the country’s legislative framework to include algorithms and AI-based decisions, services, and products; establishing AI standards and norms on national, European, and international levels; ensuring national and international cooperation on AI-related developments; and establishing a broad public dialogue and encouraging political participation to incorporate AI into society in ways that the public deems ethically, legally, and institutionally correct. In addition, in October 2019, the Data Ethics Commission released guidelines for the development and use of AI, guidelines that became a model for the EU’s white paper.95

Germany seems committed to pursuing the responsible development and use of AI. The strategy builds on the strengths of the country’s economy and aims to expand them by improving the transfer of innovations into industry. It has a wide scope and allocates more funding to AI than other European strategies, but it does not go into many details or specify how funds will be allocated to meet its twelve stated priorities. The strategy’s focus on research volume and technology transfers ignores the fact that Germany is late to the AI game and does not have innovation hot spots like Cambridge or Zurich.96 The German Research Center for Artificial Intelligence, however, is considered the world’s largest nonprofit contract research institute for AI-based software technology.97 Additionally, most German universities are not allowed to pay salaries comparable to what top researchers might receive in the United States or China, which makes attracting talent even more difficult—an issue the strategy largely ignores.98

Sweden

Sweden’s “National Approach for Artificial Intelligence,” released in May 2018, outlines what is needed for the country to be at the forefront of AI development and use.99 While it does not include specific policies on how to achieve its stated objectives, the government’s goals include: developing standards and principles for safe, sustainable, and ethical AI; improving the digital infrastructure to leverage existing opportunities; increasing access to data; and playing an active role in EU efforts. The strategy stresses the country’s lack of skilled AI professionals and the need to increase basic and applied AI research within a legal framework that ensures sustainable (defined as ethical, reliable, safe, and transparent) AI development.

The government has since implemented several targeted initiatives to achieve its goals and has made good progress. Nonetheless, the strategy could benefit from the formulation of specific policies and the commitment of AI-designated funds to strategic initiatives. To further guide the country’s efforts and to promote a Swedish model for AI, a national AI council—similar to the one in Finland—has been established to bring together private-sector representatives, academics, and other experts.100

Since May 2018, the government has invested around 3.7 million euros in several universities to help train AI professionals.101 It also launched an AI Data Factory and Arena at the Lindholmen Science Park in Gothenburg to enable collaboration and strengthen Swedish companies.102 Since then, it has developed into a national center for AI called AI Innovation of Sweden, bringing together some fifty different partners.103 The Swedish innovation agency, Vinnova, has also launched several AI-related projects, such as e-healthcare systems for home care, use of AI in breast cancer screenings, and AI-controlled vehicles in mining operations.104

Another player is the government-funded research institute Rise, which has some sixty employees working on AI-related projects.105 Besides government funding for AI, Sweden also benefits from strong private-sector investment and a strong innovation climate with successful incubators and access to venture capital. In particular, the Wallenberg AI, Autonomous Systems and Software Program (WASP) plans to generate 520 million euros by 2029 toward AI research in Sweden.106 WASP includes some forty Swedish companies and academic institutions, and it focuses on machine learning, deep learning, and explainable AI.107 Another notable commercial effort is Zenuity, a joint venture led by Volvo Cars that has committed 95 million euros to research into self-driving cars.108

Finland

Finland released its AI strategy, “Finland’s Age of Artificial Intelligence,” in December 2017.109 It takes a more bottom-up approach to AI than most other European strategies and was produced by a working group on artificial intelligence. Some major intellectual and policy outputs have been the August 2018 report “Work in the Age of Artificial Intelligence—Four Perspectives on the Economy, Employment, Skills and Ethics,” produced by a working group on the transformation of society and work, and the June 2019 final report of Finland’s Artificial Intelligence Program, called “Leading the Way Into the Era of Artificial Intelligence.”110 To assist in drafting an AI development strategy and to provide advice to the government, the Ministry of Economic Affairs and Employment established a national steering group headed by former president of Nokia Pekka Ala-Pietilä, who now chairs the EU High-Level Expert Group on AI (AI HLEG).111

While the strategies of other countries focus primarily on either talent or upskilling with regard to AI, Finland’s stands out by emphasizing the need to “train, retain and attract AI talent through stronger investment and enhanced visibility of Finnish AI expertise” as AI transforms society.112 The Finnish approach to staying competitive globally in AI is to train the population and educate it on the potential impacts on society. According to Minister of Economic Affairs Mika Lintilä, the government acknowledges that Finland does not have the resource advantage that bigger countries have, so the goal is to become a leader in “practical applications of AI.”113 A good example of this approach to educating citizens is “Elements of AI,” a free and first-of-its-kind online course designed to raise AI literacy and to be accessible to all for training purposes.114 Recent efforts to educate the population also include the 1 percent scheme, which focuses on teaching 1 percent of the population (about 55,000 people) the basics of AI, while slowly increasing the number of people trained as the years go on.115

One aspect of Finland’s strategy that has been praised is the inclusion of a SWOT (strengths, weaknesses, opportunities, and threats) analysis. In its guidelines for creating an AI national strategy, the World Economic Forum highlights that the process should begin with such an assessment as it will help keep the strategic goals in line with what a given “country requires in terms of demographic needs, strategic priorities, urgent concerns, the aspirations of its citizens, its resource constraints and geopolitical consequences.”116

Further steps the government intends to take include creating a plan for bringing small and medium-sized enterprises (SMEs) on board, in addition to plans for Finland to partner with Estonia and Sweden in an effort to form Europe’s top laboratory for AI test trials.117 Finland’s goals in AI are part of a bigger strategic vision for the country. As Ilona Lundström, a director general at the Ministry of Economic Affairs and Employment and one of the masterminds behind the AI strategy, states: “We are using AI as the flagship project for a bigger kind of setup of themes of digitalization.”118 Recently, Finland announced plans to offer its online course on AI to all EU citizens free of charge.119

Estonia

As a pioneer in e-governance and one of the most digitally advanced countries in the world, Estonia has a strong technological foundation. The country also has an impressive history of producing unicorn start-ups—or privately owned companies valued at over $1 billion each—including Skype, Playtech, TransferWise, and Bolt—though at least one of these firms has since been acquired by a publicly traded corporation.120 The government is now seeking to build on its tech-savvy society with the help of AI. Estonian experts assessed the ways in which the private and public sectors could engage more with AI in the 2019 Kratt report.121 (Experts decided to refer to AI as Kratt, an Estonian mythical creature that is “devoted to serving its master but can become bad if left idle.”)122 Its proposals informed a short-term (2019–2021) AI strategy. The government plans to invest at least 10 million euros to implement the strategy and aims to produce a long-term plan in the near future.123 Estonia strongly emphasizes public-sector adaptation, increased investment, R&D, and the promotion of ethical and trustworthy AI. Its short-term approach might be beneficial as a test case for deriving best practices and lessons to inform a more long-term strategic approach.

The strategy also places more emphasis on AI in the public sector, where Estonia strives to have a competitive edge as it believes this aspect of AI receives the least attention from the rest of the world.124 The government’s goal in applying AI solutions in the public sector is to “increase the user-centeredness of services, improve the process of data analysis, and make the country work more efficiently.”125 The country has also already been actively implementing AI across society. In late 2019, it introduced the first of its AI-based applications, a text-analysis tool126 embedded in the government’s public code repository, which makes software solutions built for the public more effective and accessible. Additionally, according to the government, at least twenty-three AI solutions have been deployed in the public sector since October 2019, with the goal of having at least fifty AI use cases by the end of 2020.127 One of these solutions uses predictive analytics to help decide where to send police officers to direct traffic. The most ambitious project planned is an “AI Judge” that will help decide some small-claims cases in court.128

Czechia

Czechia released its national strategy for AI in May 2019.129 Largely in line with the European Commission’s “Coordinated Plan on Artificial Intelligence,” the Czechs’ national approach seeks opportunities for deeper engagement with EU-level initiatives and aims to make the country an innovation leader in the field. The strategy splits its objectives into three parts: short term (by 2021), medium term (by 2027), and long term (by 2035).

Similar to the EU-level and national strategies of other member states listed above, Czechia aims to develop responsible and trusted AI in accordance with EU guidelines, invest in R&D, support startups, and identify opportunities for economic growth by increasing employment and upskilling workers. Recently, the Czech Institute of Informatics, Robotics and Cybernetics has been given the opportunity to establish a European Center of Excellence for Industrial Robotics and Artificial Intelligence, with almost 50 million euros in startup support from the Research and Innovation Center on Advanced Industrial Production.130 Czechia showcases the EU’s potential to set a strategic vision for member states for developing trustworthy AI, implementing EU-wide guidelines, and creating added value when it comes to a common European approach.

Other European National Efforts

Other European countries that have recently published AI strategies include Austria, Belgium, Denmark, Italy, Lithuania, Luxembourg, Malta, Norway, Poland, Portugal, and Spain.131 Meanwhile, several other countries are working on one. In addition, Austria, Ireland, and Italy have established national AI task forces. Portugal, Romania, and Spain have included AI in their national digitization strategies. Most of these initiatives emphasize, among other things, strengthening national research as the basis of AI; setting up AI centers; committing to supporting industry and SMEs; and improving data sharing among the public, industry actors, and the public sector.132

National AI Strategies: More Than the Sum of Their Parts?

EU member states’ individual national strategies sometimes contain references to the EU’s collective AI plan, but some have also influenced the EU strategy. While Europe’s diverse AI strategies all tend to focus on labor-market impacts, many of them also stress ethical challenges in the commercialization, scaling, and research of AI. The fact that EU member states have very different levels of readiness—as some do not yet have AI strategies while others have fully developed AI plans—further demonstrates the fragmentation of the European digital market, with Northern European countries generally leading the way and Southern and Eastern European ones tending to lag behind (with some notable exceptions).133

While many of the European national strategies recognize AI as an engine of economic growth and accelerator of digital change, they hardly share a common definition of what constitutes AI. This matters from a regulatory point of view as AI might run the risk of becoming a definitional moving target, especially in view of a future EU regulatory framework and the design of any new legal instruments. Moreover, any common definition would need to be flexible enough to capture the evolving nature of the technology while providing enough legal clarity for enforcement.

The emphasis on ethical and sustainability issues in the national strategies also reflects a shared understanding of the risks (societal, economic, or security) that AI entails. The broad goals set out in many of these strategies seek to impact different levels of society and the economy at once, a state of affairs that increases the complexity of policies required to achieve these goals as well as the difficulty of measuring success. Because of their fuzzy definitions of AI, some European strategies also lack measurable goals and benchmarks (although these are inherently difficult to craft).

The European strategies differ most from non-European ones in their heavy focus on ethical, trustworthy, and sustainable development and use of AI systems and services. This normative framing prioritizing a human-centric approach to AI—if further clarified, deepened, and supported by legal certainty—could provide a much-needed competitive advantage for European AI products and services by inspiring more consumer confidence and by providing a roadmap for the regulation of such products. At the same time, the goal should be to avoid hollow rhetorical declarations or overregulation that could impede innovation, commercialization, and uptake. In particular, uncertainty and vague declarations risk scaring away startups and keeping venture capitalists from investing in Europe.

In addition to strictly national efforts, there are also positive examples of cooperative agreements on AI between individual member states—such as the ones between France and Germany and between France and Finland. These agreements are important examples of member states working bilaterally to strengthen the EU’s cross-border cooperation in AI. New cooperation platforms with relevance for AI innovation between European countries are also emerging. One prominent example is the Franco-German Joint European Disruptive Initiative (JEDI) launched in 2018. Modeled after the U.S. Defense Advanced Research Projects Agency (DARPA), this body will identify and support technological challenges likely to disrupt existing industries. For 2018, JEDI was allocated an initial budget of 235 million euros with the goal of increasing funding to 1 billion euros a year once the initiative is fully operational.134

Assessing the EU’s Approach to AI

In addition to national-level efforts, the EU has developed policies and funding opportunities to help advance Europe’s role on AI in recent years.

EU Funding for AI Research and Development

One major role the EU plays is through its funding mechanisms. As part of the EU budget now being negotiated, the multiannual financial framework for 2021–2027, the European Commission has proposed a Digital Europe Program as part of the Single Market, Innovation and Digital chapter. This puts forward 9.2 billion euros for investments in high-performance computing and data, AI, cybersecurity, and advanced digital skills.135 By building on the Digital Single Market Strategy that the European Commission launched in 2015, the goal is to prepare citizens for the digital age and to boost the digitalization of Europe. Since 2004, AI has been included in EU funding for R&D, with a heavy focus on robotics. In the 2014–2020 period, investments in robotics increased by 700 million euros.136 This came on top of 2.1 billion euros in private investment in the EU research program called the Partnership for Robotics in Europe, or SPARC euRobotics.137 It is still unclear how the economic fallout from the coronavirus pandemic will affect the size and priorities of the long-term EU budget.

Under the current EU budget (2014–2020), the European Structural and Investment Funds provide 27 billion euros for skills development, out of which the European Social Fund invests 2.3 billion euros in digital skills.138 The AI4EU project—a grant agreement between the European Commission and seventy-nine private-sector and academic institutions in twenty-one member states—was launched in January 2019.139 It aims to mobilize and promote the European AI ecosystem and provide access to essential AI resources for all users in the EU. The 20-million-euro project will run for three years and will try to establish a network of AI knowledge, tools, and research across the EU. The European Investment Fund has promised to make available 100 million euros in 2020 to support promising AI startups.140 Jean-Eric Paquet, the European Commission’s director general for research and innovation, has said that the fund will seek to close the investment gap the EU faces “by providing equity and grant funding to early stage firms in so-called deep tech, such as manufacturing, biotechnology, health-tech and artificial intelligence.”141 This allocation may be followed by a 3.5 billion euro investment fund in 2021, which will invest in early-stage technology in an effort to increase innovation.142

Between 2014 and 2017, the EU invested around 1.1 billion euros in AI-related research under the multi-annual R&D funding program Horizon 2020.143 Under Horizon Europe, Horizon 2020’s successor, the European Commission proposes to invest 100 billion euros for research and innovation to strengthen the EU’s scientific and technological bases; boost its innovation capacity, competitiveness, and jobs; and deliver on citizens’ priorities and sustain Europe’s socioeconomic model and values.144 The strategic planning process will focus in particular on the pillar addressing global challenges and European industrial competitiveness, which includes a digital and industry cluster with key digital technologies, AI and robotics, and advanced computing and big data as key areas of intervention.

The new European Commission has made investing in AI a top priority as part of its efforts to strengthen the EU’s digital sovereignty, so that users will have more choice and control over which IT products and services they use. The February 2020 white paper on AI notes that in the last three years “EU funding for research and innovation for AI has risen to 1.5 billion [euros], i.e. a 70% increase compared to the previous period,” but it equally points out that “investment in research and innovation in Europe is still a fraction of the public and private investment in other regions of the world.” More precisely, it notes that “3.2 billion [euros] were invested in AI in Europe in 2016, compared to around 12.1 billion [euros] in North America and 6.5 billion [euros] in Asia.”145

To bridge such a gap, the European Commission plans to revise the December 2018 “Coordinated Plan on Artificial Intelligence” by the end of 2020 as a follow-up to the public consultation on the white paper. In line with the white paper’s recommendations, it plans to lay the groundwork for increased coordination with EU member states and to attract over 20 billion euros in annual AI investment in the EU over the next decade.146 As far as SMEs and startups are concerned, the European Commission and the European Investment Fund will launch a “pilot scheme of 100 million [euros] in Q1 2020 to provide equity financing for innovative developments in AI.”147 The negative economic fallout from the coronavirus pandemic raises questions about how much money the next multiannual financial framework can allocate toward AI R&D given potential budget cuts and diverging priorities. Nevertheless, the European Commission has recently identified AI as one of the core areas where the EU needs to invest more as part of the post-pandemic economic recovery.148

The European Commission’s Evolving Approach

Besides providing funding for R&D, the European Commission has sought to shape a common approach to AI in other ways (see table 2).

Table 2: Timeline for the EU’s AI Strategy
April 10, 2018: Member states sign a “Declaration of Cooperation on Artificial Intelligence.”

They agree to work together on the most important issues raised by AI, from ensuring competitiveness in R&D to dealing with the resulting social, economic, ethical, and legal questions.
April 25, 2018: The European Commission adopts the “Communication on Artificial Intelligence.”

This document lays out the EU’s approach to AI. It is characterized by its unique emphasis on ethical AI and aims to increase the EU’s technological and industrial capacity as well as AI uptake by the public and private sectors, to prepare Europeans for the socioeconomic changes brought about by AI, and to ensure that an appropriate ethical and legal framework is in place.
June 14, 2018: The European Commission appoints the AI HLEG.

Consisting of fifty-two experts on AI from academia, civil society, and business, the group advises the European Commission on the implementation of its AI strategy.
December 7, 2018: The European Commission presents a “Coordinated Plan on AI.”

Prepared with member states to foster the development and use of AI in Europe, this document notes that the EU is lagging behind in private investments and “risks losing out on the opportunities offered by AI” without significant efforts.

The plan focuses on the need for strengthened cooperation among all involved parties in key areas: encouraging more investment and financing for startups and innovative SMEs; increasing excellence in trustworthy AI technologies and the broad diffusion of such technologies; adapting learning and training programs and systems to better prepare society for AI; building a European data space for AI for Europe; developing ethical guidelines, while also ensuring an innovation-friendly legal framework; and tackling security-related aspects of AI applications.149
January 9, 2019: The AI4EU project launches.

The AI4EU project brings together seventy-nine top research institutes, SMEs, and large enterprises in twenty-one countries to build a focal point for AI resources, including data repositories, computing power, tools, and algorithms.150 It aims to offer services and provide support to potential users of the technology as well as help them test and integrate AI solutions in their processes, products, and services.
April 8, 2019: The HLEG publishes the “Ethics Guidelines for Trustworthy AI.”

The guidelines put forward a human-centric approach to AI and list seven key requirements that AI systems should meet in order to be trustworthy.151
April 8, 2019: The European Commission issues a “Communication on Building Trust in Human Centric AI.”

This indicates the seven requirements that all AI applications should comply with to be considered trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination, and fairness; societal and environmental well-being; and accountability.152 The principles identified in the communication come from the HLEG with the primary aim of drafting AI ethics guidelines based on the existing regulatory framework, which should be applied by all developers, suppliers, and users of AI.
June 26, 2019: The HLEG publishes “Policy and Investment Recommendations for Trustworthy Artificial Intelligence.”

This document puts forward thirty-three recommendations that can guide trustworthy AI toward sustainability, growth and competitiveness, as well as inclusion—while empowering, benefiting and protecting human beings.

The recommendations aim to help the European Commission and member states to update the coordinated plan by the end of 2019.153
June 26, 2019: The HLEG launches the pilot phase of the “Assessment List of the Ethics Guidelines for Trustworthy AI,” which is to last until December 1, 2019.
February 19, 2020: The European Commission releases the “White Paper on Artificial Intelligence—A European Approach to Excellence and Trust,” a “European Strategy for Data,” and “Shaping Europe’s Digital Future.”

To achieve an ecosystem of excellence, the European Commission proposes to streamline research, foster collaboration between member states, and increase investment in AI development and deployment. These actions build on the coordinated plan on AI from December 2018.

To achieve an ecosystem of trust, the European Commission presents options on creating a legal framework that addresses the risks for fundamental rights and safety. This builds on the Ethics Guidelines for Trustworthy AI, which were tested by companies in late 2019. A legal framework should be principles-based and focus on high-risk AI systems to avoid unnecessary obstacles to innovating companies.

The European Commission conducted a public consultation on the white paper until May 31, 2020, and it plans to present proposals for a regulatory framework in December 2020.

An early effort came in April 2018 when twenty-four member states and Norway agreed on a “declaration of cooperation on artificial intelligence.”154 Since then, the other members and the UK have also signed on to this statement.155 The nonbinding declaration called for “a comprehensive and integrated European approach on AI” and included language on cooperation to boost the EU’s technological and industrial capacity by improving access to public-sector data, addressing socioeconomic challenges, and ensuring an adequate legal and ethical framework.

The declaration mentions the new threats posed by AI before discussing the opportunities. This stands in sharp contrast to other countries’ emphasis on opportunities, though it is hardly surprising given that the purpose of the EU document is to justify future regulations. In 2018, Mariya Gabriel, then commissioner for digital economy and society, said the “enhanced cooperation efforts will focus on reinforcing European AI research centers, creating synergies in R&D&I [research and development and innovation] funding schemes across Europe, and exchanging views on the impact of AI on society and the economy.”156 The signatory states have engaged in a continuous dialogue with the European Commission, which acts as a facilitator, to promote a new framework for collaboration between countries. The declaration illustrates the commitment of member states to advancing the EU’s role in AI and enhancing coordination among themselves.

Later that same month, the European Commission put forward more concrete ideas about a European approach in its “Communication on AI.”157 This was based on three pillars that then formed the backbone for the subsequent coordinated action plan on AI.158 The first pillar is keeping ahead of technological developments and encouraging acceptance of AI by the public and private sectors. The second pillar is to address socioeconomic changes brought about by AI through, for instance, efforts to attract AI talent, support digital skills and STEM education development, and encourage member states to modernize their education and training systems. The third pillar is to ensure an appropriate ethical and legal framework and legal clarity for AI by developing ethics guidelines and guidance on the interpretation of the EU Product Liability Directive with regard to AI.

Building on the “Communication on AI,” the European Commission released a “Coordinated Action Plan” in December 2018.159 This nonbinding document emphasizes that stronger coordination is essential for Europe “to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI.” It proposes joint actions for closer and more efficient cooperation between the member states, Norway, Switzerland, and the European Commission in four key areas.160 The first area is maximizing investment through increased coordination and partnerships. Joint actions to achieve this goal include all member states having AI strategies in place by mid-2019, a new EU-AI public-private partnership, a new AI scale-up fund for startups and innovators, and the development of world-leading AI centers (through digital innovation hubs and the EU Innovation Council pilot initiative).161 The second area is creating European data spaces to make more data available and help share this data seamlessly across borders while complying with the GDPR. As part of this effort, a support center for data sharing went live in July 2019. The third area is fostering talent, skills, and lifelong learning. Measures include supporting advanced AI-related degrees through dedicated scholarships, supporting digital skills and learning, ensuring AI’s presence in education programs to ensure human-centered AI development, and attracting talent through the Blue Card system. The fourth area is developing ethical AI and ensuring trust. The aim here is to develop AI technology that respects fundamental rights and ethical rules with the ambition to bring this ethics-driven approach to the global stage.

At the time, then vice-president for the digital single market Andrus Ansip noted that the EU would work to pool data and coordinate investments with the aim of reaching at least 20 billion euros in private and public investments by the end of 2020.162 Commissioner for Digital Economy and Society Mariya Gabriel added that the coordinated action plan would help Europe compete better globally “while safeguarding trust and respecting ethical values.”163 Its full implementation will be a key task for the current European Commission, which will serve until 2024, and will be contingent on funding as part of the next multiannual financial framework.

The High-Level Expert Group on AI

To support the articulation of a European strategy, the European Commission established the AI HLEG in June 2018, which consists of fifty-two figures from academia, the private sector, and civil society.164 The group launched a consultation process in December 2018, and in April 2019 it published “Ethics Guidelines for Trustworthy Artificial Intelligence,” outlining the EU’s approach to setting ethical guidelines and increasing investment in AI.165 The guidelines define “trustworthy AI” as having three principal components: lawfulness, ethical adherence, and robustness.166 The guidelines include many of the common themes in member states’ strategies, such as transparency, safety, and fairness and nondiscrimination, but they also cover some issues less often associated with AI such as the environment.

Additionally, the guidelines list seven key requirements based on fundamental rights and ethical principles that AI systems should meet to be considered trustworthy. These are: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination, and fairness; societal and environmental well-being; and accountability. The HLEG states that this “trustworthy approach” is key to enabling what it calls “responsible competitiveness” by providing a foundation of trust for those who will be affected by AI systems. While the guidelines were a good first step toward what the EU wants to accomplish, the fact that they are nonbinding may limit their usefulness, as self-regulation seems unlikely to work (as seen in dealing with privacy issues).167 Companies and government entities using AI technology should be held accountable under the law, not merely to self-regulation or industry codes of ethics, so that consumers can understand and potentially challenge decisions made using AI.

In June 2019, the HLEG published a document called “Policy and Investment Recommendations for Trustworthy Artificial Intelligence.”168 This document calls for significant new investment and resources dedicated to transforming the regulatory and investment environment for trustworthy AI in Europe. The plan consists of thirty-three recommendations to guide trustworthy AI toward “sustainability, growth and competitiveness, as well as inclusion—while empowering, benefiting and protecting human beings.” These recommendations are directed at a wide range of actors, including governments, SMEs, and academia.

The first part of the report focuses on the four main areas where there can be a positive impact from trustworthy AI: humans and society at large, the private sector, the public sector, and research and academia. The recommendations serve as a basis for providing actions that can be taken while striving toward creating trustworthy AI, in addition to highlighting critical areas where Europe is falling behind, such as in the area of talent acquisition and retention.169 In addition to the recommendations, the HLEG also assesses four main enablers for achieving positive impact in the areas of focus: data and infrastructure, skills and education, appropriate governance and regulation, and funding and investment.

The guidelines have been criticized on some grounds. Some experts argue that the EU cannot compete on ethics and regulation of AI alone, especially when it lacks the necessary ecosystem and infrastructure for AI.170 Others have pointed to undue influence by industry actors.171 Along with the ethics guidelines, the HLEG released an assessment list that offers guidance on putting the guidelines into practice while also collecting feedback from organizations working on AI. The list was piloted across Europe from June to December 2019, with over 450 stakeholders registered in the piloting process as part of the European AI Alliance, a broad multi-stakeholder platform where members of the public can engage with the HLEG on AI-related policymaking.172 Common critical feedback concerns the difficulty of understanding and explaining how AI algorithms make decisions in order to build public trust, vague transparency requirements, and redundant requirements.173 The HLEG is now in the process of updating and revising the assessment list based on stakeholders’ feedback. Margrethe Vestager, the European Commission’s executive vice president for A Europe Fit for the Digital Age, has claimed that “a very high number” of private companies have endorsed the HLEG’s principles.174 The HLEG will likely publish a detailed report and a revised version of the assessment list based on the consultation period in mid-2020 to help the public and private sectors operationalize the guidelines and to guide the design, development, and deployment of AI technologies.

Although they can be useful, multi-stakeholder groups are often difficult to manage and may become influenced by specific interest groups. There have been critical voices within the HLEG arguing that its agenda has been captured by business interests, due to the imbalanced composition of the group. Some external voices have also challenged the idea of regulating AI risks, citing concerns over stifling innovation and hampering startups, and instead they are calling for red lines defining how AI should not be used.175

The New European Commission’s AI Plan

The new European Commission has already demonstrated a strong interest in tackling AI issues. President Ursula von der Leyen has called for hard rules governing AI and pledged “to put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence” in her first one hundred days in office (on February 19, 2020, the commission unveiled a new white paper on AI, but no legislation has yet been put forth).176 She has also emphasized completing the digital single market.177 In her address at her confirmation hearing before the European Parliament, Executive Vice President Vestager emphasized that Europe’s regulatory stance is what will distinguish it from its competitors: “Some say China has all the data and the US has all the money. But in Europe, we have purpose [and] a lot to build on.”178

Although the EU has made progress on AI in recent years, most national and EU-level strategies and initiatives are very recent. It therefore remains to be seen whether they will receive enough financing and whether and how they align and interact. More broadly, the approach at the national and EU levels reflects a European desire to develop ethical, human-centric, and trustworthy AI and to become a leader in these areas. This commitment is not only demonstrated by the AI-related policy documents but also by the EU’s regulatory leadership on data privacy and other technology-related issues through legislation like the GDPR.

There is a clear connection between a more comprehensive vision for the EU’s ambition to become a leader in responsible AI and the uptake of technologically robust and trustworthy European AI-enabled products engineered to respect basic human rights and designed to mitigate potential negative consequences. From the European Commission’s perspective, the brand of “trustworthy AI,” by laying the foundation for ethical guidelines for the creation and use of AI, could become the “silver bullet” in the EU’s strategy to catch up with the United States or China.179 The underlying logic behind such a strategy is that developing AI technologies by adhering to high ethical, democratic, and human-rights standards will eventually provide European developers and manufacturers with a much-needed competitive edge, with consumers and users ultimately favoring their products over those from the United States or China.

The EU’s White Paper on Artificial Intelligence

On February 19, 2020, the European Commission published an ambitious package on the EU’s digital policy, comprising the “White Paper on Artificial Intelligence—A European Approach to Excellence and Trust,”180 the “European Strategy for Data,”181 and “Shaping Europe’s Digital Future.”182 In July 2019, von der Leyen announced her ambition as the commission’s incoming president to regulate AI within one hundred days of the new European Commission taking office.183 The white paper was presented on the eightieth day after it took office as part of a consultative exercise that would enable public feedback until May 31, 2020, and is meant to inform the European Commission’s legislative vision for the coming years. Vestager was tasked with developing a comprehensive European data and AI strategy that balances regulatory precautions and technological innovation in the digital age. But can both objectives be achieved?

The crucial question is how to write the rules of AI according to an ethical and human-rights agenda without hampering innovation or harming uptake of AI technologies in Europe. Based on the white paper, the European Commission seems to believe that it can realize the twin objective of a regulatory and an investment-oriented approach. The aim is not overregulation but practical safeguards, accountability, and the possibility of human intervention in high-risk AI cases.

Building on the European AI strategy presented in April 2018, the white paper covers a host of issues, such as developing an AI ecosystem that benefits European societies and economies, putting forward a common European approach to AI to accomplish sufficient scale and avoid single market fragmentation, and creating a type of European AI grounded in values and fundamental rights such as human dignity and privacy protection. The European Commission pledges to preserve the EU’s technological leadership, enable scientific breakthroughs, and make sure that new technologies will improve the lives of European citizens while respecting their rights.

Moreover, it comes as no surprise that the white paper is permeated with rhetoric on sovereignty building. In an optimistic op-ed that accompanied the document’s launch, von der Leyen underlined that “tech sovereignty” is at the forefront of the EU’s digital future as “the capability that Europe must have to make its own choices, based on its own values, respecting its own rules.”184 The white paper points to a resurgent geopolitical fear that Europe will become increasingly dependent on foreign technologies if it does not invest in competitive homegrown ones, create a robust digital single market to support innovation and growth in high-tech sectors, and train and retain talent to secure excellence and innovation.

The goal of the white paper is to develop a comprehensive European policy and governance approach to AI to “become a global leader in innovation in the data economy and its applications.”185 According to the document, the main building blocks to achieve this goal are an “ecosystem of excellence” that mobilizes up to 20 billion euros of private- and public-sector resources along the entire value chain, from research and innovation to accelerating the uptake of AI-based solutions, and an “ecosystem of trust” that ensures legal certainty for public and private organizations as well as rules for protecting fundamental rights and consumer rights. The European Commission notes that the wider existing bodies of legislation in member states and the EU are already applicable to AI, but it identifies potential needs for new legislation or adjustments due to the specificities intrinsic to AI, its evolution, the opacity of black-box algorithms, and unintended or negative consequences.

Overall, the white paper proposes to build on the existing EU policy framework and points to three key pillars for a solid European AI strategy. The first one is bolstering the EU’s digital capacities and technological and industrial strategic investment in AI, cybersecurity, quantum computing, and 5G. The second one is harnessing the potential of nonpersonal anonymized data by developing a legislative framework and operating standards for European data spaces to allow governments and researchers to store their data and access trusted data shared by others. The third one is advancing a risk-based approach for “high-risk AI.” The document also identifies two main risk areas: to fundamental rights—including privacy protection, personal data, and nondiscrimination—and to the safety, legal clarity, and effective functioning of the liability regime.

The white paper does not address the development and use of AI for dual-use or military purposes, eschews any substantive discussion of automated weapons, and tones down references to a potential ban on facial recognition and surveillance technologies. In a draft of the document, the European Commission raised the possibility of a ban on real-time facial recognition, but the final version sees no need for a moratorium on what is now labeled “remote biometric identification,” while recognizing that such facial recognition should be used only where there is a “substantial public interest.”

The proposed regulatory framework in the white paper is constructed around the idea of the “high-risk” development, applications, and uses of AI that involve significant risks for fundamental rights, consumer rights, and safety. The key characteristics include requirements related to training data, data and record keeping, the provision of information, technological accuracy and robustness, and human oversight, as well as specific requirements for certain AI applications, such as those used for remote biometric identification. For low-risk AI applications that are not subject to these mandatory requirements, the European Commission proposes a voluntary labeling scheme for economic actors, in addition to applicable legislation. Moreover, a future regulatory framework should not create disproportionate burdens, and a risk-based approach may help ensure proportionality, especially in the case of SMEs. Finally, an EU-wide governance structure will be needed to ensure varied stakeholder participation across Europe and to harness cross-border and sectoral expertise networks for an effective enforcement system and oversight of the future AI regulatory framework.

A Coherent European Approach to AI in the Making?

The EU is striving to provide a more coherent European approach to AI. So far, European countries have mostly developed their own approaches without much consultation with one another or a single overarching framework. Consequently, national approaches differ considerably from one another, which makes it harder for various European actors to combine their efforts and build on their common strengths. Though similar themes tend to permeate most national strategies, there are also crucial differences in their approaches and areas of emphasis. For example, while most strategy documents refer to the need for an ethically informed approach to AI, only a few mention issues of inequality, human rights, manipulation of information and disinformation, safeguards against massive surveillance, and the overall control and abuse of power.

While the diversity in national AI approaches need not be problematic—and can even be an asset if it allows countries to devote particular attention to areas where they might have comparative advantages—it does raise the issues of coordination and coherence. Identifying and capitalizing on shared priorities in national strategies presents an opportunity to coordinate policies, pool resources, identify individual strengths, and create a bigger impact. But this would require creating a forum for facilitating the exchange of best practices between EU member states. In fact, it is not clear whether the various national approaches to AI add up to anything resembling a coherent Europe-wide approach yet.

As a political and economic union, the EU has the potential to help facilitate coordination of common goals, interests, and values pertaining to AI. The December 2018 coordinated action plan was a first attempt at aligning various national approaches, but it needs to be updated to reflect policy developments that have taken place since it was released. In particular, the European Commission should seek to raise the role of member states in implementing a common approach to AI by building synergies and combining their efforts. In this regard, the white paper could assist member states by providing a common reference point. This is especially important for member states that still lack a national AI strategy. More widely, by aiming to foster not one but two ecosystems, of excellence and of trust in AI, respectively, the EU in general and the European Commission in particular have taken a clear strategic stance on AI.

This ecosystem-in-the-making approach is premised on a two-tiered strategy. First, this approach seeks to streamline research and innovation in AI, foster increased collaboration and coordination between member states, and increase investment in the development and uptake of AI technologies. Second, this approach aims to ensure that such technologies comply with European fundamental values, norms, and legal requirements by putting forward policy options for an EU regulatory framework, with special consideration given to high-risk technologies. Though it is not entirely clear how these two goals relate to each other, the ecosystem metaphor suggests an awareness at the EU level that an organic and complex strategy is needed to galvanize a variety of state and nonstate actors across the public and private sectors, keeping in mind market and regulatory logics. If the EU manages to incentivize stronger collaboration to combat fragmentation and to promote its own innovation ecosystem, it could potentially take the lead in establishing the international legal regime pertaining to the use of AI.

Conclusions and Recommendations: Toward a More Coherent European AI Strategy

A European AI ecosystem needs many things to truly succeed, including significant amounts of investment; a favorable regulatory environment; and inputs such as talented people, vast quantities of data, and computational power. The development of AI comes with enormous opportunities but also considerable risks. Tackling these risks while not overregulating and trying to control AI requires a fine balance, clear definitions and guidelines, and concrete goals. Nonetheless, as the plethora of policy and regulatory initiatives that have materialized over the past couple of years demonstrates, Europe is well positioned to help establish best practices and set global standards and norms to steer the future direction of AI development toward applications that have meaningful value for societies and ensure the security of citizens. With its human-centric focus, the EU’s strategy is distinctive, given its emphasis on the trustworthy and secure development of AI in full respect of citizens’ rights, freedoms, and values—but it remains to be seen whether these concepts will actually be effective and useful or merely empty slogans.

The EU could take a first step toward a more coherent AI strategy by realizing its full potential on digitization. Estonia offers a notable example of how digitization can improve the public sector and citizen services in member states. Encouraging and funding similar developments in countries that have not yet fully embraced digitization is essential to ensuring that all member states reach similar levels of digital technology adoption. Moreover, the EU’s manufacturing base presents an excellent opportunity for broad AI adoption to get ahead in digitization and the Internet of Things. If the EU continues to lag in AI adoption and development, its ability to attract and retain top talent is likely to suffer as researchers choose to move elsewhere. This vicious cycle could be hard to break and would further slow Europe’s AI technology development and uptake.

Related to increasing the digitization of EU economies is the importance of striving to complete the digital single market. To this end, as the white paper on AI called for, the EU should redouble its efforts to avoid fragmentation. To complete the digital single market with common regulatory and legal frameworks, member states will need to engage in more sustained dialogue with one another as well as more substantive public-private dialogue. Such dialogue is needed to develop a common approach to research so as to capitalize on the diversity of member states and consolidate the EU’s assets in the public and private sectors.

A completed digital market would not only help companies to scale and gain access to more customers, but it could also make the EU more attractive to foreign investors and help close the AI investment gap between Europe and bigger spenders like China and the United States. Europe has a strong track record in data protection and in applying AI in robotics and the automotive sector, as well as an outstanding research base. The EU should capitalize on these assets in its approach to AI as part of building a leading digital ecosystem. Investing in becoming a digital destination and in modern digital infrastructure can further help Europe attract and retain top talent and foreign direct investment, and thus expand a digital ecosystem in which homegrown tech leaders could emerge and thrive, or so the plan goes. Using its funding instruments and ability to set business incentives, the EU could also increase support for AI startups. By also acting as an active facilitator of AI innovation, the EU could help overcome the fragmentation currently observed throughout the European AI landscape.

Moreover, while the EU is right to pursue a first-mover advantage on regulating AI, it must be careful about taking an overly legalistic and top-down approach. Its experience with data-privacy regulation in the form of the GDPR may serve as an example of the Brussels effect—establishing global norms for emerging technology—but it could also hamper the EU’s ability to be a leader in AI. Given that limited access to open data is a hurdle to developing the powerful algorithms AI requires, the risk is that overregulation and privacy restrictions could stymie innovation and reduce the amount of data available to Europe-based firms. Although the European Commission is increasing funding for AI, the GDPR puts tight restrictions on uses that involve personal data. If the costs and legal difficulties of using AI at an early stage increase, adoption of AI technologies by European firms may fall further behind the rest of the world. Moreover, when it comes to AI, the EU faces a regulatory dilemma: ensuring that legal requirements do not hamper AI R&D while also ensuring that innovation does not threaten legally protected interests and rights.186 European regulators must therefore strive to strike a careful balance between ethical AI policies and the dangers of overregulating a new technology.

In short, the EU’s strong emphasis on a more social, ethical, and consumer-friendly direction for AI development is a major asset. But regulation alone cannot be the main strategy. For Europe’s AI ecosystem to thrive, Europeans need to protect their research base, encourage governments to be early adopters, foster Europe’s startup ecosystem (including its international links), and both develop AI technologies and deploy them efficiently. Most importantly, to be able to take these steps, Europe needs to catch up on digitizing its economies and complete the establishment of the digital single market.


Complete the EU digital single market: European leaders should intensify their efforts to implement and complete the digital single market once and for all. A common European approach seems to be the sine qua non condition to scale up AI efforts across member states. As the European Commission’s white paper stated, several member states are envisaging options for national regulatory frameworks to tackle AI. Divergent national rules and initiatives increase the risk of fragmentation by creating new obstacles for economic actors that want to sell and deploy AI technologies and systems in the EU digital market.

Although significant progress has been made over the past few years, most recently with the proposed European data strategy, progress on the remaining priority areas is needed. The European Commission should prioritize this area in the next multiannual financial framework, encourage businesses to innovate and use AI technologies to develop new business models, and accelerate the digitization of businesses and economies to promote a more level digital playing field across Europe. New and emerging digital technologies are shaping the geopolitical landscape in unprecedented ways by concentrating power in digital monopolies, increasing certain actors’ leverage for economic statecraft over technologically dependent ones. Even so, the EU should be careful about erecting overly high walls around the digital single market, recognizing that foreign direct investment into Europe’s technology sector, technology transfers, strong partnerships with other countries, and access to the global value chain remain crucial to long-term success.

Balance technological sovereignty with the benefits of globally interconnected supply chains: Going it completely alone on AI is at best feasible only for the United States or China, and even for them this approach has potentially dire consequences. Nationalizing or territorializing supply chains and pursuing technological sovereignty at all costs poses risks that could unravel the interdependent global economy. While the EU has emphasized the need for more “technological sovereignty in key enabling technologies and infrastructure for the data economy,”187 it should also work toward deterritorializing the digital realm. For the EU, decoupling from globally interwoven supply chains would be equally inefficient, especially because there are so few European digital tech giants. The EU should seek to balance bolstering its own technological and digital capacity to reduce its dependence on others with working toward a global, common-good, ethics-driven approach to AI governance in the digital domain. The coronavirus pandemic has underscored the potential pitfalls of a data sovereignty strategy, since access to high-quality global data, not just European data, is necessary for training effective AI algorithms.188

Lead on setting standards and regulations: One of the EU’s greatest strengths is its ability to set global technical standards in a variety of fields (the Brussels effect). Europe should seek to become a global standard setter in AI and fully utilize multilateral platforms and partnerships with other countries. The real race is not to deploy AI technology as such but to set the regulations, guidelines, and best practices that ensure uses of AI take into account socioeconomic, legal, and ethical considerations. Here, the EU has certain advantages that it should seek to better capitalize on. The white paper on AI is a positive step in terms of setting the tone, outlining the normative agenda, tracking developments, informing future regulations, and taking public consultation seriously. However, it is also short on substance, particularly on ethics; further, it takes an overly legislative approach to dealing with risks, especially those posed by high-risk AI technologies, which it does not define clearly.

Even so, given the diversity of AI applications, it is essential to acknowledge that there is not always a one-size-fits-all solution. Moreover, taking an ex ante approach to AI regulation is difficult and could have unintended effects, such as stifling innovation and forward thinking in Europe. The EU should therefore tread lightly when it comes to regulating low-risk AI applications, where it might be better to let companies self-regulate. Comparisons to the EU’s rather successful experience with data-privacy regulation (the GDPR) may not be perfectly applicable to AI, which is still very much an emerging technology. It is also essential that any new EU regulations on AI avoid unnecessary overlap with existing frameworks such as the GDPR and that any new rules be enforceable, applicable, and focused on safeguarding European companies’ competitiveness.

Work to ensure that citizens trust their governments by building an AI ecosystem of trust: The EU should further explore how AI is changing the governmental landscape and how it can enhance or hinder citizen-facing services. At the same time, it should raise awareness of how governments deploy AI technologies, such as for surveillance, and of how citizens’ data is being used. In this respect, the coronavirus pandemic has further emphasized the need to assess the risks and opportunities of leveraging mass digital surveillance, data, and AI to address public health crises. Getting citizen buy-in during the early stages of AI adoption and use in the public sector is important and will help to maintain societal trust in governmental services. As seen in the private sector, consumers often have little choice about how their data is used or how their privacy is protected. Citizens’ trust in using AI in the public sector should therefore be accompanied by transparency in data usage, as well as by public debates on how to implement AI in a safe, fair, and ethical manner. Citizens’ trust in governments is difficult to maintain and easy to lose. The EU and its member states should create the conditions for transparency and unbiased decisionmaking and set a global example of how to tackle biases in algorithms. Member states can also learn from the efforts of countries like Estonia and Finland to educate their citizens about the role and impact of AI.

Promote a vibrant Europe-wide AI ecosystem: Although Europe faces fierce competition from other global players in developing AI, it should not lose sight of its unique capabilities, especially its strong research base. In particular, Europe needs to support this research base to maintain its leadership in AI research; protect the AI-startup environment by building a supportive digital ecosystem and facilitating access to funding; and retain, educate, and attract top talent. Efforts should include strengthening R&D funding for industry and academic AI centers, improving the capital markets union to facilitate more inter-European private investment in AI, encouraging new Europe-wide public and private research partnerships and networks, striving to retain top talent in Europe, promoting more and better-integrated regional clusters, and supporting the commercialization of AI R&D. Europe should also concentrate efforts in AI sectors where it has the potential to be a world leader, such as low-power electronics as well as “robotics and competitive manufacturing and services sectors, from automotive to healthcare, energy, financial services and agriculture.”189

Align national AI strategies: Forging a common EU framework for AI is important, especially for smaller member states, which can turn to such documents as a reference point. Not all European countries have yet developed national strategies, though additional ones are in the works, and many existing strategies remain vague and lack clear goals. In accordance with the EU coordinated plan, all member states are required to complete the development of national strategies, ensure that AI is included in national digital strategies, and increase coordination among themselves to align strategic goals and promote synergies. To be effective, national plans need to be concrete, contain detailed specific actions, and be coordinated with those of other member states and with EU initiatives. As more national strategies are developed, a concerted effort is necessary to align them to ensure coherence and consistency, while recognizing the value of maintaining a diversity of approaches. In this regard, an updated coordinated action plan should focus on condensing and aligning the different national AI strategies.

Protect dual-use technologies: China’s efforts to siphon off Western technology as part of its Made in China 2025 initiative raise questions about the lack of a coherent approach to protecting Europe’s technological and industrial base and research. This issue is particularly important as the United States has become more restrictive of late, causing many Chinese investors to look to Europe instead for access to cutting-edge technology. Though the EU has recently adopted a common framework for screening foreign investments, more attention must be devoted to promoting and protecting sensitive technologies with potential dual-use applications. Such efforts should ideally be coordinated with those of like-minded non-European countries such as the United States, Japan, and Australia. Additionally, as the EU implements its new investment-screening mechanism, special attention should be paid to ensuring that key technologies such as AI are treated as a critical sector to protect. The mechanism, together with member-state-level frameworks to protect intellectual property rights and prevent the outflow of critical technologies, should play an increased role in preserving the EU’s innovation prowess and digital sovereignty. Finally, European entities should become more aware of China’s AI strategy and apply greater scrutiny to partnerships with Chinese technology firms and research bodies.

Ensure close EU-UK cooperation on AI: Given the UK’s leading role in AI in Europe, ensuring its continued close collaboration with the EU on AI is essential. More needs to be done to achieve a frictionless digital ecosystem between the EU and the UK in which AI can progress. At the time of writing, the UK government seems keen on leaving the single market and pursuing regulatory divergence from the EU, which would make it harder for the two sides to collaborate and develop common data schemes, particularly in the absence of a future UK-EU trade agreement.190 One practical step would be to allow UK universities and researchers to participate in EU-funded research projects. A potential model is Israel’s association with the EU’s research and innovation frameworks, which has allowed Israeli researchers to participate in Horizon 2020 and cooperate with the Joint Research Centre.191

Enhance transatlantic dialogue on AI: Instead of pursuing a “third way” between the United States and China on digital policy or overly stressing the need for digital sovereignty, a key component of Europe’s efforts must be actively engaging the United States and other democratic countries around a common, or at least complementary, set of AI principles based on shared values and norms. The EU’s white paper recognizes this, stating that there is a need to “continue to cooperate with like-minded countries, but also with global players.”192 The white paper and the U.S. government’s new AI principles provide an opportunity for fruitful transatlantic dialogue on aligning approaches. The United States has also signaled its support for the AI principles of the Organization for Economic Co-operation and Development (OECD) and for the G7’s Global Partnership on AI, which provide a good baseline for transatlantic discussions as well.193

A dedicated transatlantic working group on AI, consisting of key government, private-sector, and civil-society representatives, could be an effective platform for joint action and collaboration on developing new frameworks, standards, and ethical guidelines for AI. Such conversations could potentially also take place in a trilateral format with Japan. Moreover, the EU and the United States should seek to resolve their disagreements over privacy issues, taxation of tech companies, and trade tariffs, and they should promote a transatlantic AI market. Although the EU has had fruitful exchanges with the Trump administration on AI regulatory matters, a new Democratic administration starting in 2021 would likely be more interested in actively engaging with the EU on digital issues.

Engage global stakeholders on the ethical dimensions of AI: The EU’s ethics guidelines on AI are a major asset that it should leverage in its outreach to other international stakeholders to promote the global adoption of such guidelines and frameworks for the sustainable and responsible development and use of AI systems and services. These conversations should ideally start with representatives of like-minded countries,194 such as Japan, that have similar commitments to human rights and democracy.195 Japan has developed Social Principles of Human-Centric AI that have much in common with the EU’s approach on issues such as security, safety, privacy, and transparency.196 That said, the EU should not shy away from raising AI ethics in conversations with China and other global players.197 As proposed in the white paper, it should also explore how multilateral forums such as the OECD (especially its AI Policy Observatory, which provides a platform for sharing best practices),198 the International Organization for Standardization, the UN Secretary General’s High-Level Panel on Digital Cooperation, the G7’s Global Partnership on AI,199 and the G20 can be leveraged for these purposes.

Consider AI a part of European strategic autonomy: AI will have potentially massive strategic implications, including for weapons technology, modern warfare, and cybersecurity, and AI leadership could become essential for the effective development and deployment of military forces. While AI could help the EU enhance its detection, protection, and preparation capabilities against security and defense threats and risks,200 there is a risk of a race to the bottom on AI safety if businesses and countries ignore ethics and reliability concerns in the rush to be first. As the United States and China look to AI for maintaining or achieving military superiority, Europe risks seeing its own security decrease and the ability of its militaries to remain interoperable with U.S. forces decline. For example, in September 2018, DARPA announced that the U.S. Department of Defense would invest up to $2 billion over the following five years in new programs advancing AI, stepping up the technological race with China.201

While military applications of AI have major ethical drawbacks, Europe must ensure it stays competitive in this area too. The debate around strategic autonomy in Europe would do well to include a stronger focus on emerging technologies such as AI, which are largely absent from the continent’s military debate.202 New defense initiatives such as the European Defense Fund could be instrumental in funding EU projects focusing on disruptive AI applications in the military sector, but this must be supplemented with the development of military doctrine for AI applications. Most importantly, given the coronavirus pandemic and the likelihood that it will be a game changer for budgets, the EU should not lose sight of AI as a strategic priority and should stick to its ambitious digital agenda.


The authors would like to express their thanks to Thomas Carothers, Michael Nelson, Peter Fatelnig, and Eline Chivot for their many helpful comments and suggestions on earlier drafts of the paper. Anna Bosch provided valuable research assistance during the initial research phase.


1 There is no universal definition of artificial intelligence. This paper uses the following one: “Artificial Intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” See Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, and Oren Etzioni, et al., “Artificial Intelligence and Life in 2030,” Stanford University One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, September 2016, 12,

2 Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui, and Raoul Joshi, Notes From the AI Frontier: Modeling the Impact of AI on the World Economy (Washington DC: McKinsey and Company, 2018), 1; and World Bank, “GDP (current US$) - China,” 2018,

3 Michael Horowitz, Elsa B. Kania, Gregory C. Allen, and Paul Scharre, Strategic Competition in an Era of Artificial Intelligence (Washington, DC: Center for a New American Security, 2018).

4 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust,” 2020,

5 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI,” April 8, 2019,; and European Commission, “Member States and Commission to Work Together to Boost Artificial Intelligence ‘Made in Europe,’” December 7, 2018,

6 Thierry Breton, “Hearing of Commissioner-Designate Thierry Breton,” European Parliament, press release, November 14, 2019,

7 European Commission, “Building Trust in Human-Centric Artificial Intelligence,” April 8, 2019,

8 Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford, UK: Oxford University Press, 2019).

9 EU Agency for Fundamental Rights, “Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement,” 2019, 1–34,

10 Axelle Lemaire, Romain Lucazeau, Tobias Rappers, Fabian Westerheide, and Carly E. Howard, “Artificial Intelligence – A Strategy for European Startups: Recommendations for Policymakers,” Roland Berger and Asgard – Human Venture Capital, 2018, 7,

11 European Patent Office, Patents and the Fourth Industrial Revolution: The Inventions Behind Digital Transformation (Munich, Germany: European Patent Office, December 2017),$File/fourth_industrial_revolution_2017__en.pdf. See also World Intellectual Property Organization (WIPO), WIPO Technology Trends 2019: Artificial Intelligence (Geneva: World Intellectual Property Organization, 2019),

12 Christina Larson, “China’s Massive Investment in Artificial Intelligence Has an Insidious Downside,” American Association for the Advancement of Science, February 8, 2018,

13 Echo Huang, “China Has Shot Far Ahead of the US on Deep-Learning Patents,” Quartz, March 2, 2018,

14 CB Insights, “Top AI Trends to Watch in 2018,” 2018,

15 Bughin, Seong, Manyika, Hämäläinen, Windhagen, and Hazan, Notes From the AI Frontier, 38.

16 European Commission, “Creating a Digital Single Market - European Commission Actions Since 2015,” February 15, 2019,

17 European Commission, “Geo-blocking,” September 20, 2019,

18 European Commission, “Free Flow of Non-Personal Data,” February 24, 2020,

19 European Commission, “The European High-Performance Computing Joint Undertaking- EuroHPC,” February 14, 2020,

20 European Commission, “Digital Education Action Plan,” January 17, 2018,

21 European Commission, “FinTech Action Plan: For a More Competitive and Innovative European Financial Sector,” March 8, 2018,

22 Florin Zubașcu, “Commission Adds AI Research ‘Lighthouse’ to Innovation Priorities Amid Budget Wrangle,” Science Business Publishing International SR, February 20, 2020,

23 German Research Center for Artificial Intelligence, “Human Centric AI – Intelligent Solutions for the Knowledge Society,”

24 “AI in UK Industry Landscape 2018,” Deep Knowledge Group, 2018,

25 Daniel Fiott and Gustav Lindstrom, “Artificial Intelligence: What Implications for EU Security and Defence?” EU Institute for Security Studies, November 8, 2018, 1,

26 Leonid Bershidsky, “AI Competition Is the New Space Race,” Bloomberg Opinion, December 28, 2018,

27 Raymond Perrault, Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles, The AI Index 2019 Annual Report (Stanford, CA: Stanford University Human-Centered AI Institute AI Index Steering Committee, December 2019),

28 Nordic AI Artificial Intelligence Institute, “What We Do,”

29 Benelux Association for Artificial Intelligence, “Benelux Association for Artificial Intelligence,”

30 Robot Technology Transfer Network, “This Is ROBOTT-NET,”

31 Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE), “A Brief History of CLAIRE,”

32 AI4EU Consortium, “About the Project,”

33 Timothy W. Martin, “American Tech Firms Are Winning the R&D Spending Race With China,” Wall Street Journal, October 30, 2018,; and Paulson Institute’s Macro Polo, “The Global AI Talent Tracker,” June 9, 2020,

34 Amy Hawkins, “Beijing’s Big Brother Tech Needs African Faces,” Foreign Policy, July 24, 2018,

35 Lulu Yilun Chen, “China State VC Leads $460 Million Funding in Face-Scan Startup,” Bloomberg, October 31, 2017,

36 European Commission, “Factsheet: Artificial Intelligence for Europe,” July 2, 2019,

37 Jacques Bughin, Eric Labaye, Sven Smit, Eckart Windhagen, Jan Mischke, and Sarah Forman, “Europe’s Economy: Three Pathways to Rebuilding Trust and Sustaining Momentum,” McKinsey and Company, January 18, 2018,

38 Thomas Holm Møller, Ellen Czaika, Hanne Jesca Bax, and Vivek Nijhon, “Artificial Intelligence in Europe: How 277 Major Companies Benefit From AI Outlook for 2019 and Beyond,” Microsoft, 2018,

39 Peter Bright, “Microsoft Buys Skype for $8.5 Billion. Why, Exactly?” Wired, May 10, 2011,

40 Chris Welch, “Apple Completes Shazam Acquisition, Will Make App Ad-Free for Everyone,” Verge, September 24, 2018,

41 Bookings Holdings, “The Priceline Group Completes the Acquisition of Momondo Group,” press release, July 24, 2017,

42 European Commission, “EU Foreign Investment Screening Regulation Enters Into Force,” press release, April 10, 2019,

43 Eurostat, “Europe 2020 – Overview,”

44 European Parliament, “Innovation Policy,” Fact Sheets on the European Union, 2020,

45 European Commission, “European Innovation Scoreboard Edition 2020,” July 2019,

46 European Commission, “European Innovation Scoreboard, June 17, 2019,

47 Ibid.

48 Ibid.

49 Hector Hernandez Guevara, Nicola Grassano, Alexander Tuebke, Lesley Potters, Petros Gkotsis, and Antonio Vezzani, EU R&D Scoreboard: The 2018 EU Industrial R&D Investment Scoreboard (Luxembourg: EU Publications Office, 2018),

50 Business Europe, “Research and Innovation in the New European Political Cycle,” September 2019, 5,

51 Ibid.

52 Eurostat, “R&D Expenditure in the EU Increased Slightly to 2.07% of GDP in 2017,” press release, January 10, 2019,

53 Executive Office of the President National Science and Technology Council Committee on Technology, “Preparing for the Future of Artificial Intelligence,” October 2018,; and Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016,

54 Executive Office of the President, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights,” May 2016,

55 Kaveh Waddell, “Trump to Lay Out an AI plan,” Axios, February 11, 2019,

56 Khari Johnson, “U.S. Senators Propose Legislation to Fund National AI Strategy,” Venture Beat, May 21, 2019,

57 Office of Science and Technology Policy, “President Trump’s FY 2021 Budget Commits to Double Investments in Key Industries of the Future,” White House, February 11, 2020,

58 Russell T. Vought, “Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications, White House, January 7, 2019,

59 Karen Hao, “The US Just Released 10 Principles That It Hopes Will Make AI Safer,” MIT Tech Review, January 2, 2020,

60 James Vincent, “White House Encourages Hands-off Approach to AI Regulation,” Verge, January 7, 2020,

61 Ryan Daws, “The White House Warns European Allies Not to Overregulate AI,” Artificial Intelligence News, January 7, 2020,

62 Gregory C. Allen, “Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security,” Center for a New American Security, February 2019, 5,

63 “China’s New Generation of Artificial Intelligence Development Plan,” Foundation for Law and International Affairs, July 30, 2017,

64 Tim Dutton, “An Overview of National AI Strategies,” Medium, June 28, 2018,

65 Jeffrey Ding and Paul Triolo, “Translation: Excerpts From China’s ‘White Paper on Artificial Intelligence Standardization,’” New America DigiChina (blog), June 20, 2018,

66 International Organization for Standardization, “Artificial Intelligence [ISO/IEC JTC 1/SC 42],” 2017,

67 Dutton, “An Overview of National AI Strategies.”

68 Paul Mozur, “Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras,” New York Times, July 8, 2018,; and Steven Feldstein, “The Global Expansion of AI Surveillance,” Carnegie Endowment for International Peace, September 17, 2019,

69 Julian E. Barnes and Josh Chin, “The New Arms Race in AI,” Wall Street Journal, March 2, 2018,

70 Samuel Stolton, “GDPR Enforcement Held Back by Lack of Resources, Report Says,”, May 25, 2020,

71 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI.”

72 Tobias Buck, “Germany to Spend €3bn on Boosting AI Capabilities,” Financial Times, November 15, 2018,

73 Nicholas Thompson, “Emmanuel Macron Talks to Wired About France’s AI Strategy,” Wired, March 31, 2018,

74 European Commission Joint Research Centre, “Artificial Intelligence: A European Perspective,” (Luxembourg: EU Publications Office, 2018), 55–60,

75 Cedric Villani, “For a Meaningful Artificial Intelligence Towards a French And European Strategy,” report to the French Government, March 8, 2018,

76 Ibid, 6–7.

77 Ibid, 7–8, 12–17.

78 Ibid, 8.

79 Nicholas Vinocur, “Macron’s €1.5 Billion Plan to Drag France Into the Age of Artificial Intelligence,” Politico Europe, March 27, 2018,

80 French Government, “Artificial Intelligence: Making France a Leader,” March 30, 2018,

81 Emmanuel Touboul, Florian Espejo, Raphaelle de Lafforest, Nicolas Brien, and Alara Ucak, “The Road to AI: Investment Dynamics in the European Ecosystem: AI Global Index 2019,” Roland Berger and France Digital, 2019, 5,

82 Janosch Delcker, “The Age of Facial Recognition—AI Targeting Your Innermost Emotions—A New European Leader in AI Startup Funding?” Politico Europe, December 18, 2019,

83 Villani, “For a Meaningful Artificial Intelligence Towards a French And European Strategy,” 16.

84 Chris O’Brien, “France’s AI Startup Scene Grew 38% in 2019 With Government and Investor Backing,” Venture Beat, October 22, 2019,

85 Chris Middleton, “Special Report: UK AI Policy: Why the Government Must Modernize First,” Internet of Business, 2018,

86 UK Government Department for Business, Energy, and Industrial Strategy and the Department for Digital, Culture, Media, and Sport, “AI Sector Deal,” updated May 21, 2019,

87 Middleton, “Special Report: UK AI Policy.”

88 UK Government Department for Business, Energy, and Industrial Strategy and the Department for Digital, Culture, Media, and Sport, “AI Sector Deal.”

89 Stephen Armstrong, “UK Positioned to Become World Leaders in AI,” Raconteur, May 23, 2018,

90 Middleton, “Special Report: UK AI Policy.”

91 UK House of Commons Science and Technology Committee, “Managing Intellectual Property and Technology Transfer,” UK Parliament Tenth Report of Session 2016–17, March 13, 2017, 5,

92 Michael McLaughlin and Daniel Castro, “What Will Brexit Mean for AI in the EU?” Center for Data and Innovation, August 27, 2019,

93 German Federal Cabinet, “Strategie Künstliche Intelligenz der Bundesregierung” [Federal Cabinet’s artificial intelligence strategy], November 15, 2018,

94 Author calculations based on numbers released in Germany’s aforementioned report. (See the previous endnote.)

95 German Federal Ministry of Justice and Consumer Protection Data Ethics Commission, “Opinion of the Data Ethics Commission - Executive Summary,” October 22, 2019, 5–28,

96 Alexander Armbruster, “Die Schwächen der deutschen KI-Strategie” [The weakness of the German AI strategy], Frankfurter Allgemeine, November 16, 2018,

97 German Research Center for Artificial Intelligence, “Human Centric AI – Intelligent Solutions for the Knowledge Society.”

98 Armbruster, “Die Schwächen der deutschen KI-Strategie” [The weakness of the German AI strategy].

99 Government Offices of Sweden, “National Approach to Artificial Intelligence,” May 2018,

100 Swedish AI Council, “Mission,”

101 Future of Life Institute, “AI Policy – Sweden,”

102 Carl-Martin Vikingsson, “Sweden Will Create a Leading International Environment for Collaboration on AI,” Government Offices of Sweden, press release, May 17, 2018,

103 AI Innovation of Sweden, “About AI Innovation of Sweden,”

104 Vinnova, “Artificial Intelligence in Swedish Business and Society: Analysis of Development and Potential,” May 2018, 22–25,

105 Daniel Gillblad, “RISE at the Forefront of Sweden’s Drive to be a Leader in AI,” Research Institutes of Sweden (RISE),

106 Susan Ritzén, “Swedish Record Investment in AI Is Growing,” SVT Nyheter, January 3, 2020,

107 Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP), “About WASP,”

108 Zenuity, “Introducing Zenuity,”

109 Finnish Ministry of Economic Affairs and Employment Steering Group of the Artificial Intelligence Program, “Finland’s Age of Artificial Intelligence: Turning Finland Into a Leading Country in the Application of Artificial Intelligence: Objective and Recommendations for Measures,” December 18, 2017,

110 Finnish Ministry of Economic Affairs and Employment, “Work in the Age of Artificial Intelligence: Four Perspectives on the Economy, Employment, Skills and Ethics,” August 16, 2018; and Finnish Ministry of Economic Affairs and Employment, Leading the Way Into the Age of Artificial Intelligence: Final Report of Finland’s Artificial Intelligence Program 2019 (Helsinki: Finnish Ministry of Economic Affairs and Employment, 2019),

111 AI Finland, “Background,”

112 Charlotte Stix and Ben Gilburt, “Finland’s Final AI Report, Vienna’s Digital Agenda, Dutch AI Research Agenda and Much More,” European AI Newsletter no. 29, October 8, 2019,

113 Janosch Delcker, “Finland’s Grand AI Experiment,” Politico Europe, January 2, 2019,

114 Elements of AI, “Welcome to the Elements of AI Free Online Course!,”

115 Delcker, “Finland’s Grand AI Experiment.”

116 Lofred Madzou, Punit Shukla, Mark Caine, Thomas A. Campbell, Nicholas Davis, Kay Firth-Butterfield, Farah Huq, Bryan Lim, Xuan Hong Lim, Rachel Parker, Tobias Straube, Elissa Strome, and Julian Torres Santeli, “A Framework for Developing a National Artificial Intelligence Strategy,” World Economic Forum, August 2019, 6,

117 Delcker, “Finland’s Grand AI Experiment.”

118 Ibid.

119 Jari Tanner, “Finland Offers Crash Course in Artificial Intelligence to EU,” Associated Press, December 17, 2019,

120 Elle Hunt, “Estonian President Delights in Country’s High Proportion of Unicorns,” Guardian, June 29, 2018,

121 Estonian Ministry of Economic Affairs and Communications, “Eesti tehisintellekti kasutuselevõtu eksperdirühma aruanne” [Report of Estonia’s expert group on the adoption of artificial intelligence], May 2019; and Estonian Government, “Estonia’s National Artificial Intelligence Strategy 2019–2021,” July 2019,

122 E-Estonia, “Estonia Accelerates Artificial Intelligence Development,” May 2019,

123 Estonian Government GCIO Office, “Artificial Intelligence for Estonia,”

124 Charlotte Stix, “Deep Dive: AI Strategies From Luxembourg and Estonia,” European AI Newsletter no. 24, June 5, 2019,

125 E-Estonia, “Estonia Accelerates Artificial Intelligence Development.”

126 Aili Vahtla (ed.), “First Component for AI-Based Applications Reaches Source Code Repository,” Estonian Public Broadcasting (ERR), October 20, 2019,

127 E-Estonia, “National AI Strategy for 2019–2021 Gets a Kick-Off,” October 2019,

128 Eric Niiler, “Can AI Be a Fair Judge in Court? Estonia Thinks So,” Wired, March 25, 2019,

129 Czech Ministry of Industry and Trade, “National Artificial Intelligence Strategy of the Czech Republic,” May 2019,

130 Czech Technical University in Prague, “European Center of Excellence for Industrial Robotics and Intelligence Is Being Established at CTU in Prague,” press release, April 4, 2019,

131 See these other European countries’ national AI strategies: Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology and Federal Ministry for Digital and Economic Affairs, “AIM at 2030: Artificial Intelligence Mission Austria 2030,” June 26, 2019; AI 4 Belgium, “AI 4 Belgium,” March 18, 2019; Danish Ministry of Finance and Ministry of Industry, Business, and Financial Affairs, “National Strategy for Artificial Intelligence,” March 2019; Italian Ministry of Economic Development, “Strategia Nazionale per l’Intelligenza Artificiale” [National strategy for artificial intelligence], July 2019; Lithuanian XX, “Lithuanian Artificial Intelligence Strategy: A Vision of the Future,” March 14, 2019; Government of Luxembourg, “Artificial Intelligence: A Strategic Vision for Luxembourg,” May 24, 2019; Malta’s Office of the Prime Minister Parliamentary Secretariat for Financial Services, Digital Economy and Innovation, “Malta: The Ultimate AI Launchpad: A Strategy and Vision for Artificial Intelligence in Malta 2030,” October 2019; Dutch Ministry of Economic Affairs and Climate, “Strategic Action Plan for Artificial Intelligence,” October 9, 2019; Norwegian Ministry of Local Government and Modernization, “National Strategy for Artificial Intelligence,” January 14, 2020; Polish Ministry of Digitization, “Konsultacje społeczne projektu ‘Polityki Rozwoju Sztucznej Inteligencji w Polsce na lata 2019–2027’” [Public consultations on the project “Artificial Intelligence Development Policy in Poland for 2019–2027”], August 21, 2019; Portuguese National Initiative on Digital Competencies (INCoDe 2030), “AI Portugal 2030: Portuguese National Initiative on Digital Skills,” June 11, 2019; and Spanish Ministry of Science and Innovation and Ministry of Universities, “Estrategia Nacional de Inteligencia Artificial” [National strategy on artificial intelligence], March 2019,

132 European Commission Joint Research Centre, “Artificial Intelligence: A European Perspective.”

133 Bughin, Seong, Manyika, Hämäläinen, Windhagen, and Hazan, Notes From the AI Frontier, 38.

134 Daniela Vincenti, “Return of the JEDI: European Disruptive Technology Initiative Ready to Launch,” March 14, 2018,

135 European Commission, “Digital Europe Program: A Proposed €9.2 Billion of Funding for 2021–2027,” June 26, 2019,

136 European Commission, “Robotics – Horizon 2020,”

137 European Commission, “EU Launches World’s Largest Civilian Robotics Program – 240,000 New Jobs Expected,” press release, June 23, 2014,

138 European Commission, “Memo: A European Approach on Artificial Intelligence,” April 25, 2018,

139 AI4EU, “About the Project,”

140 European Commission, “EU Artificial Intelligence and Blockchain Investment Fund to Invest 100 Million Euros in Startups in 2020,” November 21, 2019,

141 European Commission, “EU Artificial Intelligence and Blockchain Investment Fund to Invest 100 Million Euros in Startups in 2020,” November 21, 2019,

142 “EU Plans $3.9 Billion Fund for Startups in ‘Valley of Death,’” Bloomberg, November 23, 2019,

143 European Commission, “Artificial Intelligence for Europe,” April 25, 2018,

144 European Commission, “Horizon Europe – The Next Research and Innovation Framework Program,”

145 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust,” 4–7.

146 Ibid.

147 Ibid.

148 European Commission, “The EU Budget Powering the Recovery Plan for Europe,” May 27, 2020,

149 European Commission, “Coordinated Plan on Artificial Intelligence,” December 7, 2018,

150 AI4EU, “About the Project.”

151 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI.”

152 Ibid.

153 European Commission High-Level Expert Group on Artificial Intelligence, “Policy and Investment Recommendations for Trustworthy AI,” June 26, 2019,

154 European Commission, “EU Member States Sign Up to Cooperate on Artificial Intelligence,” press release, April 10, 2018,

155 Ibid.

156 Ibid.

157 European Commission, “Artificial Intelligence for Europe,” April 25, 2018,

158 European Commission, “Communication Artificial Intelligence for Europe,” April 25, 2018,

159 European Commission, “Member States and Commission to Work Together to Boost Artificial Intelligence ‘Made in Europe,’” press release, December 7, 2018,

160 Ibid.

161 European Commission, “Digital Innovation Hubs (DIHs) in Europe,” May 26, 2020; and European Commission, “Enhanced European Innovation Council (EIC) Pilot,” April 30, 2020,

162 European Commission, “Member States and Commission to Work Together to Boost Artificial Intelligence ‘Made in Europe.’”

163 Ibid.

164 European Commission, “High-Level Expert Group on Artificial Intelligence,” October 4, 2019,

165 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI.”

166 Ibid., 5.

167 Ibid., 5.

168 European Commission High-Level Expert Group on Artificial Intelligence, “Policy and Investment Recommendations for Trustworthy AI.”

169 Ibid.

170 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI.”

171 Tom Simonite, “How Tech Companies Are Shaping the Rules Governing AI,” Wired, May 16, 2019,

172 European Commission High-Level Expert Group on Artificial Intelligence, “Ethics Guidelines for Trustworthy AI.”

173 Eline Chivot, “Initial Lessons Learned From Piloting the EU’s AI Ethics Assessment List,” Center for Data Innovation, March 1, 2020,

174 European Parliament Committee on Industry, Research and Energy, Committee on the Internal Market and Consumer Protection, and the Committee on Economic and Monetary Affairs, Hearing of Margrethe Vestager, October 8, 2019, 11,

175 See Michael Veale, “A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence,” University College London, Working Paper no. 8, October 2019; Thomas Metzinger, “Ethics Washing Made in Europe,” Der Tagesspiegel, April 8, 2019; and Fanny Hidvegi and Daniel Leufer, “Laying Down the Law on AI: Ethics Done, Now the EU Must Focus on Human Rights,” Access Now, April 8, 2019,

176 Ursula von der Leyen, “A Union That Strives for More: My Agenda for Europe,” European Commission, 13,

177 European Commission, “6 Commission Priorities for 2019–24,”

178 Éanna Kelly, “Vestager Promises Europe Will Go Its Own Way on Artificial Intelligence Rules,” Science Business, October 10, 2019,

179 Raluca Csernatoni, “An Ambitious Agenda or Big Words? Developing a European Approach to AI,” Royal Institute for International Relations (EGMONT), November 14, 2019, 3,

180 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust.”

181 European Commission, “A European Strategy for Data,” February 19, 2020,

182 European Commission, “Shaping Europe’s Digital Future,” February 19, 2020,

183 Maïa de la Baume et al., “Von der Leyen’s Real 100-Day Challenge,” Politico Europe, November 28, 2019,

184 Ursula von der Leyen, “Shaping Europe’s Digital Future,” European Commission (published in various outlets), February 19, 2020,

185 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust.”

186 Csernatoni, “An Ambitious Agenda or Big Words? Developing a European Approach to AI.”

187 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust.”

188 Javier Espinoza, “Coronavirus Prompts Delays and Overhaul of EU Digital Strategy,” Financial Times, March 23, 2020,

189 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust.”

190 Madhumita Murgia, “Google Moves UK User Data to US to Avert Brexit Risks,” Financial Times, February 20, 2020,

191 European Commission, “Israel: Policy Background,”

192 European Commission, “On Artificial Intelligence: A European Approach to Excellence and Trust.”

193 Janosch Delcker, “US to Endorse New OECD Principles on Artificial Intelligence,” Politico Europe, May 23, 2019; and Max Chafkin, “U.S. Will Join G-7 AI Pact, Citing Threat From China,” Bloomberg, May 27, 2020,

194 See Andrew Imbrie, Ryan Fedasiuk, Catherine Aiken, Tarun Chhabra, and Husanjot Chahal, “Agile Alliances: How the United States and Its Allies Can Deliver a Democratic Way of AI,” Georgetown University Center for Security and Emerging Technologies, February 2020, 11–12,

195 Ana Gascón Marcén, “Society 5.0: EU-Japanese Cooperation and the Opportunities and Challenges Posed by the Data Economy,” Elcano Royal Institute, February 4, 2020,

196 Japanese Government, “AI Strategy 2019: AI for Everyone: People, Industries, Regions and Governments,” June 11, 2019,

197 Will Knight, “Why Does Beijing Suddenly Care About AI Ethics?” MIT Technology Review, May 31, 2019,

198 OECD, “Forty-two Countries Adopt New OECD Principles on Artificial Intelligence,” May 22, 2019,

199 Chafkin, “U.S. Will Join G-7 AI Pact, Citing Threat From China.”

200 Fiott and Lindstrom, “Artificial Intelligence: What Implications for EU Security and Defence?,” 1.

201 Drew Harwell, “Defense Department Pledges Billions Toward Artificial Intelligence Research,” Washington Post, September 7, 2018,

202 Ulrike Esther Franke, “Not Smart Enough: The Poverty of European Military Thinking on Artificial Intelligence,” European Council on Foreign Relations, December 2019,