Summary
From the outset, the European Union (EU) has positioned itself as a trailblazer in the governance of artificial intelligence (AI) with the AI Act, the world’s first comprehensive legal framework for AI systems. The EU’s approach to governing AI has been characterized by a strong precautionary and ethics-driven philosophy. This ambitious regulation reflects the EU’s long-standing approach of prioritizing high ethical standards and fundamental rights in tech and digital policies—a strategy of fostering both excellence and trust in human-centric AI models. Yet the EU has recently taken a deregulatory turn, framed as essential to keep pace with U.S. and Chinese AI giants, that risks trading away democratic safeguards without addressing the systemic challenges to AI innovation.
The EU now stands at a crossroads: it can forge ahead with bold, homegrown AI innovation underpinned by robust regulation, or it can loosen its ethical guardrails, only to find itself stripped of both technological autonomy and regulatory sway. While Brussels’s recent deregulatory turn is framed as a much-needed competitiveness boost, the real obstacles to Europe’s digital renaissance lie elsewhere: persistent underfunding, siloed markets, and reliance on non-EU infrastructures.
Regulatory Resolve as a Geopolitical Strategy
The EU’s assertive regulatory stance is also a geopolitical strategy to project normative power and set international benchmarks for AI governance. Such ambitions are well founded. The EU’s large single market and proactive tech regulations have historically given the bloc an outsize global influence, a phenomenon often dubbed the “Brussels effect.” However, this drive to lead by regulation has increasingly come into tension with concerns about Europe’s innovation capacity and global competitiveness.
The strength of the EU’s regulatory resolve has prompted intensifying debates about its economic trade-offs. Europe’s limited domestic AI industry and financing have cast doubt on whether the union can match its regulatory power with tech leadership in key emerging and disruptive technologies, like AI. Critics have emphasized that Europe’s regulatory stance could prove costly. They have argued that the EU’s fixation on rules, however commendable, may deepen industrial weaknesses and deter the investment and talent needed to nurture a robust AI ecosystem.
As AI capabilities rapidly advance and rivals like the United States and China pour billions into AI development, European policymakers face a dual imperative: upholding the EU’s values-based regulatory model while catalyzing a homegrown AI industry.
Against the backdrop of rising geopolitical and high-tech competition as well as a fraying transatlantic partnership, the EU must perform a tricky balancing act between competing priorities, with wide-ranging implications for the union’s global norm-setting role and its pursuit of strategic autonomy.
Toward a Secure AI Future for Europe
In response to this shifting global context, the EU has begun to pivot from its role as a regulatory power toward a more innovation-focused path. This partial regulatory rollback exemplifies the complex politics of AI governance in Europe. On the one hand, it underlines legitimate concerns that overly restrictive rules could leave Europe lagging in the AI race or drive innovation abroad. On the other hand, it raises a new question: Might these compromises weaken the EU’s principled position and undermine its credibility as a guardian of digital rights at home and abroad?
The answer may hinge on what the EU does next. To chart a course forward, the EU must transform its approach into a bold new vision for AI innovation while ensuring that AI models remain human-centric, ethical, and trustworthy. To do so, the union will need to carefully balance its innovation ambitions with regulatory oversight and a coherent strategy that is not swayed by external influences.
To secure its AI future, the EU should:
· Significantly expand its investments. Public funding must actively catalyze private venture capital to prevent Europe’s most promising AI start-ups from emigrating or falling prey to foreign acquisitions.
· Develop a comprehensive digital infrastructure. The proposed EuroStack initiative would reduce the EU’s dependence on foreign cloud providers and semiconductor manufacturers and strengthen its digital resilience and security.
· Enhance its regulatory clarity, particularly on dual-use AI applications. This means adopting an EU-wide dual-use AI framework that sets clear common criteria for classifying AI systems with potential security or defense applications. The EU also needs to strike a balance between strategic autonomy and robust democratic oversight to avert an AI arms race.
More broadly, beyond merely choosing between innovation and regulation, the EU needs to embrace a dynamic third pathway that blends rigorous regulatory standards with an aggressive industrial policy. Only by providing targeted support, fostering European AI champions, and making strategic investments in infrastructure can Europe credibly uphold its regulatory model while thriving amid global competition.
Introduction
In the high-stakes global contest over artificial intelligence, the European Union stands at a crossroads. AI has become a geopolitical game changer that underpins economic security and military power alike. The United States and China are investing billions in AI innovation, from start-ups and advanced semiconductors to critical digital infrastructure and research labs, racing ahead in both civilian and defense applications of AI.1 Europe, by contrast, has focused on ethics and regulation, prioritizing human-centric and trustworthy AI models.
Recently, however, the EU has pivoted from its celebrated role as a regulatory power in AI governance toward a bold, innovation-oriented trajectory. That raises critical questions about how to balance regulation, competitiveness, and strategic autonomy. While a deregulatory pivot may offer gains for the union, it also poses profound risks for democratic oversight, the EU’s long-standing legitimacy as a regulatory power, and the strategic autonomy and technological sovereignty the EU seeks to safeguard.
Yet, the debate need not hinge solely on deregulation versus oversight. Indeed, Europe’s regulatory credibility depends on coupling effective governance with an ambitious industrial policy. Absent a strategic push to scale up domestic AI innovation, Europe’s regulatory stance and principled oversight may be criticized as bureaucratic overreach.
The EU’s regulation-first approach has been widely hailed as, on the one hand, a virtue thanks to the union’s groundbreaking AI Act, which champions trustworthy AI and safeguards fundamental values, and, on the other, a potential vice if it risks stifling European innovation, world-class industries, and the strategic capabilities it aims to govern.2 Celebrated as the world’s rule maker on technological governance, the EU appears poised to enter a new era of deregulatory shifts fueled by mounting geopolitical pressures and fierce competition from the United States and China.
Less than a year ago, Brussels proudly confirmed its regulatory might with the AI Act as well as promises of codes of practice, technical standards, and an AI liability directive that would hold rogue algorithms to account.3 Today, that ambition looks significantly different: Enforcement of the Digital Services Act (DSA) is in limbo, the liability directive has been shelved, and new AI codes of practice focus more on help and support than on firm restraints.4 Is this a pragmatic recalibration by the EU to accommodate shifting realities or a capitulation to the United States’ bullish tech stance? The stakes transcend mere rhetoric. The EU’s recent moves away from regulation and toward innovation strike at the heart of Europe’s strategic autonomy and test the union’s global reputation as a regulatory superpower that can set the agenda for promoting trustworthy and human-centric AI systems.
The EU has framed its shift toward deregulation as necessary and unavoidable because of heightened geopolitical risks, particularly Europe’s vulnerabilities in digital supply chains and AI infrastructure. Europe’s reliance on external providers for essential AI components, such as advanced semiconductors and cloud computing resources, exposes the bloc to strategic dependencies and potential exploitation by rival powers, notably China and the United States. Moreover, the EU’s pivot toward innovation demands careful consideration of AI systems’ dual-use nature for both civilian and military applications, which could inadvertently exacerbate geopolitical tensions or lead to arms races in autonomous weapons systems.
The EU’s pivot could bring wide-reaching consequences for the bloc. These could take the form of both an innovation surge, which would fuel faster growth, tech scale-ups, and global competitiveness, and a regulatory rollback, which would dilute privacy and consumer, labor, and environmental safeguards.5 Yet, Europe’s newfound enthusiasm for deregulation also ushers in complex dynamics that might impact the EU’s normative, regulatory, and market powers in irreversible ways. Despite its bad reputation, regulation serves crucial democratic purposes by providing oversight and safeguarding fundamental rights against unbridled corporate power and governments’ extensive surveillance of citizens.6
Europe’s challenge is to strike a careful balance between fostering top-tier AI innovation and upholding robust regulation that protects the public interest and democratic values. By prioritizing innovation and competitiveness over caution, the EU risks dismantling carefully constructed legal protections and magnifying big tech’s influence while creating worrying regulatory voids. These voids are especially dangerous given AI’s disruptive impact on democracies, which is amplified by rapidly evolving and sophisticated generative AI (GenAI) tools that outpace regulation and thereby undermine democratic institutions and public trust.
In response, the EU needs an integrated and balanced AI governance framework that aligns the union’s innovation ambitions with its foundational values of accountability, oversight, and democracy. Only then can Europe effectively counter the simplistic calls for unchecked deregulation that are being promoted by other global AI powerhouses.
The EU’s AI Balancing Act
Global competition for AI supremacy has never been more intense or geopolitically contested. While the United States is strongly pushing a deregulated, market-driven agenda, epitomized by the disruptive discourses of figures like billionaire businessman Elon Musk in the administration of President Donald Trump, China is mobilizing massive state resources at breakneck speed to go all in and launch new AI models. Europe, for its part, finds itself at a critical juncture.
As a regulatory power and a front-runner in global digital governance, does the EU now risk being relegated to the role of a mere rule maker? The AI Act is generating significant conflicts and struggling to set a global standard for trustworthy and human-centric AI.7 At stake are the act’s legitimacy and the EU’s aspirations to be the global leader in safe and reliable AI technologies. Some observers argue that the EU’s regulatory caution has left the union behind the innovation curve and hampered the scale-up of AI start-ups, with the risk of creating strategic dependencies on external technologies and resources, primarily from the United States and China.8 According to critics, the EU’s pursuit of digital sovereignty through regulatory interventions has, paradoxically, undermined the union’s innovation power, slowed the adoption of disruptive AI models, deterred investors, and furthered market fragmentation.9
In reality, even as the AI Act moved through its arduous negotiations, pressure had already been mounting on the EU to dial back provisions of the legislation that various stakeholders perceived as overzealous. Big tech, industry stakeholders, and EU member states continuously and successfully lobbied for more flexibility, citing the need for more innovation in Europe’s digital economy.10
This push translated into notable carve-outs, concessions, and exceptions in the final legislation. For instance, article 2 of the act placed national-security uses of AI outside the law’s scope, effectively permitting EU governments to deploy AI systems for mass surveillance in public spaces, for example at protests or national borders.11 These exemptions extend to private companies and, potentially, to third countries that provide AI technology to police and law-enforcement agencies. Such changes, driven mainly by a coalition of member states led by France, represent a clear rollback from the European Parliament’s earlier, stricter stance that sought to ban remote biometric identification in most cases.12
Some European experts have cautioned that the narrative of regulatory overreach stifling innovation is largely a strategic construct promoted by U.S. actors, rather than an objective reality.13 A 2025 analysis by the Corporate Europe Observatory revealed that behind closed doors, a handful of digital titans have been dictating the guidelines that should govern their AI systems’ respect for fundamental rights.14 Such a self-serving process raises troubling questions about impartiality in the age of algorithmic oversight. By enforcing transparency and accountability, the AI Act encourages responsible innovation and fosters trust among businesses and consumers, which is critical for the widespread adoption and market growth of AI. Serious developers already comply with the legislation through robust documentation and quality checks. In any case, the act mainly regulates high-risk AI applications while leaving most systems free of heavy obligations, hardly confirming the claim of regulatory overreach.
Finally, there is a need to counter critics’ skewed narratives, which are often imported from across the Atlantic and designed to question the EU’s AI approach and mislead European start-ups into thinking that the union’s AI regulations are killing innovation.15 Recent events in the United States, such as tariff shocks and a security pivot toward a Russia-friendly approach to ending the war in Ukraine, as well as U.S. Vice President JD Vance’s warning at the February 2025 AI Action Summit in Paris that Europe should ease tech regulation, have left the EU in a state of disbelief.16
The EU should respond by investing in the regulatory structures and innovation strategies that will define the future of digital technology in Europe. To chart a course forward, the EU must transform its approach into a bold new vision for AI innovation while ensuring that AI models remain human-centric, ethical, and trustworthy. To do so, the EU will need to carefully balance innovation in emerging and disruptive technologies, like AI, with regulatory oversight and a coherent strategy that is not swayed by Washington’s whims or the outcomes of elections.
The European Commission’s decision to take a deregulatory turn tilts the balance of the EU’s approach toward AI innovation. Former European Central Bank president Mario Draghi’s 2024 report on the state of the EU economy underlined the need to deregulate.17 The report emphasized that the union’s competitiveness woes stem largely from inconsistent, restrictive regulation. Yet, while deregulation might boost short-term competitiveness, it overlooks the significant strategic vulnerabilities that arise from reduced oversight, particularly of foreign-owned critical AI infrastructure and data privacy. Deregulation also risks weakening Europe’s bargaining power in global AI governance forums by surrendering the EU’s agenda-setting role to less accountable, private-sector tech giants and assertive geopolitical rivals.
A Much-Needed AI-Driven Industrial Renaissance
For years, Europe’s economic dynamism has lagged behind that of the United States and Asia when it comes to civilian and commercial applications of technology. A major reason is the state of the high-tech sector. In his 2024 report, Draghi highlighted the EU’s diminishing high-tech clout:
Technological change is accelerating rapidly. Europe largely missed out on the digital revolution led by the internet, and the productivity gains it brought: in fact, the productivity gap between the EU and the U.S. is largely explained by the tech sector. The EU is weak in the emerging technologies that will drive future growth. Only four of the world’s top 50 tech companies are European.18
The report outlined that Europe must urgently accelerate innovation to retain its manufacturing edge and develop breakthrough AI-driven technologies. However, the report also noted that to harness the full potential of digitization and advanced AI technologies, the EU must invest in cutting-edge infrastructure, from ubiquitous high-speed broadband and robust cloud computing to next-generation networks. Equally important are a digitally skilled workforce and citizenry that will safeguard Europe’s competitiveness in an era when innovation depends as much on human capital as on high-tech research and development.
The report’s main message about AI innovation in the EU is that the bloc’s ambitions risk becoming little more than a footnote in a global market dominated by the United States and China. The data presented are compelling: Only 11 percent of EU firms use AI, far from the bloc’s target of 75 percent by 2030. Worse, since 2017, 73 percent of foundational AI models have come from the United States and 15 percent from China, leaving Europe largely dependent on foreign-designed AI. And in 2023, the EU attracted just $8 billion in AI venture capital, compared with $68 billion in the United States and $15 billion in China.19
Europe’s most promising GenAI companies, such as Mistral and Aleph Alpha, struggle to compete with U.S. giants because of a lack of capital. With 61 percent of global AI funding flowing to U.S. firms and only 6 percent to their European counterparts, EU companies are increasingly turning to foreign investors.20 The bloc also suffers from a smaller AI talent pool, as skilled professionals are lured abroad by higher salaries. Without strategic investment, Europe risks losing its market share across key industries, while its lead in advanced robotics faces erosion.
Europe’s Distant Goal of Digital Sovereignty
The Draghi report underlined the brutal reality of AI competition, a winner-takes-most game in which laggards risk becoming irrelevant. Yet, framing Europe’s challenge as a simplistic trade-off between regulation and innovation is misleading because, as many observers have warned, it rests on a false dichotomy.21 The EU’s regulatory frameworks—the AI Act, the General Data Protection Regulation (GDPR), the DSA, and the Digital Markets Act—are often blamed for stifling investment, but the real barriers to AI leadership run deeper. Europe’s fragmented digital market, lack of risk-tolerant venture capital, and dependence on foreign cloud hyperscalers—providers of cloud services with massive networks of data centers—hinder the bloc’s AI ambitions far more than regulations do.22 Europe has long lamented its reliance on U.S. tech giants, yet little has changed. Amazon, Google, and Microsoft continue to dominate nearly 70 percent of the European cloud market, while the continent’s largest provider accounts for a mere 2 percent.23
A February 2025 report by the Bertelsmann Stiftung underscored just how far Europe is from attaining digital sovereignty.24 The report outlined a strategy for a more independent European digital ecosystem, which encompassed everything from battery raw materials to enterprise software. It suggested establishing a European sovereign tech fund, with an initial investment of €10 billion ($11 billion), although achieving complete independence would require €300 billion ($341 billion) over the next decade, partly funded by private investment. Instead of imitating foreign tech giants or adopting state-driven models, the report championed cooperation among midsize European companies through shared digital standards and open-source solutions. It also suggested that reforming capital markets and public procurement could further enhance European technological sovereignty.
Europe’s push for tech sovereignty is highlighted by the EuroStack initiative, which aims to balance market competition with regulatory oversight to safeguard data integrity and economic independence. EuroStack refers to a proposed European tech infrastructure designed to build local capacity across digital value chains, from semiconductors and data to computing and connectivity. Partly inspired by India’s widely recognized digital stack, the project aims to reduce Europe’s reliance on foreign providers while strengthening security, resilience, and competitiveness. By nurturing homegrown digital infrastructure, EuroStack seeks to both enhance innovation and ensure European governance of critical services for businesses, citizens, and institutions.
In March 2025, nearly one hundred industry leaders—from digital small and medium-sized enterprises (SMEs) to defense tech giants, such as Airbus, Dassault Systèmes, and OVHcloud—rallied behind the European DIGITAL SME Alliance’s call for action on the EuroStack. In an open letter to the commission, the network of European SMEs urged a strong industrial strategy to reduce Europe’s reliance on foreign digital infrastructure and enhance its tech sovereignty.25 The signatories argued that without decisive investment in homegrown digital capabilities, Europe risked falling farther behind in the global tech race. Their demand highlights growing frustration over Europe’s slow progress in achieving digital self-sufficiency in an increasingly contested technological landscape.
So, while regulatory clarity is crucial, deregulation is no panacea. Rather, the EU needs bold industrial action to build strategic autonomy in key sectors. Accordingly, in the words of the open letter, the union must become “more technologically independent across all layers of its critical digital infrastructure: from logical Infrastructure—applications, platforms, media, AI frameworks and models—to physical Infrastructure—chips, computing, storage and connectivity.”26
It is worth noting that the United States and China dominate AI not because of looser rules but because of their aggressive state-backed investments in infrastructure, access to vast computing power, and more seamless public-private partnerships. Thus, Europe’s challenge is not just about regulatory and bureaucratic burdens but about creating the right conditions for scaling up AI firms. Simplified rules and harmonized implementation across EU member states can help, but without a bold industrial policy and targeted funding, the bloc will not deliver on its ambitions for AI innovation.
What is more, perceptions of heavy regulation matter. Europe’s reputation for stringent rules, whether justified or not, deters entrepreneurs and venture capitalists before ideas even reach the drawing board. Fearful of bureaucratic pitfalls and lengthy compliance processes, start-ups opt for safer, less ambitious projects or relocate altogether. In this respect, disruptive innovation thrives not only on direct investment but also on an environment that signals openness to risk and rapid experimentation.
EU Efforts to Boost Competitiveness
In response to these competitiveness concerns, the commission unveiled the Competitiveness Compass, a tool designed to slash administrative burdens by simplifying the regulatory environment while prioritizing speed and agility.27 Should the EU pursue a full-scale deregulatory agenda, as indicated by proposals for forthcoming bills, it risks sacrificing hard-won assets: the robust regulatory frameworks that have underpinned European economic progress and global influence. Crucially, the push toward deregulation threatens the democratic oversight that traditionally accompanies strict rules. That could amplify existing digital divides, exacerbate inequalities, and further concentrate power among already-dominant big tech companies with dubious track records on transparency and accountability.
But what has the EU done so far to boost its AI competitiveness? As part of the bloc’s turn toward innovation, it has established AI factories, which combine supercomputers, data, and expertise to accelerate the development of AI models.28 In December 2024, Europe took a notable stride in its quest to foster a homegrown AI environment: The European High Performance Computing Joint Undertaking (EuroHPC)—a collaborative initiative between the EU, European countries, and private-sector partners to develop a world-class supercomputing ecosystem—chose seven consortia to build the bloc’s first AI factories.29
Spain teamed up with Portugal, Romania, and Turkey, while Italy linked up with Austria and Slovenia, and Finland joined forces with the Czech Republic, Denmark, Estonia, Norway, and Poland. Bankrolled by the EU and its member states to the tune of €2.1 billion ($2.4 billion), these sites will install new AI-optimized supercomputers and revamp older ones while developing AI-oriented microprocessors and skills support.30
Meanwhile, the EU’s AI Office—the union’s center of AI expertise—is collaborating with the EuroHPC and other players to pool resources to shorten training times, lower costs, and enable breakthroughs in areas like large language models (LLMs), thus bolstering Europe’s AI autonomy and reducing its reliance on foreign cloud providers. This initiative primarily targets start-ups and SMEs to ensure equitable access to top-tier computing power.31
Yet, challenges remain, chiefly how to guarantee energy efficiency and secure sufficient AI chips. Ensuring that energy-hungry supercomputers and data centers operate sustainably and efficiently is a priority. The deployment of AI brings notable risks, particularly in terms of security and surging energy demand. Data centers alone consume 2.7 percent of the EU’s electricity, and their power use is projected to climb by 28 percent by 2030.32 Indeed, global demand for data-center capacity will more than triple by the end of this decade.33
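A rough illustration of what that projection implies, under the simplifying assumption that total EU electricity demand stays flat so that the 28 percent growth applies directly to today’s 2.7 percent share:

\[
2.7\% \times 1.28 \approx 3.5\%
\]

On these assumptions, data centers would account for roughly 3.5 percent of EU electricity by 2030; if overall demand also grows, their share would be smaller but their absolute consumption just as large.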
In this respect, Chinese developments gained global attention in early 2025 with the release of DeepSeek’s affordable, power-efficient GenAI model. The launch raised eyebrows among industry observers, who had grown accustomed to associating AI progress with relentless demand for energy-intensive data centers. DeepSeek, a developer of LLMs akin to OpenAI’s ChatGPT, asserts that its highly efficient technology substantially lowers computing costs and energy consumption, challenging prevailing assumptions about infrastructure needs.34
This development calls into question the United States’ aggressive push to expand domestic data centers and energy-hungry AI facilities, as it suggests that geopolitical competition and corporate interests, rather than necessity, may be driving the massive infrastructure investments in the United States. Conversely, the DeepSeek launch also underlines the growing potential of open-source AI models supported by China, India, and Europe.35 It could be argued that an alternative idea of digital public goods has gained momentum, reflecting a global shift toward inclusive and collaborative AI development.
Innovation Opportunities and Hurdles
When it comes to boosting AI innovation, the EU’s goal is to sharpen Europe’s edge in AI, pool European resources, and invest in refining cutting-edge AI models and integrating them into strategic applications.36 At the AI Action Summit, European Commission President Ursula von der Leyen unveiled a sweeping €8 billion ($9 billion) upgrade to set up AI factories across Europe, alongside a bold €50 billion ($57 billion) investment initiative to “supercharge” innovation in AI.37
Both France, whose President Emmanuel Macron has announced plans for €109 billion ($124 billion) of private AI investment, and the commission appear determined to join the global AI arms race, having prioritized rapid growth and capital infusion over regulatory caution.38 One of the largest public-sector AI investments to date, the EU initiative is seen as a much-needed catalyst expected to unlock more than ten times its value in private funding. In early 2025, the commission president pitched AI gigafactories as the next evolution in public computing infrastructure. In doing so, she drew a parallel with the legacy of the European Organization for Nuclear Research (CERN) of uniting the world’s leading minds around state-of-the-art technology.39
The United States, by contrast, has rolled out Stargate, a private-sector behemoth with $500 billion committed to AI infrastructure and a data center already under construction in Texas.40 The venture unites OpenAI, Oracle, SoftBank (a Japanese multinational investment holding company), and MGX (a technology-investment arm of the Emirati government) but has raised concerns about the impact of data centers on energy supplies and the role of foreign investors. The U.S. model eschews direct public funding, instead leaning on regulatory support in the form of streamlined land-use permits and guaranteed access to cheap energy and water to fuel its expansion. Meanwhile, Europe’s public-sector-led AI gigafactories, inspired by CERN’s collaborative success, promise wider societal returns by aligning private-sector innovation with public-sector interests.
Although Stargate’s $500 billion investment dwarfs Europe’s outlay, it could be argued that the EU’s approach will ensure greater transparency, accountability, and equitable access by prioritizing oversight over corporate interests. Notably, while Chinese achievements and DeepSeek’s success may curb forecasts of soaring demand for AI infrastructure investment, global AI competition will still favor ever-larger facilities clustered in resource-rich locations.41 The United States will remain a primary host, while Europe’s data centers will gravitate either to places that abound in cheap power, like France and Scandinavia, or to hubs that offer industrial and financial incentives, such as Germany, Ireland, and the United Kingdom.
Europe’s ambitions are tempered, however, by its stringent data-protection regime. Many technology firms and AI researchers believe that the GDPR, which is hailed globally for prioritizing privacy, inadvertently hampers Europe’s ability to leverage large-scale data sets that are critical for training sophisticated AI models.42 Critics argue that this regulatory framework shackles AI innovation, leaving EU-based firms perpetually trailing their U.S. and Chinese counterparts, which face fewer barriers to harnessing vast amounts of consumer data.43
Yet, such concerns may overstate the GDPR’s constraints. Strict data-protection rules foster trust, which, in turn, encourages higher-quality data sharing among European users wary of invasive technologies elsewhere. Europe’s regulatory approach need not preclude innovation; it can instead drive firms to excel in privacy-preserving techniques, such as federated learning and synthetic data generation, enhancing competitiveness through trustworthy European AI.
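To make the mention of privacy-preserving techniques concrete, the sketch below illustrates federated averaging, the canonical algorithm behind federated learning: clients train on data that never leaves them and share only model parameters. It is a minimal, hypothetical Python illustration (the clients, data, and model are invented for the example), not a description of any deployed system.

```python
# Minimal sketch of federated averaging (FedAvg): each client trains locally
# on its own private data and shares only model parameters with the server.
# All data and model choices here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps for linear
    regression on data that never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three clients, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Server loop: broadcast global weights, collect local updates, average them.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # only parameters are aggregated

print("recovered weights:", global_w)  # converges toward [2.0, -1.0]
```

Synthetic data generation follows a related logic: models trained on sensitive data produce artificial records that preserve statistical patterns without exposing any individual, allowing sharing and experimentation under strict privacy rules.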
To Regulate or Not to Regulate? That Is the Big Tech Question
The EU’s overall AI strategy reflects a more dirigiste approach that aims to bolster digital sovereignty through coordinated, collaborative, and publicly financed infrastructure. In stark contrast, the U.S. model champions market-led growth by harnessing private capital while the government plays an enabling role in lowering operational barriers.44 This divergence highlights the need for a broad European debate over the optimal mix of public intervention and private enterprise in driving the next wave of AI innovation.
As the EU rapidly pivots from a discourse of regulation to one of innovation, one thing is certain: The prevailing narrative about AI innovation, which already dominates on the global stage because of the influence of political leaders, corporate actors, and tech giants, is premised on a problematic and narrow ideology. A brief look at the latest AI developments shows that their success is not merely a product of technological advancements and the genius of savvy tech entrepreneurs.
The AI Liability Directive
The commission’s deregulatory turn on AI is best exemplified by its 2025 work program, which signaled a striking policy shift in this direction. This move was particularly evident in the commission’s abrupt cancellation of the proposed AI liability directive, which had been intended to establish provisions on noncontractual civil liability for damages caused with the involvement of AI systems.45 The directive would have aimed to ensure that those harmed by AI systems enjoyed similar protections to victims of other technologies in the EU. The law would have introduced a so-called presumption of causality, meaning that victims would no longer shoulder the entire burden of proving precisely how AI systems had caused them harm.
Critics had been skeptical about the liability directive, with some questioning whether the rules would align with the AI Act and worrying about the law stifling innovation. Some observers also pointed to ambiguities between various EU directives and national legislation.46 The fear was that the liability directive did not sit neatly alongside existing EU rules that member states had already transposed, such as the product liability directive, the machinery directive, the e-commerce directive, the consumer rights directive, or national liability laws. The new directive would therefore have complicated liability regimes, created legal uncertainty, and added extra compliance burdens.
The withdrawal of the liability directive marks an important pivot for the EU as it seeks to unshackle AI innovation by cutting red tape. It is a stark response to increasing global competition, notably from the United States.
Yet, this deregulatory turn is fraught with risks. Scrapping the AI liability directive erases critical legal safeguards meant to protect individuals harmed by AI systems. Unlike the AI Act, which oversees AI market entry, the liability directive would have addressed accountability after damages had occurred. Without it, victims of AI-related harm lose a structural legal recourse and are effectively left with risks without rights.47 The decision also undermines the work of the European Parliament, which had actively debated and supported the directive. This sidelining weakens democratic legitimacy and risks creating institutional discord.
In the absence of the directive, considerable regulatory voids remain with regard to discrimination, breaches of personal rights, and purely financial damages—issues that are crucial in an increasingly AI-driven society. Top EU officials are now eager to show the United States and tech companies, investors, providers, and innovators, both at home and abroad, that Europe can be a great place to innovate and conduct business. Paradoxically, the deregulatory turn might erode the Brussels effect—the process by which EU regulations influence norms and practices outside the union’s borders—thus weakening global standards and the EU’s commitments to the rule of law and the promotion of a risk-based approach to AI safety.48
The EU’s decision to withdraw the AI liability directive sparked frustration among policymakers and legal experts, who argue that the move undermines the bloc’s broader AI strategy. Critics, including members of the European Parliament and industry analysts, contend that scrapping the directive erodes trust in the EU’s regulatory framework.49 Concerns have also grown that regulatory fragmentation across EU member states—coupled with EU initiatives aimed at simultaneously shaping alternative frameworks, such as a Council of Europe AI convention aligned with the AI Act—will weaken the EU’s ability to set global AI standards. Skeptics argue that abandoning the liability directive dilutes the EU’s ambition to balance innovation and governance, effectively ceding influence to industry giants and geopolitical rivals.50
This broader debate highlights Europe’s current dilemma: how to foster AI innovation without conferring regulatory authority onto private actors and international competitors. Proponents of liability law insist that regulatory frameworks are crucial for ensuring accountability in AI-driven harm cases as big tech companies consolidate power, trample ethical concerns, and increasingly sway with Trump’s political pendulum.51
The Silicon Valley Myth
AI models promise to make the world more prosperous, more efficient, fairer, and more humane—goals that legitimize calls to eliminate burdensome bureaucracy and overregulation. Yet, this dominant narrative, which is supported by the perceived superior expertise of certain private-sector tech actors, portrays AI models as purely technical and desirable innovations and minimizes some of their deeply problematic sociotechnical implications.52 This discourse downplays the digital divides, exploitative labor practices, ideological biases, and legal breaches that underlie the models’ advances. Meanwhile, it hypes up Silicon Valley’s libertarian ethos of visionary tech entrepreneurs setting out to make the world a better place, unburdened by state interventionism.
Certainly, Silicon Valley and the immense wealth it has produced have long epitomized the marvels of free-market capitalism, which are seen as evidence of how innovation can flourish when it is free from regulation. But this view is a myth. From the Cold War–era race for technological supremacy to the digital boom, the U.S. government has served as the invisible architect of innovation by channeling billions into research, defense projects, and early-stage funding.53 The internet, the Global Positioning System (GPS), the Apollo space program, and even Apple’s foundational technologies were all nurtured by public investment. Venture capitalists and private firms entered the fray only once the risks had been absorbed.
Yet, today’s tech giants continue to privatize profits while externalizing costs, avoiding taxes, lobbying against regulation, and seeking public subsidies when convenient. The reality is that the Silicon Valley tech sector was established with government intervention and support. This suggests that patient public finance, rather than deregulation, is the bedrock of enduring innovation.
The Silicon Valley myth nonetheless continues to fuel policymaking worldwide, including in the EU, even though its venture capital–driven model often exacerbates inequality rather than bridging it. High-tech start-ups, designed for rapid financial exits, concentrate wealth in elite hands. If the state is the true risk taker, then why should returns be privatized? Still, many argue that the private sector is better suited than the state to tackle grand challenges like the AI-led Fourth Industrial Revolution. This belief is embodied by ventures like OpenAI, which was created as a nonprofit AI research lab in 2015 but has been commercializing products in recent years, most notably its viral ChatGPT chatbot, and now aims to restructure itself into a for-profit organization.54 This shift highlights a broader trend: Even firms that claim to prioritize societal benefit ultimately succumb to the financial imperatives of venture capital.
Moreover, industry leaders’ aggressive race for artificial general intelligence (AGI), driven largely by profit motives and market dominance rather than the public good, presents unprecedented societal risks.55 Without robust ethical and regulatory frameworks, the pursuit of AGI could exacerbate existential risks, magnify algorithmic biases, and deepen socioeconomic inequalities. These concerns are compounded by a deregulatory climate that might relax transparency requirements, sideline safety concerns, and lower accountability standards, granting tech giants unchecked authority over AI’s most powerful tools. These companies’ control of information systems and ability to reach wide audiences can generate further complications.56 Rushed deregulation thus heightens the risk of misuse of AI, both intentional and unintentional, further undermining public trust in technology and democratic governance.
At the same time, tech giants OpenAI and Google are lobbying the Trump administration to classify AI training on copyrighted data as fair use, framing such training as essential for national security. This move, which the firms justify in terms of securing an edge over rivals like China, raises deep ethical and legal concerns. In an early 2025 scandal, leaked documents revealed that Meta had covertly scraped copyrighted books to train AI models, prompting lawsuits from authors.57 OpenAI and Google argue that restrictive copyright laws stifle innovation, contrasting U.S. flexibility with Europe’s cautious approach.58 Yet, the companies’ stance, by prioritizing corporate interests over intellectual property rights and democratic governance, effectively legitimizes mass data theft.
Tech firms are adapting their playbooks to fit the new Trump era. Musk is hardly alone in seeking to curry favor with the U.S. president. In January 2025, Meta, eager to align itself with the shifting political winds, scrapped its third-party fact-checking program and reinstated political content, including on previously restricted topics, such as immigration and gender. Meta’s platforms, long criticized for fueling misinformation, now appear less inclined to moderate content that could provoke the administration’s voter base. The company also funneled $1 million into Trump’s inauguration, alongside contributions from Amazon and OpenAI Chief Executive Sam Altman, in an unmistakable nod toward the administration’s growing influence over Silicon Valley.59
Google, too, waded into controversy when it renamed the Gulf of Mexico the Gulf of America on Google Maps in an apparent attempt to gain favor with Trump after his January 2025 executive order proclaiming the change. Google’s parent company, Alphabet, which was once vocal about keeping AI out of military applications, has quietly reversed course, leading to resignations and an internal backlash over the firm’s ethical direction.60 And Jeff Bezos, owner of the Washington Post, faces scrutiny over the paper’s shifting editorial stance.61 As big tech recalibrates in response to Trump’s return, the uneasy entanglement of corporate power and political influence has never been more apparent.
Time for Europe to Act
More worryingly, in the first few months of his second term, Trump has shredded the transatlantic alliance and damaged the trust of U.S. allies. The March 2025 revelation that the United States is willing to leverage its control over a commercial space-based communication system, Starlink, to exert pressure on Ukraine serves as a stark warning to Europe.62 The episode highlights a broader strategic vulnerability: the overreliance of businesses, governments, and even militaries on technologies controlled by single companies, often headquartered in foreign jurisdictions.
Indeed, it is no longer far-fetched to imagine that the Trump administration would block arms sales, stop supplies of spare parts, or deactivate critical weapons systems used by European allies as part of a bargaining strategy with Russia. For the EU, this is yet another reminder of why tech sovereignty in terms of critical infrastructure is not a luxury but a necessity. If Europe wants to avoid being caught in the geopolitical crossfire of digital dependencies, it must accelerate its efforts to develop and secure its own critical infrastructure.
It is thus high time for European policymakers to find ways to reduce Europe’s exposure to single-provider risks, diversify supply chains, and ensure that critical infrastructure and core digital services remain under European—or, at least, multilateral—control. The EU’s substantial reliance on foreign technology is not just an economic challenge but a mounting security risk.63 The critical infrastructures that underpin Europe’s digital economy, from cloud computing to communication platforms, are dominated by a handful of U.S. tech giants. These companies are expanding their reach beyond traditional customer services into public-sector domains, such as health, policing, and even defense, including with AI models for warfare.64
These firms’ dominance extends even beyond software and services: They also own a substantial share of the hardware that powers global communications, including undersea cables once managed by heavily regulated telecommunication firms. By controlling key digital infrastructures and the latest AI models, U.S. tech giants can influence policy decisions, shape the information environment, and exert outsize political leverage.
Europe’s dependence on U.S. digital platforms and high tech weakens the EU’s strategic autonomy and risks sparking regulatory conflicts, as showcased by pressures from the Trump administration and Vance’s warning at the AI Action Summit against “excessive regulation” of AI.65 If the EU is serious about its ambitions for technological sovereignty, it must act swiftly to build alternatives; strengthen, not weaken, regulatory oversight; and reduce dependencies on foreign-controlled technologies and digital infrastructures.
What is more, while deregulation might appear economically appealing in these turbulent geopolitical times, abandoning the EU’s meticulous regulatory frameworks in favor of market-friendly agility while trying to emulate the U.S. model risks undermining fundamental rights, democratic accountability, and social justice.66
That is why, to ensure AI models are aligned with broader socioeconomic and political needs, Europe will require robust governance, investment, and open collaboration that recognizes AI innovation as a public good. Public innovation funds could ensure that the next wave of AI tech benefits society, not just shareholders. True scientific innovation needs institutions and people guided by principles that go beyond financial incentives. In this respect, the EU’s regulatory rollback in pursuit of a mythical Silicon Valley–inspired innovation model risks dismantling carefully negotiated regulatory frameworks before they have had a chance to deliver meaningful success.
More than ever, the EU needs an integrated AI policy framework that aligns the bloc’s economic statecraft, innovation, foreign policy, and security imperatives with the need for responsible AI governance.67 Europe’s renewed AI strategy must also strike a balance between innovation and ethical safeguards, ensuring that AI remains a driver of inclusive growth and the public good while upholding the rule of law and democratic accountability.
All in all, the union must remember that regulating AI models is fundamentally about regulating unchecked power. The goal should be to ensure robust oversight and maintain legal clarity about liability and accountability, which are essential to preserve democratic legitimacy and safeguard citizens from potential AI abuses. Through coordinated initiatives that span targeted investments, strategic alliances, and emerging global markets, the EU can reassert its influence and foster homegrown AI innovation while setting global standards for responsible, forward-looking AI governance.
Yet, the nebulous goal of innovation, hailed as an all-encompassing virtue in and of itself, risks eclipsing prudent oversight and safety. Finding a balance between technological ambition and robust regulation will not only bolster Europe’s economic power but also maintain its credentials as a global rule maker in the digital age.
Europe’s Dual-Use AI Conundrum
Another part of the EU’s governance dilemma is rooted in AI’s dual-use nature. As enabling and general-purpose technologies, AI models are inherently dual use: An algorithm that boosts industrial efficiency can be repurposed for the battlefield. This blurring of civilian and military applications further complicates governance and regulation. Notably, the AI Act currently does not extend to military uses of AI, leaving significant oversight gaps. What is more, by rapidly pivoting toward innovation to compete globally, the EU risks inadvertently enabling the militarization of advanced technologies without adequate safeguards.
On the one hand, AI models are often billed as the engine of the Fourth Industrial Revolution: poised to boost productivity, create new industries, and reshape economies, as electricity and computing did in the past. For Europe’s industry and high-end manufacturing, AI systems offer a chance to reinvigorate the industrial base and jump-start economic progress. Given Europe’s sluggish economic growth, such a boost would come not a moment too soon. Yet, the EU’s recent emphasis on AI innovation and increased investment mirrors broader geopolitical trends in which civilian, commercial-sector advancements are increasingly transitioning into military projects and capabilities.
On the other hand, in military affairs, AI systems are seen as potential game changers that can enable swarming drones, autonomous robotics, intelligence-led cyber defenses, and AI-assisted decisionmaking. Major powers are well aware of these possibilities: China’s 2019 defense white paper called for “intelligentized warfare” and made AI technologies central to its military modernization, while the United States has curbed China’s access to advanced AI chips amid fears of bolstering Beijing’s arsenal.68 U.S. tech giants like Google and Microsoft, defense tech companies like Anduril, and start-ups like Scale AI have found the U.S. military to be a willing partner as it attempts to upgrade its technology to match that of the private sector.69
Unlike the controlled technology of nuclear arms, AI’s general-purpose, software-based nature makes it far harder to regulate or contain. Recent and ongoing conflicts underscore how AI is a double-edged sword. For example, observers have dubbed Russia’s war in Ukraine an “AI war lab” in which civilian tech firms and start-ups are deploying AI tools in live combat scenarios.70 Private companies like Palantir have provided AI-driven surveillance and targeting intelligence on the battlefield, blurring the line between Silicon Valley and the military-industrial complex.71
In 2024, OpenAI revised its usage guidelines to lift a previous ban on military uses, allowing its advanced systems to be deployed for weapons development and warfare applications, while Alphabet ended a long-standing ban on the use of AI to develop weapons and surveillance tools. OpenAI is also partnering with Anduril to integrate its advanced models with Anduril’s counterdrone systems to improve the U.S. military’s defenses against unpiloted aerial attacks. This move marks a notable shift, as tech companies that once shunned defense work are now edging into the security realm, effectively exercising a form of corporate sovereignty over the ways AI is used.
A Window of Opportunity to Lead on Military AI
While traditional defense contractors remain dominant, Europe’s most dynamic military innovations are increasingly coming from start-ups and tech firms. SMEs are injecting fresh ideas into a sector once seen as slow moving and resistant to change. The EU’s European Defense Fund is taking note and directing a growing share of its resources toward SME-led projects.72 Venture capital, previously reluctant to back defense tech, is now starting to pour in.
The rise of firms like Helsing and Mistral AI highlights this shift. Helsing, a German defense AI company, has rapidly gained influence thanks to its software that helps power drone targeting systems in Ukraine. Mistral, a French AI model start-up, has joined forces with Helsing to develop next-generation battlefield AI that blends language models with real-time combat decisionmaking. Their partnership signals Europe’s intent to develop homegrown AI capabilities rather than rely on U.S. or Chinese technology.
The EU’s March 2025 white paper on defense readiness emphasized that the future of European defense depends on its capacity to embrace disruptive technologies.73 The document acknowledged that AI, quantum computing, autonomous systems, and secure connectivity are swiftly redefining warfare. As seen in the Russia-Ukraine war, drones have transformed combat dynamics, while swarming drones and AI-powered robotics continue to develop, with autonomous ground vehicles spearheading early operations. These AI-enabled technologies, which are capable of reconnaissance, direct strikes, and logistical support, are reshaping battlefields. The white paper highlighted that Europe has a limited window to lead in military AI and robotics; Europe’s rivals are investing vigorously, and without prompt action, the continent risks falling behind in the technological arms race that will shape future conflicts.
The coming years will test Europe’s capacity to reconcile its technostrategic ambitions with its cautious approach to regulating high-risk AI technologies. Establishing a credible AI-augmented defense posture is costly and politically contentious, particularly across a union of twenty-seven member states. However, failing to do so carries even greater costs. Europe’s adversaries—and even its allies nowadays—are making rapid advances in weaponizing algorithms, regardless of ethical concerns. If Europe wishes to remain secure and relevant, it will require both civilian and military AI technologies: cutting-edge AI systems guided by democratic values and supported by hard power. This means increasing spending, spurring smarter public-private cooperation, nurturing innovators across civilian and military domains, and developing regulations to mitigate risks. In true European fashion, this is a delicate balancing act.
Europe aims to secure its place at the table of military AI. Allowing national security carve-outs in the AI Act was, in part, a concession to this reality and an acknowledgment that EU member states demand the freedom to deploy AI for security and defense purposes, even if doing so conflicts with the EU’s human-centric ideals.74 These carve-outs present a conundrum, however. How can the EU credibly champion an international human-centric AI agenda while its allies weaponize innovation and experiment with AI for military advantage or use powerful AI surveillance tools with minimal oversight? While governments argue that this flexibility is essential to keep Europeans safe, critics are concerned about a slippery slope toward the erosion of fundamental rights.75
Indeed, the dual-use challenge highlights Europe’s broader AI governance dilemma: how to pursue strategic autonomy while upholding robust ethical standards. The dual-use debate is not merely a military concern; it crystallizes the tension between rapid innovation and responsible oversight. If Europe’s ambition is genuine tech sovereignty, it must integrate clear dual-use regulations into the entire life cycle of AI technologies, compute resources, and wider industrial strategies. Effective governance to mitigate risks must match the sophistication of the technologies themselves. It should encompass rigorous assessments of capabilities and potential harms, robust reporting channels, and swift response mechanisms, including stringent frameworks to guard against unauthorized access to powerful AI models.76
Securing Europe’s AI Future
The EU faces a stark choice: balance bold innovation with responsible oversight or risk losing both tech sovereignty and regulatory influence. The EU’s recent deregulatory pivot, which the union has framed as a necessity to remain competitive against AI powerhouses like the United States and China, raises profound concerns about surrendering democratic safeguards to corporate interests. Europe’s carefully crafted digital regulation has rightly emphasized ethics and trust but now faces dilution amid geopolitical pressures and market-driven impulses.
Yet, regulation alone is not Europe’s greatest barrier to AI innovation. Chronic underinvestment, fragmented markets, and dependence on foreign tech infrastructure present far graver threats to AI innovation in the EU. The Draghi report underscored a harsh truth: Europe risks becoming irrelevant if it does not swiftly overcome its structural weaknesses and aggressively nurture its digital and AI ecosystem. But merely cutting red tape, however appealing, is no panacea.
To secure its AI future, the EU should take three concrete actions. First, it should significantly expand its investments. Public funding must actively catalyze private venture capital to prevent Europe’s most promising AI start-ups from emigrating or falling prey to foreign acquisitions. Second, Europe needs a comprehensive digital infrastructure, the EuroStack, to reduce its dependence on foreign cloud providers and semiconductor manufacturers and strengthen its digital resilience and security. Third, rather than undermine accountability, the EU should enhance its regulatory clarity, particularly on dual-use AI applications, and strike a balance between strategic autonomy and robust democratic oversight to avert an AI arms race. This would mean issuing an EU-wide dual-use AI framework that could better define risk tiers and licensing regimes, set common criteria for classifying AI systems for potential military or security applications, and harmonize export-control requirements and end-use screening procedures across member states.
Finally, Europe must embrace a third way of dynamic governance, in which future-proof, proactive regulation and an ambitious industrial policy reinforce rather than contradict each other. This approach calls for fostering European AI champions through dedicated funding, public-private synergies, and a supportive regulatory environment. Navigating today’s turbulent AI era demands more than deregulation. Europe’s strategic future hinges on investment, collaboration, and democratic accountability, the very strengths that once positioned the EU as a global rule maker.
Acknowledgments
The author would like to thank Rosa Balfour, Sinan Ülgen, and Dennis Broeders for providing insightful feedback on earlier drafts of this working paper.
Carnegie Europe is grateful to the Patrick J. McGovern Foundation for its support of this work.
Notes
1“Announcing the Stargate Project,” OpenAI, January 21, 2025, https://openai.com/index/announcing-the-stargate-project/; and Graham Webster et al., “Full Translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017),” DigiChina (blog), August 1, 2017, https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/.
2“AI Act,” European Commission, February 18, 2025, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
3Tambiama Madiega, “Artificial Intelligence Liability Directive,” European Parliamentary Research Service, February 2023, https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf.
4Ramsha Jahangir, “Tracking Recent Statements on the Enforcement of EU Tech Laws,” Tech Policy Press, March 13, 2025, https://techpolicy.press/tracking-recent-statements-on-the-enforcement-of-eu-tech-laws; and Henry Foy and Barbara Moens, “EU Scales Back Tech Rules to Boost AI Investment, Says Digital Chief,” Financial Times, February 14, 2025, https://www.ft.com/content/fde53886-4295-4066-a704-b8cf5f388800.
5Raluca Csernatoni, “How the EU Can Navigate the Geopolitics of AI,” Carnegie Europe, January 30, 2024, https://carnegieendowment.org/europe/strategic-europe/2024/01/how-the-eu-can-navigate-the-geopolitics-of-ai?lang=en.
6Raluca Csernatoni, “Can Democracy Survive the Disruptive Power of AI?,” Carnegie Europe, December 18, 2024, https://carnegieendowment.org/research/2024/12/can-democracy-survive-the-disruptive-power-of-ai?lang=en; and Maria Maggiore, Leïla Miñano, and Harald Schumann, “France Spearheads Member State Campaign to Dilute European AI Regulation,” Investigate Europe, January 22, 2025, https://www.investigate-europe.eu/posts/france-spearheads-member-state-campaign-dilute-european-artificial-intelligence-regulation.
7Laura Caroli, “Talks on the EU AI Act Code of Practice at a Crucial Phase,” Center for Strategic & International Studies, January 24, 2025, https://www.csis.org/analysis/talks-eu-ai-act-code-practice-crucial-phase.
8Pieter Haeck and Giovanni Coi, “Startups Side With Draghi: EU Red Tape Hampers Growth,” Politico, November 21, 2024, https://www.politico.eu/article/eu-tech-scene-denounces-ai-data-rules-bad-for-growth-meta-google-survey-gdpr/.
9Enrico Letta, “Letta Report ‘Much More Than a Market’ (April 2024),” European Commission, April 2024, https://european-research-area.ec.europa.eu/documents/letta-report-much-more-market-april-2024.
10Bram Vranken, “Big Tech Lobbying Is Derailing the AI Act,” Corporate Europe Observatory, November 24, 2023, https://corporateeurope.org/en/2023/11/big-tech-lobbying-derailing-ai-act; Cynthia Kroet, “Industry Flags ‘Serious Concerns’ With Latest Draft of EU AI Code of Practice,” Euronews, March 12, 2025, https://www.euronews.com/next/2025/03/12/industry-flags-serious-concerns-with-latest-draft-of-eu-ai-code-of-practice; and Alexandre Piquard, “France Keeps Up Pressure on EU’s AI Act, Despite Mounting Criticism,” Le Monde, January 27, 2024, https://www.lemonde.fr/en/economy/article/2024/01/27/france-keeps-up-its-pressure-on-the-eu-s-ai-act-despite-mounting-criticism_6471038_19.html.
11“State Lobbyists Roll Back AI Act,” The Good Lobby, January 28, 2025, https://www.thegoodlobby.eu/state-lobbyists-roll-back-ai-act/.
12Maggiore et al., “European AI Regulation.”
13Martin Greenacre, “EU Is ‘Losing the Narrative Battle’ Over AI Act, Says UN Adviser,” Science Business, December 5, 2024, https://sciencebusiness.net/news/ai/eu-losing-narrative-battle-over-ai-act-says-un-adviser.
14“Bias Baked In: How Big Tech Sets Its Own AI Standards,” Corporate Europe Observatory, January 9, 2025, https://corporateeurope.org/en/2025/01/bias-baked.
15Greenacre, “Narrative Battle.”
16Clea Caulcutt, “JD Vance Warns Europe to Go Easy on Tech Regulation in Major AI Speech,” Politico, February 11, 2025, https://www.politico.eu/article/vp-jd-vance-calls-europe-row-back-tech-regulation-ai-action-summit/.
17“The Draghi Report on EU Competitiveness,” European Commission, September 9, 2024, https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en.
18“Draghi Report,” European Commission.
19“Draghi Report,” European Commission.
20“Draghi Report,” European Commission.
21Max von Thun, “3. To Innovate or to Regulate? The False Dichotomy at the Heart of Europe’s Industrial Approach,” AI Now Institute, March 12, 2024, https://ainowinstitute.org/publication/to-innovate-or-to-regulate-the-false-dichotomy.
22David Matthews, “The EU Urgently Needs Technological Autonomy From the US, MEPs Say,” Science Business, March 13, 2025, https://sciencebusiness.net/news/sovereignty/eu-urgently-needs-technological-autonomy-us-meps-say.
23Matthews, “Technological Autonomy.”
24Francesca Bria, Paul Timmers, and Fausto Gernone, “EuroStack – a European Alternative for Digital Sovereignty,” Bertelsmann Stiftung, February 13, 2025, https://doi.org/10.11586/2025006.
25“Open Letter: European Industry Calls for Strong Commitment to Sovereign Digital Infrastructure,” European DIGITAL SME Alliance, March 14, 2025, https://www.digitalsme.eu/digital/uploads/Open-Letter-European-Industry-Calls-for-Strong-Commitment-to-Sovereign-Digital-Infrastructure.pdf.
26“Open Letter,” European DIGITAL SME Alliance.
27“European Commission Presents Its Compass to Boost Europe’s Competitiveness in the Next Five Years,” European Commission, January 30, 2025, https://ec.europa.eu/commission/presscorner/detail/en/ac_25_385.
28“AI Factories,” European Commission, April 10, 2025, https://digital-strategy.ec.europa.eu/en/policies/ai-factories.
29“The European High Performance Computing Joint Undertaking,” European Commission, https://digital-strategy.ec.europa.eu/en/policies/high-performance-computing-joint-undertaking.
30“AI Factories,” European Commission.
31“European AI Office,” European Commission, April 10, 2025, https://digital-strategy.ec.europa.eu/en/policies/ai-office; and Maria Niestadt, “AI Factories,” European Parliamentary Research Service, February 2025, https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/769492/EPRS_BRI(2025)769492_EN.pdf.
32“Draghi Report,” European Commission.
33“AI Data Center Growth: Meeting the Demand,” McKinsey, October 29, 2024, https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand.
34“AI to Drive 165% Increase in Data Center Power Demand by 2030,” Goldman Sachs, February 4, 2025, https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030.
35Iain Martin, “The EU Is Betting $56 Million on Open Source AI,” Forbes, February 2, 2025, https://www.forbes.com/sites/iainmartin/2025/02/02/the-eu-is-betting-56-million-on-open-source-ai/.
36“European AI Office,” European Commission.
37Guntram Wolff et al., “Sovereignty and Digital Interdependence,” European Union Institute for Security Studies, July 1, 2021, https://www.jstor.org/stable/resrep34007.6; and Martin Greenacre, “EU to Invest €50B to ‘Supercharge’ Innovation in Artificial Intelligence,” Science Business, February 13, 2025, https://sciencebusiness.net/news/eu-budget/eu-invest-eu50b-supercharge-innovation-artificial-intelligence.
38Piquard, “France.”
39Jacob Wulff Wold, “Commission Introduces ‘AI Gigafactories’ in Final Competitiveness Compass,” Euractiv, January 19, 2025, https://www.euractiv.com/section/tech/news/commission-introduces-ai-gigafactories-in-final-competitiveness-compass/; and “About CERN,” CERN, https://home.cern/node/5011.
40Paul Smith-Goodson, “The Stargate Project: Trump Touts $500 Billion Bid for AI Dominance,” Forbes, January 30, 2025, https://www.forbes.com/sites/moorinsights/2025/01/30/the-stargate-project-trump-touts-500-billion-bid-for-ai-dominance/.
41Laure de Roucy-Rochegonde and Adrien Buffard, “AI, Data Centers and Energy Demand: Reassessing and Exploring the Trends,” French Institute of International Relations, February 2025, https://www.ifri.org/sites/default/files/2025-02/ifri_buffard-rochegonde_ai_data_centers_energy_2025_2.pdf.
42“Regulation (EU) 2016/679 of the European Parliament and of the Council,” Official Journal of the European Union, May 4, 2016, https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng.
43Nick Wallace and Daniel Castro, “The Impact of the EU’s New Data Protection Regulation on AI,” Center for Data Innovation, March 27, 2018, https://www2.datainnovation.org/2018-impact-gdpr-ai.pdf.
44Rosa Balfour and Sinan Ülgen (editors), “Geopolitics and Economic Statecraft in the European Union,” Carnegie Europe, November 19, 2024, https://carnegieendowment.org/research/2024/11/geopolitics-and-economic-statecraft-in-the-european-union?lang=en.
45“Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive),” European Commission, September 28, 2022, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52022PC0496.
46Madiega, “Liability Directive.”
47Philipp Hacker, “The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future,” Social Science Research Network, November 25, 2022, https://doi.org/10.2139/ssrn.4279796.
48Maria Niestadt and Jasmin Reichert, “The Global Reach of EU’s Vision to Digital Transformation,” European Parliamentary Research Service, January 2024, https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/757632/EPRS_BRI(2024)757632_EN.pdf.
49Kai Zenner, “An AI Liability Regulation Would Complete the EU’s AI Strategy,” Centre for European Policy Studies, February 25, 2025, https://www.ceps.eu/an-ai-liability-regulation-would-complete-the-eus-ai-strategy/.
50Marco Almada and Anca Radu, “The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy,” German Law Journal 25, no. 4 (2004): 646–663, https://doi.org/10.1017/glj.2023.108.
51Tina Teng, “Big Tech and Trump: What Happens When Business and Politics Mix?,” Euronews, March 12, 2025, https://www.euronews.com/business/2025/03/12/big-tech-and-trump-what-happens-when-business-and-politics-mix.
52Raluca Csernatoni, “Charting the Geopolitics and European Governance of Artificial Intelligence,” Carnegie Europe, March 6, 2024, https://carnegieendowment.org/research/2024/03/charting-the-geopolitics-and-european-governance-of-artificial-intelligence?lang=en¢er=europe.
53Mariana Mazzucato, “The Myth of the ‘Meddling’ State,” Public Finance Focus, June 25, 2013, https://www.publicfinancefocus.org/news/2013/06/myth-meddling-state.
54Hayden Field, “Judge Denies Musk’s Attempt to Block OpenAI From Becoming For-Profit Entity,” CNBC, March 5, 2025, https://www.cnbc.com/2025/03/04/judge-denies-musk-attempt-to-block-openai-from-becoming-for-profit-.html.
55Wim Naudé and Nicola Dimitri, “The Race for an Artificial General Intelligence: Implications for Public Policy,” AI & SOCIETY 35, no. 2 (2020): 367–379, https://doi.org/10.1007/s00146-019-00887-x.
56Henry Farrell and Abraham Newman, “Elon Musk Weaponizes the Government,” Lawfare, May 2, 2025, https://www.lawfaremedia.org/article/elon-musk-weaponizes-the-government.
57“Authors Challenge Meta’s Use of Their Books in AI Training,” Digital Watch Observatory (blog), March 11, 2025, https://dig.watch/updates/authors-challenge-metas-use-of-their-books-in-ai-training.
58Virginie Berger, “The AI Copyright Battle: Why OpenAI and Google Are Pushing for Fair Use,” Forbes, March 15, 2025, https://www.forbes.com/sites/virginieberger/2025/03/15/the-ai-copyright-battle-why-openai-and-google-are-pushing-for-fair-use/.
59Liv McMahon, “Mark Zuckerberg’s Meta Donates $1m to Trump Fund,” BBC News, December 12, 2024, https://www.bbc.co.uk/news/articles/c8j9e1x9z2xo.
60Jennifer Elias, “Google Removes Pledge to Not Use AI for Weapons, Surveillance,” CNBC, February 4, 2025, https://www.cnbc.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weapons-surveillance.html.
61Anna Betts, “Ex-Washington Post Editor Marty Baron Rebukes Bezos: ‘Betrayal of Free Expression,’” Guardian, February 28, 2025, https://www.theguardian.com/us-news/2025/feb/28/marty-baron-jeff-bezos-washington-post.
62Joshua Posaner, “EU to Help Ukraine Replace Musk’s Starlink,” Politico, March 2, 2025, https://www.politico.eu/article/eu-to-help-ukraine-replace-musks-starlink/.
63Raluca Csernatoni, “Europe’s Wake-Up Call for Tech Leadership,” December 12, 2024, Carnegie Europe, https://carnegieendowment.org/europe/strategic-europe/2024/12/europes-wake-up-call-for-tech-leadership?lang=en¢er=europe.
64Roberto J. González, “How Big Tech and Silicon Valley Are Transforming the Military-Industrial Complex,” Costs of War, April 17, 2024, https://watson.brown.edu/costsofwar/papers/2024/SiliconValley; and Vera Bergengruen, “How Tech Giants Turned Ukraine Into an AI War Lab,” Time, February 8, 2024, https://time.com/6691662/ai-ukraine-war-palantir/.
65Emma De Ruiter, “US VP Vance Challenges Europe’s ‘Excessive Regulation’ of AI,” Euronews, February 11, 2025, https://www.euronews.com/2025/02/11/jd-vance-challenges-europes-excessive-regulation-of-ai-at-paris-summit.
66Hannah Ruschemeier, “The De-Regulatory Turn of the EU Commission,” Verfassungsblog (blog), February 18, 2025, https://doi.org/10.59704/b61325ec5f72184a.
67Balfour and Ülgen, “Geopolitics.”
68Koichiro Takagi, “New Tech, New Concepts: China’s Plans for AI and Cognitive Warfare,” War on the Rocks, April 13, 2022, https://warontherocks.com/2022/04/new-tech-new-concepts-chinas-plans-for-ai-and-cognitive-warfare/; and “China Says US Plan to Toughen Semiconductor Curb Will Backfire,” Reuters, February 25, 2025, https://www.reuters.com/technology/china-says-us-plan-toughen-semiconductor-curb-will-backfire-2025-02-25/.
69Gerrit De Vynck, “Pentagon Signs AI Deal to Help Commanders Plan Military Maneuvers,” Washington Post, March 5, 2025, https://www.washingtonpost.com/technology/2025/03/05/pentagon-ai-military-scale/.
70Bergengruen, “Tech Giants.”
71Csernatoni, “Military AI.”
72“European Defence Fund: Over €1 Billion to Drive Next-Generation Defence Technologies and Innovation,” European Commission, January 30, 2025, https://defence-industry-space.ec.europa.eu/european-defence-fund-over-eu1-billion-drive-next-generation-defence-technologies-and-innovation-2025-01-30_en.
73“Introducing the White Paper for European Defence and the ReArm Europe Plan- Readiness 2030,” European Commission, March 12, 2025, https://defence-industry-space.ec.europa.eu/eu-defence-industry/introducing-white-paper-european-defence-and-rearm-europe-plan-readiness-2030_en.
74Raluca Csernatoni, “Weaponizing Innovation? Mapping Artificial Intelligence-Enabled Security and Defence in the EU,” Stockholm International Peace Research Institute, July 2023, https://www.sipri.org/publications/2023/eu-non-proliferation-and-disarmament-papers/weaponizing-innovation-mapping-artificial-intelligence-enabled-security-and-defence-eu.
75Nikolett Aszódi, “EU’s AI Act Fails to Set Gold Standard for Human Rights,” Algorithm Watch, April 3, 2024, https://algorithmwatch.org/en/ai-act-fails-to-set-gold-standard-for-human-rights/.
76Bengüsu Özcan, “Double-Edged Tech: Advanced AI & Compute as Dual-Use Technologies,” Centre for Future Generations, January 20, 2025, https://cfg.eu/double-edged-tech/.