Geopolitical turmoil and great-power rivalry are threatening the EU’s political-economic model. Carnegie Europe’s recent compilation “Geopolitics and Economic Statecraft in the European Union” explores the dilemmas facing the union as it pursues economic security while defending the rules-based order.
In the second in a series of Q&As examining the challenges identified in the compilation, co-editor Sinan Ülgen sat down with Raluca Csernatoni, author of the chapter “The EU’s Vision for Technological Leadership,” to unpack the EU’s ambitions in the digital domain.
Sinan Ülgen: Henna Virkkunen is the European Commission’s new executive vice president for tech sovereignty, security, and democracy. Could her appointment mark a decisive step toward EU tech sovereignty?
Raluca Csernatoni: Virkkunen’s new role signals the EU’s serious commitment to tech sovereignty, yet the challenges remain profound. For the first time, a European Commission executive vice president’s portfolio explicitly links technology with security and democracy, indicating a recognition that control over Europe’s digital and technological destiny is now strategic. This move is a symbolic milestone: It shows the EU wants to set its own tech standards, nurture homegrown innovation, and reduce reliance on U.S. and Chinese systems in critical domains, from semiconductors to cloud infrastructure.
Yet, a single personnel change, however high profile, is not a panacea. True tech sovereignty will depend on concrete action. Virkkunen inherits grand initiatives, like the European Chips Act and secure 5G rollouts, but she must inject momentum. Her biggest challenge is to turn the buzz phrase “tech sovereignty” into reality by uniting fragmented efforts across the member states. That means persuading national capitals to pool resources for important EU-wide projects and speak with one voice on issues like AI governance and supply-chain security.
It also means balancing Europe’s instinct for regulation with the need to spur competitiveness. Unlike former European commissioner Thierry Breton, whose legislative activism unleashed a regulatory tsunami in the name of tech sovereignty, Virkkunen has warned against overregulation stifling innovation. Her track record also suggests she may push for a more flexible, industry-friendly approach to building Europe’s tech capacities.
In essence, Virkkunen’s appointment is a decisive step in intent. It highlights a shift in Europe’s tech strategy from high-flying rhetoric to on-the-ground implementation. If she can leverage her mandate to address the EU’s systemic ailments—by harmonizing policies, championing strategic investment, bolstering existing regulatory frameworks, and avoiding bureaucratic overreach—the EU will move closer to genuine tech autonomy. But success will hinge on execution, industry support, and political will far beyond one official’s influence.
Sinan Ülgen: As the EU becomes more active in creating a policy framework for tech sovereignty, what tension do you foresee between the EU institutions and the member states?
Raluca Csernatoni: Brussels’s drive for tech sovereignty is bound to collide with national interests. As the EU asserts a more significant role in tech and digital policy, member states are wary of ceding too much control. Technology straddles various competencies: The commission sets common rules for the single market, but investments in innovation and industry still come largely from national coffers. This balance makes friction inevitable.
One bone of contention is industrial strategy. EU officials and experts may urge joint European projects and champions, but governments often prefer to back their own tech firms. France and Germany, for instance, support EU-level initiatives in principle but are protective of national innovation ecosystems and homegrown giants. Smaller countries, meanwhile, worry that a one-size-fits-all plan from Brussels could favor the big players and sideline their needs.
The fragmentation is real. In the European quantum market, for example, EU member states have crafted their own strategies, prompting calls, which Virkkunen has echoed, for a comprehensive European quantum plan followed by a quantum act to coordinate efforts and pool resources.
Regulation is another source of tension. Brussels has rolled out sweeping digital laws on issues from data privacy to AI, sometimes faster than national authorities and industries can adapt. Enforcement of these laws falls to the member states, which vary in their levels of enthusiasm and capacity, leading to uneven outcomes. Some capitals may be wary of what they see as overreach by the commission: Warsaw and others have been reluctant to apply EU content rules, for example, while others complain that Brussels is not doing enough on, say, semiconductor subsidies.
Geopolitics also complicates matters. European unity on tech policy can falter when security and alliances are at stake. Certain Eastern or Baltic EU members prioritize alignment with U.S. standards, still seeing Washington as a vital security partner, whereas others prefer to build more strategic autonomy, even if it means diverging from American interests. The EU’s supranational industrial and technological push must navigate these tensions carefully. In the end, Brussels can set a bold direction on tech sovereignty, but getting twenty-seven capitals to march in step will require a delicate compromise.
Sinan Ülgen: How would you assess the ten critical technology areas identified by the commission in October 2023? Is the underlying methodology robust and future-proof?
Raluca Csernatoni: The EU’s list of ten critical technologies is a broad attempt to pinpoint where Europe’s strategic vulnerabilities and opportunities lie. These critical technology areas, vital for the bloc’s economic security, cover the usual suspects: cutting-edge digital fields like advanced semiconductors and artificial intelligence (AI), frontier domains like quantum computing, and biotechnologies. These domains were chosen using sensible criteria that focused on the technologies’ disruptive potential, dual civil-military uses, critical dependencies on foreign players, and risks of misuse or rights abuses. Brussels quickly highlighted four of the ten—semiconductors, AI, quantum, and biotech—as the most urgent, given their high likelihood of technology leakage or security threats.
By and large, the selection methodology is sound. Few would dispute that chips and AI are critical, or that quantum and biotech could disrupt economies and militaries alike. The inclusion of diverse sectors shows that policymakers aimed for comprehensive coverage and recognized the potential fusion between these emerging technological domains, for example with advanced chips fueling the AI revolution. If anything, the list errs on the side of caution by casting a wide net, but that is prudent when dealing with an unpredictable and potentially disruptive tech future.
Zooming out, however, no static list can be truly future-proof in a fast-moving tech landscape. The underlying methodology for any laundry list of emerging and disruptive technologies (EDTs) with dual-use potential will need to evolve as innovation does. New breakthroughs in quantum computing or unforeseen technologies could rise to strategic prominence within a few years. For instance, today’s niche research in areas like advanced materials for batteries or neurotechnology might become tomorrow’s key strategic domains. Any approach will stay robust only if it remains iterative, and regular reviews and updates—informed by academia, tech experts, and industry—are vital to keep the critical list current.
Overall, identifying priority tech areas is a useful exercise to elevate their security profile and guide EU strategy and investment. The framework is robust for now, but its real test lies in how agile and forward-looking Europe can be as the technological sands continue to shift.
Sinan Ülgen: Do you think the AI Act will become a lasting area of friction in transatlantic relations? What could a possible common understanding between the United States and the EU on AI regulation look like?
Raluca Csernatoni: These are very important questions. The EU’s AI Act, the world’s first AI law, is not only a point of pride for Europe but also a point of contention. Washington has argued that Brussels’s zeal to regulate will hamstring innovation and burden U.S. tech firms. EU officials, for their part, see such complaints as exaggerated by Silicon Valley lobbyists. The friction was clear during the February 2025 AI Action Summit in Paris, when the United States publicly urged Europe to ease up on AI rules.
In truth, the act is more measured than its critics allege. It mainly targets high-risk AI systems and mandates transparency and safety checks, while leaving most applications untouched. The aims are to foster trust and to prevent abuses like mass surveillance or biased algorithms, not to smother everyday innovation. What is more, EU lawmakers have already carved out national security uses as a concession to defense concerns. Hence, it could be argued that far from a tech-stifling edict, the EU’s strategy strives to balance innovation with the union’s values.
Still, a philosophical gap remains: The EU favors precaution, the United States a light touch. Could this transatlantic divide become lasting? Not if both sides find common ground, though that looks less likely under the current administration of U.S. President Donald Trump. A transatlantic understanding will hinge on shared principles and trust rather than identical laws, which will also be harder to achieve in the present climate. Both sides agree that AI should be safe and respect rights. There may be continued convergence on baseline norms, such as ruling out dystopian uses and ensuring human oversight of AI, even if legal frameworks differ. The risks, however, are a regulatory race to the bottom and a lack of alignment between allies.
That said, Europe is already tweaking its stance. Former European Central Bank president Mario Draghi’s 2024 report on competitiveness urged the EU to cut red tape, and officials have signaled their openness to easing certain rules. Indeed, a dose of pragmatism could make the AI Act less divisive. Yet, if the EU waters down the law too far, the union risks eroding its credibility as a normative power and standard setter.
Europe’s recent shift toward deregulation, exemplified by its decision to drop a planned directive on civil liability for AI-related damages, reveals an attempt to placate industry stakeholders and the U.S. administration’s market-oriented stance. This pivot carries significant risks, however, as it could undermine democratic oversight, erode public trust, and dilute the EU’s regulatory authority globally.
Sinan Ülgen: I’m interested in how you think the regulation of one technology could impact another at a time of transatlantic tension. Could, for instance, a transatlantic rift over AI regulation affect the development of EU-origin quantum industries?
Raluca Csernatoni: Certainly, a transatlantic falling-out over AI would not stop at AI; it would spill over into the entire tech relationship. Quantum technologies are a prime example. Europe’s quantum industry is nascent and still relies on global collaboration. U.S. firms lead in quantum computing, and European projects often depend on their hardware and expertise. If an EU-U.S. rift over AI were to erode trust, these partnerships could also falter. For instance, U.S. investors might think twice about backing European quantum start-ups, and joint research could slow as knowledge sharing dries up.
Strategically, trouble in one domain can trigger decoupling in others. The EU’s push for tech sovereignty is intensifying, and a clash over AI could harden the union’s resolve to seek more autonomy in quantum, too. EU officials are already considering a quantum act to galvanize European efforts, and a transatlantic rift would only add to the sense of urgency. The quest for strategic autonomy in EDTs would extend to quantum encryption, sensing, and computing—technologies with big military implications.
Yet, going it alone has downsides for both the EU and the United States. Quantum research and development (R&D) thrives on global talent and supply chains, while isolation from U.S. collaboration would cut Europe off from some cutting-edge advances. It could also lead to duplicated efforts—that is, two parallel quantum tech stacks. Europe’s quantum start-ups might also find themselves without access to U.S. know-how and markets when they need them the most.
Finally, quantum and AI intersect. If Europe and the United States stop cooperating, both may miss out on breakthroughs at that intersection. To avoid isolation, Europe should pursue a balanced strategy: boost internal investments and capabilities through initiatives like quantum hubs and targeted venture funding while safeguarding open channels via bilateral agreements on R&D, talent exchange, and regulatory alignment. Such pragmatic diplomacy, by complementing Europe’s quantum autonomy with selective international collaboration, would help insulate European quantum ambitions from collateral damage in other tech disputes.