
The AI Governance Arms Race: From Summit Pageantry to Progress?

The race to regulate AI has produced a complex web of competing initiatives, including high-profile summits. To develop a coherent and effective AI governance framework, the global community must move from symbolic gestures to enforceable commitments.

Published on October 7, 2024

In a world where artificial intelligence (AI) is swiftly reshaping the way people live, work, and interact, the global race to set the governance agenda for these transformative technologies has intensified into fierce competition. To regulate or not to regulate AI has become a hot geopolitical issue. International and regional institutions, governments, and tech companies are all striving to establish frameworks to manage the development and deployment of AI.

Yet, instead of a cohesive global regulatory approach, what has emerged is a mosaic of national policies, multilateral agreements, high-level and stakeholder-driven summits, declarations, frameworks, and voluntary commitments. This fragmented and competitive landscape often looks more like a form of governance spectacle than a path toward substantive action.

The critical question is whether these efforts will lay the foundation for a comprehensive, practical, and enforceable global regulatory regime or whether the goal is merely to establish symbolic measures that obscure deeper, unresolved issues. Given the cultural divides, differing value judgments, and geopolitical competition, it is uncertain whether such a unified framework is achievable. At the heart of the debate is a fundamental challenge: Can the global community come together to develop a coherent AI governance framework that substantially addresses the ethical, legal, security, and military challenges AI poses? Or is the world headed toward a regulatory arms race in which countries and corporate tech giants vie for dominance by setting conflicting principles and standards that exacerbate inequalities and leave risky AI unchecked?

The Race for Global AI Governance: Who Sets the Rules?

In the absence of a binding international treaty, the global governance of AI has become fragmented, with different regions, organizations, tech giants, and great powers such as the United States and China all developing their own approaches. Because the AI landscape is dense with initiatives and evolving fast, navigating these many forms of AI governance is becoming increasingly challenging.

Major international institutions and initiatives such as the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on AI, the International Organization for Standardization (ISO), the UN High-Level Advisory Body on AI, the UN Educational, Scientific, and Cultural Organization (UNESCO), and the Council of Europe have all created frameworks aimed at guiding AI development responsibly (see box). However, these efforts are primarily voluntary and lack the enforceability needed to ensure compliance across borders. As a result, they serve more as a patchwork of guiding principles than as binding regulations.

Box: Examples of AI Governance Initiatives

Ethics, Norms, and Principles

  • The Principles for the Ethical Use of AI in the UN System
  • The OECD AI Principles
  • The Asilomar AI Principles
  • The UNESCO Recommendation on the Ethics of AI
  • The Institute of Electrical and Electronics Engineers (IEEE) Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems

Laws and Binding Regulations

  • The EU’s AI Act
  • The Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law
  • China’s measures for labeling AI-generated synthetic content

Voluntary Codes of Conduct

  • The Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems
  • The White House’s Voluntary AI Commitments
  • The Code of Conduct for Microsoft Generative AI Services
  • Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems

Frameworks and Guidelines

  • The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI
  • The White House Blueprint for an AI Bill of Rights
  • The U.S. National Institute of Standards and Technology AI Risk Management Framework
  • The OECD Framework for the Classification of AI Systems

Standards and Certifications

  • Standardization Subcommittee 42 of Joint Technical Committee 1 of the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC JTC 1/SC 42) on AI
  • The IEEE P7000 standards series
  • The development of AI standards by the European Committee for Standardization and the European Committee for Electrotechnical Standardization (CEN-CENELEC)
  • The Responsible AI Institute’s Certification Program for AI systems

One of the more ambitious attempts at regulating AI comes from the EU. The union’s AI Act is a bold, one-of-a-kind effort to impose comprehensive rules on the development and use of AI within the EU, positioning the bloc as a leader in AI governance. Yet, this initiative and its implementation across the union face challenges ranging from harmonization among the EU member states to the involvement of stakeholders such as government entities and providers, importers, users, and distributors of AI systems. The act’s external impact will also depend on how other regions respond, particularly AI powerhouses like the United States and China. Without global alignment, the result could be a jumble of regulations that hinders innovation and complicates compliance efforts.

Another promising development is the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, the first-ever legally binding international treaty designed to ensure that AI systems align with human rights and democratic principles. Unlike other initiatives, the convention is legally enforceable, offering a model for how AI governance can be structured globally. The convention marks a significant moment, as it represents one of the first instances in which the United States and the EU have formally aligned on AI regulation.

Yet, it could be argued that the convention amounts to a set of minimum standards that creates few concrete obligations or rights. It remains to be seen whether this initiative will translate into tangible outcomes, because the convention lacks a robust compliance-reporting and enforcement mechanism. Vague implementation guidelines also raise concerns about the adequacy of remedies for human rights violations caused by AI systems.

What is more, it is unclear whether the convention covers the use of AI systems for national security purposes, although states must ensure that such activities respect international law and democratic institutions and processes. This vague language about states’ obligations raises serious concerns about legal certainty, enforceability, and overall effectiveness. There are also worries about whether other countries, such as those in the BRICS+ group, will be interested in signing on or will have sufficient influence in shaping the future of global tech governance. Shifting political dynamics, especially after the 2024 U.S. presidential election, could lead to a sharp change in direction for various signatories of the convention, potentially undermining the progress made so far.

In the past couple of years, the field of AI governance has witnessed further major developments, such as the G7 statement on the Hiroshima AI Process, the White House’s executive order on AI, and the boom in AI summits. Another example is the UNESCO Recommendation on the Ethics of AI. This global standard emphasizes human rights, fairness, and transparency while providing practical policy guidance for translating ethical principles into action. Yet, like many other initiatives, it remains nonbinding, highlighting a recurring challenge in the AI governance landscape: good intentions without teeth.

The multiplication of such efforts raises concerns about how the various bodies involved can best interact with one another and avoid fragmentation. Should each scheme focus on specific issues, geographies, or deliverables, and should these be binding or nonbinding? The risks of overlap and overcrowding are real, and the conveners of these initiatives are increasingly looking to distinguish their work.

Carving out specific niches will be essential to ensure clarity and efficiency. Otherwise, business as usual carries significant risks. The range of perspectives on global AI governance could lead to discussions that feel more symbolic than substantive—almost as if hosting high-level AI events has become a trend in itself, pursued without grappling with the real complexities of governance. There is also a danger that influential players might hijack or dominate the process, shaping the rules in a top-down fashion. A genuinely effective AI governance framework demands inclusive and meaningful participation from all stakeholders, not just the most influential governmental or corporate voices.

The Problem With AI Summit Pageantry

The notion of an AI governance arms race is not merely figurative. It reflects real competition among states, international organizations, and the tech industry to set global standards and dominate governance frameworks in line with national and corporate interests. A symptom of this arms race is AI summit pageantry—high-profile international gatherings at which leaders and stakeholders discuss AI governance but often produce only symbolic declarations and voluntary or nonbinding commitments. These summits risk creating a governance spectacle in which symbolic gestures obscure the pressing need for cohesive, enforceable global rules amid a growing regulatory scramble. While these events draw attention to critical issues, they frequently fail to produce concrete actions or enforceable regulations, serving more as displays of diplomatic engagement than as examples of substantive progress.

AI summits are indeed uniquely positioned to play a catalytic role in the global governance process, but to truly drive progress, they must focus on more than high-level discussions. In a rush to be the first to regulate—or, in some cases, to avoid regulation—countries risk creating a confusing web of summits and initiatives that undermines the goal of coherent global AI governance.

To illustrate, the UK’s 2023 AI Safety Summit, the UN’s 2024 AI for Good Global Summit, South Korea’s 2024 AI Safety Summit, the 2024 G7 summit in Italy, and France’s 2025 AI Action Summit all address international cooperation on AI to varying degrees. The 2023 summit, held at Bletchley Park, was an essential step toward establishing global norms for AI safety. The Bletchley Declaration acknowledged the high risks AI poses, particularly when it comes to alignment with human intent. While the declaration was an important first step, it did not aim to deliver the enforceable regulations necessary to pause frontier AI development and assess the risks more thoroughly. The 2024 summit in Seoul followed in a similar fashion. Despite early concerns about its limited scope, the meeting resulted in sixteen major AI companies signing the voluntary, nonbinding Frontier AI Safety Commitments.

These commitments, which include publishing risk and safety protocols, aim to improve transparency and accountability. Yet, like the UK summit, the Seoul gathering stopped short of delivering a binding agreement to pause high-risk AI development. The critical challenge for the Bletchley and Seoul initiatives has been to establish foundational rules in a landscape where democratic norms and principles are increasingly blurred and technological innovation is rapidly outpacing governance structures.

Looking ahead to France’s 2025 summit, the prospects of stronger regulation appear even dimmer, as the initiative has downplayed safety concerns in favor of broader discussions about AI’s potential for economic growth. With that shift in focus, the meeting risks sidelining critical safety issues in favor of a more business-friendly agenda.

Summits should serve as launchpads for binding international agreements and thus move beyond symbolic declarations and voluntary norms toward concrete, enforceable commitments. This is where the recent boom of AI summits falls short. The proliferation of nonbinding norms in AI governance—often the currency of AI summit pageantry—raises serious questions about the effectiveness of such measures. Voluntary commitments play a crucial role, especially in the early stages of regulating emerging technologies like AI, but they are often too weak to ensure meaningful compliance. Without binding enforcement mechanisms, voluntary norms can create a fragmented environment in which companies pick and choose which guidelines to follow.

This regulatory gap is particularly concerning when it comes to AI’s potential for harm in areas such as military applications and mass surveillance. The lack of enforceable regulations allows governments and corporations to push the boundaries of what AI can do without sufficient oversight. This dynamic risks creating a race to the bottom in AI governance, in which the drive for technological superiority overshadows ethical considerations.

Toward a More Coherent Approach

Despite these challenges, there is an opportunity to move toward a more coordinated and effective approach to AI governance. To avoid the risks of overlap and overcrowding, each body involved in global AI governance efforts should focus on carving out a specific niche based on its unique strengths. AI summits, for instance, are well positioned to address fast-evolving, high-risk technologies and convene the right stakeholders to push for binding international treaties on these issues.

The EU aims for its AI Act to have a “Brussels effect,” to borrow a phrase coined by international trade expert Anu Bradford—that is, to significantly impact global markets and serve as a blueprint for other jurisdictions. Foreign governments often adopt EU laws because they are high quality, address widely accepted problems, and reflect sensible compromises among diverse legal systems.

However, it is not guaranteed that the AI Act will set global norms. While the EU’s first-mover advantage in tech regulation is important, the union is not alone. China has also made major efforts to regulate AI and set standards, with innovations in audit, disclosure, and AI-enabled technology exports that are already globally significant. That is why the EU will need to promote its approach to AI governance through agreements made in international and multilateral organizations like the OECD, the G7, the G20, the Council of Europe, the UN, and many others.

Meanwhile, the Council of Europe can continue its important work on AI and human rights, using its recent framework convention as a model for other regions. This legally binding treaty shows how regional efforts can have a global impact, especially when they set standards that other nations can adopt. Through bodies like UNESCO and initiatives like the Global Digital Compact for the global governance of digital technology and AI, the UN should focus on bridging the divide in AI development. The UN has the global reach and the moral authority to address inequalities in AI access and development, ensuring that the benefits of these technologies are shared equitably and that vulnerable populations are not disproportionately affected by AI risks.

Finally, formal mechanisms for coordination between these institutions would prevent duplication of efforts and ensure that AI governance initiatives reinforce one another rather than compete. A more streamlined global governance system, in which each body focuses on its core strengths, would provide more explicit guidance to governments, corporations, and other stakeholders while ensuring that AI is developed and deployed responsibly.

The race to govern AI should be about more than just setting the first standards or imposing a particular vision of AI governance. Instead, it should create a unified, enforceable framework that prioritizes ethical responsibility, human rights, and global equity. While summits, frameworks, regulations, and declarations have drawn attention to the importance of AI governance, they must be backed by binding commitments to drive real change. As the international community moves forward, the focus must shift from symbolic gestures to concrete actions.

Carnegie Europe is grateful to the Patrick J. McGovern Foundation for its support of this work.

Carnegie Europe does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie Europe, its staff, or its trustees.