Why We Need a Global AI Compact

This essay offers three reasons why the world needs a global AI compact, outlines what it could achieve, and describes the role of different stakeholders in the process.

By Amlan Mohanty
Published on Mar 1, 2024

This essay is part of a series that highlights the main takeaways from discussions that took place at Carnegie India’s eighth Global Technology Summit, co-hosted with the Ministry of External Affairs, Government of India.


Artificial intelligence, or AI, has become a truly global technology. Its adoption in healthcare, finance, and transportation is reshaping societies worldwide, while research collaborations around the world continue to drive fundamental technical breakthroughs.

At the same time, the rapid acceleration of AI has raised important geopolitical issues around digital sovereignty, economic competitiveness, and sustainable development, while the complex ethical and regulatory challenges posed by AI demand a calibrated global response.

For these reasons, world leaders are determined to establish a global AI compact through discussions at forums such as the G7, the G20, the AI Safety Summit, the Global Partnership on Artificial Intelligence (GPAI), and the United Nations. But after many rounds of dialogue at the Global Technology Summit (GTS), held in December 2023, the question that many participants were left with was: “What exactly will a global AI compact achieve?”

This essay offers three reasons why the world needs a global AI compact, outlines what it could achieve, and describes the role of different stakeholders in the process.

Why Do We Need a Global AI Compact?

To Ensure That AI Benefits All of Humanity

A key insight from participants at the GTS was that developing countries are intent on maximizing the AI opportunity, while many developed countries are focused on managing risk. Representatives from the Global South expressed concern that their specific needs and aspirations were being overlooked, which could exacerbate the digital divide. Many of them expect AI to boost socioeconomic development in their regions, but they worry that a lack of access to compute, stringent export controls, and licensing requirements could hamper these efforts. Additionally, current debates on AI governance are not always attuned to the cultural context of the Global South, and prescriptive rules imported from the West could stifle entrepreneurial spirit in these regions.

Therefore, a key outcome of any proposed global AI compact should be that all individuals and communities receive an equal chance to participate in the conversation. Doing so will ensure that the benefits that stem from these powerful technologies are evenly distributed around the world. 

To Decide Who Controls AI Resources 

Access to computing power, commonly referred to as “compute,” determines who reaps the rewards of AI. Currently, the factors of production for advanced AI systems—graphics processing units, skilled labor, and hyperscale cloud services—are concentrated in the hands of a few corporations, most of which are based in the West.

If compute is indeed like electricity, a critical resource that is essential to the advancement of AI, a global AI compact is necessary to acknowledge the risks of concentration and to diffuse control over such resources. A combination of technical innovations, market interventions, and developmental aid may be useful in this regard. The U.S.-India Global Digital Development Partnership, which seeks to promote the adoption of digital public infrastructure in developing countries, could be used as a template for this effort.

To Establish Principles for AI Regulation

In the absence of a global AI compact, we are seeing the emergence of divergent regulatory approaches—for example, the United States’ market-based model, the European Union’s rules-based approach, China’s state-led approach, and India’s “hybrid” model, among others. While some variation is expected on issues such as privacy and copyright, too many deviations at the national level are likely to create regulatory fragmentation and affect global businesses. 

Therefore, arriving at a shared understanding of the risks involved—both near-term and long-term risks to humanity—is necessary. A global AI compact would help identify these risks, frame regulatory principles, and recommend effective oversight and enforcement mechanisms.

What Will It Achieve?

Returning to the question at hand, a global AI compact would achieve five outcomes.

  1. Common standards: A global AI compact would reduce fragmentation by promoting interoperability through common standards. For example, a standard definition for “foundation models” and appropriate safety benchmarks to regulate the deployment of these models would benefit national regulators as they translate global principles into domestic policy. A global compact would also establish a neutral international body to develop such standards and update them based on regular evaluations.
  2. Ethical principles: Responsible AI norms ensure that AI systems are designed, deployed, and operated at an organizational level with certain ethical considerations in mind. A global AI compact would ensure that fairness, accountability, transparency, and other key principles are reflected in responsible AI frameworks around the world, irrespective of the political and social values of a particular government or institution.
  3. Research and innovation: A global AI compact would incentivize open access to research and foster a collaborative ecosystem. This will, in turn, enable industry, academia, and startups from around the world to invest in innovation. Although some countries are likely to pursue sovereign AI goals, a global AI compact would facilitate cross-border collaboration, knowledge sharing, and mentorship.
  4. Equitable access: A global AI compact would encourage nations to share resources and develop solutions to decentralize AI infrastructure. International cooperation would also resolve supply chain constraints, promote the movement of workforces across borders, and democratize access to critical resources.
  5. Workforce development: A global AI compact would also support international efforts to train the workforce in the skills required to participate in an AI-driven economy.

The Role of Global Institutions

Developing a global AI compact is a multistage process that requires the involvement of multiple stakeholders. Identifying the role of each of these stakeholders is an important first step.

For example, the G20, with the African Union as a permanent member and Brazil as its sitting president, plays a vital role in representing the interests of the Global South. On the other hand, the GPAI can help in the harmonization of data governance and responsible AI frameworks.

To make the process more inclusive, representatives from technical bodies such as the Internet Engineering Task Force, the Institute of Electrical and Electronics Engineers, and the International Organization for Standardization should be involved. Additionally, the World Intellectual Property Organization can offer advice on standards and regulations.

Over time, specialized forums may be required to discuss specific issues such as military applications of AI, trust and safety, or trade, labor, and antitrust issues. With a clear mandate for each institution, a global AI compact will be closer to reality and much more likely to succeed.

Amlan Mohanty
Fellow, Technology and Society Program

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
