Proposal for an International Panel on Artificial Intelligence (AI) Safety (IPAIS): Summary

IPAIS would offer opportunities for collaboration to inform policymakers and the public on issues of AI safety.

By Mustafa Suleyman, Mariano-Florentino (Tino) Cuéllar, Ian Bremmer, Jason Matheny, Philip Zelikow, Eric Schmidt, Dario Amodei
Published on Oct 27, 2023

There is a pressing need for fresh thinking on AI safety, security, and governance in order to reduce the risks of frontier AI technology and expand its benefits for the world. This proposal offers a list of key design principles for a new organization—an International Panel on AI Safety (IPAIS)—inspired by the Intergovernmental Panel on Climate Change (IPCC). This body would focus on assessing and validating the scientific evidence supporting a deep technical understanding of current AI capabilities, their improvement trajectories, and the relevant safety and security risks. With grounding in science, independence from political interference, and an international membership, IPAIS would present an opportunity for robust international collaboration to inform policymakers and the public. If successful, this institution could pave the way for further cooperation.

Design Principles

A new AI safety organization should satisfy at least the following key design principles:

  • Global engagement: Include experts from around the world in order to ensure research is trusted by responsible governments and the private sector globally. Use technical expertise and broad, international participation to advance a public interest mission and a state-of-the-art, actionable scientific consensus on AI safety and security.
  • Science-led and expert-driven: Focus on developing trustworthy, evidence-based analyses of the current and future state of AI safety and security independent of opinion and commercial bias. Assemble world-class talent to pool knowledge from the private sector, academia, the government, and civil society.
  • Focused: Target the most urgent AI governance challenge—safety and security—while leaving other research and policy issues to different fora.
  • Impartial: Respect and protect corporate intellectual property in order to become an organization with which all major actors feel comfortable sharing information at a high level of detail.
  • Minimizing entanglement with politics: Avoid acquiring regulatory or policymaking authority, but develop respect and trust to inform national decision-making and international coordination on technical matters that might be taken up by the policymaking process.
  • Internationally recognized benchmarks: Be capable of refining existing AI information-gathering paradigms, informing key policy dilemmas, and advancing new internationally recognized benchmarks for understanding and monitoring progress and developments in AI.
  • Independent: Maintain access to a stable funding base that allows for complete analytic independence.

IPAIS’s Mission

IPAIS would fill the need for an objective, in-depth, expert-led understanding of where AI capabilities stand today and where they are headed. It would regularly and independently evaluate the state of AI, its risks and potential impacts, and estimated timelines for technological milestones. It would keep tabs on both technical and policy solutions to alleviate risks and enhance outcomes. Initially, IPAIS would be a purely fact-finding exercise giving comprehensive clarity on the state of AI and its trajectory, uses, and so on. It would focus, as much as possible, on objective, commonly agreed metrics and indicators—and where they are absent, it would help produce them.

IPAIS would complement a larger ecosystem of AI cooperation activity, including a network of safety institutes that perform research directly, the Frontier Model Forum, a host of voluntary measures, and various national-level initiatives. IPAIS would augment these activities by disseminating validated research that can build consensus on the most important risks to mitigate so humanity can realize the benefits of AI, identifying key questions that merit further research, and building a community of scientists and technical experts who can collaborate and learn from one another.

Key Issues for Further Discussion

Further discussion can help address the goals of procuring sustainable, long-term funding for financial flexibility; achieving broad international buy-in; ensuring the governance structure serves the organization’s purposes; and escaping politicization and controversy by developing measures to buttress the organization’s credibility.

About the Authors

Mustafa Suleyman

Mariano-Florentino (Tino) Cuéllar

President, Carnegie Endowment for International Peace

Mariano-Florentino (Tino) Cuéllar is the tenth president of the Carnegie Endowment for International Peace. A former justice of the Supreme Court of California, he has served three U.S. presidential administrations at the White House and in federal agencies, and was the Stanley Morrison Professor at Stanford University, where he held appointments in law, political science, and international affairs and led the university’s Freeman Spogli Institute for International Studies.

Ian Bremmer

Jason Matheny

Philip Zelikow

Eric Schmidt

Eric Schmidt is the chairman of the National Security Commission on Artificial Intelligence. Previously, he served as the CEO of Google from 2001 to 2011.

Dario Amodei


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
