

Proposal for an International Panel on Artificial Intelligence (AI) Safety (IPAIS): Summary

IPAIS would offer opportunities for collaboration to inform policymakers and the public on issues of AI safety.

By Mustafa Suleyman, Mariano-Florentino (Tino) Cuéllar, Ian Bremmer, Jason Matheny, Philip Zelikow, Eric Schmidt, Dario Amodei
Published on Oct 27, 2023

There is a pressing need for fresh thinking on AI safety, security, and governance in order to reduce the risks of frontier AI technology and expand its benefits for the world. This proposal offers a list of key design principles for a new organization—an International Panel on AI Safety (IPAIS)—inspired by the Intergovernmental Panel on Climate Change (IPCC). This body would focus on assessing and validating the scientific evidence supporting a deep technical understanding of current AI capabilities, their improvement trajectories, and the relevant safety and security risks. With grounding in science, independence from political interference, and an international membership, IPAIS would present an opportunity for robust international collaboration to inform policymakers and the public. If successful, this institution could pave the way for further cooperation.

Design Principles

A new AI safety organization should satisfy at least the following key design principles:

  • Global engagement: Include experts from around the world in order to ensure research is trusted by responsible governments and the private sector globally. Use technical expertise and broad, international participation to advance a public interest mission and a state-of-the-art, actionable scientific consensus on AI safety and security.
  • Science-led and expert-driven: Focus on developing trustworthy, evidence-based analyses of the current and future state of AI safety and security, free of political opinion and commercial bias. Assemble world-class talent to pool knowledge from the private sector, academia, government, and civil society.
  • Focused: Target the most urgent AI governance challenge—safety and security—while leaving other research and policy issues to different fora.
  • Impartial: Respect and protect corporate intellectual property in order to become an organization with which all major actors feel comfortable sharing information at a high level of detail.
  • Minimizing entanglement with politics: Avoid acquiring regulatory or policymaking authority, but develop respect and trust to inform national decision-making and international coordination on technical matters that might be taken up by the policymaking process.
  • Internationally recognized benchmarks: Be capable of refining existing AI information-gathering paradigms, informing key policy dilemmas, and advancing new internationally recognized benchmarks for understanding and monitoring progress and developments in AI.
  • Independent: Maintain access to a stable funding base that allows for complete analytic independence.

IPAIS’s Mission

IPAIS would fill the need for an objective, in-depth, expert-led understanding of where AI capabilities stand today and where they are headed. It would regularly and independently evaluate the state of AI, its risks and potential impacts, and estimated timelines for technological milestones. It would keep tabs on both technical and policy solutions to alleviate risks and enhance outcomes. Initially, IPAIS would be a purely fact-finding exercise, providing comprehensive clarity on the state of AI, its trajectory, and its uses. It would focus, as much as possible, on objective, commonly agreed metrics and indicators—and where these are absent, it would help produce them.

IPAIS would complement a larger ecosystem of AI cooperation activity, including a network of safety institutes performing research directly, the Frontier Model Forum, a host of voluntary measures, and various national-level initiatives. IPAIS would augment these activities by disseminating validated research that can build consensus about the most important risks to be mitigated so humanity can realize the benefits of AI, identifying key questions that merit further research, and building a community of scientists and technical experts who can collaborate and learn from one another.

Key Issues for Further Discussion

Further discussion can help address the goals of procuring sustainable, long-term funding for financial flexibility; achieving broad international buy-in; ensuring the governance structure serves the organization’s purposes; and escaping politicization and controversy by developing measures to buttress the organization’s credibility.

About the Authors

Mustafa Suleyman

Mariano-Florentino (Tino) Cuéllar

President, Carnegie Endowment for International Peace

Mariano-Florentino (Tino) Cuéllar is the tenth president of the Carnegie Endowment for International Peace. A former justice of the Supreme Court of California, he has served three U.S. presidential administrations at the White House and in federal agencies, and was the Stanley Morrison Professor at Stanford University, where he held appointments in law, political science, and international affairs and led the university’s Freeman Spogli Institute for International Studies.

Ian Bremmer

Jason Matheny

Philip Zelikow

Eric Schmidt

Eric Schmidt is the chairman of the National Security Commission on Artificial Intelligence. Previously, he served as the CEO of Google from 2001 to 2011.

Dario Amodei


Carnegie India does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
