
Proposal for an International Panel on Artificial Intelligence (AI) Safety (IPAIS): Summary

IPAIS would offer opportunities for collaboration to inform policymakers and the public on issues of AI safety.

by Mustafa Suleyman, Mariano-Florentino (Tino) Cuéllar, Ian Bremmer, Jason Matheny, Philip Zelikow, Eric Schmidt, and Dario Amodei
Published on October 27, 2023

There is a pressing need for fresh thinking on AI safety, security, and governance in order to reduce the risks of frontier AI technology and expand its benefits for the world. This proposal offers a list of key design principles for a new organization—an International Panel on AI Safety (IPAIS)—inspired by the Intergovernmental Panel on Climate Change (IPCC). This body would focus on assessing and validating the scientific evidence supporting a deep technical understanding of current AI capabilities, their improvement trajectories, and the relevant safety and security risks. With grounding in science, independence from political interference, and an international membership, IPAIS would present an opportunity for robust international collaboration to inform policymakers and the public. If successful, this institution could pave the way for further cooperation.

Design Principles

A new AI safety organization should satisfy at least the following key design principles:

  • Global engagement: Include experts from around the world in order to ensure research is trusted by responsible governments and the private sector globally. Use technical expertise and broad, international participation to advance a public interest mission and a state-of-the-art, actionable scientific consensus on AI safety and security.
  • Science-led and expert-driven: Focus on developing trustworthy, evidence-based analyses of the current and future state of AI safety and security independent of opinion and commercial bias. Assemble world-class talent to pool knowledge from the private sector, academia, the government, and civil society.
  • Focused: Target the most urgent AI governance challenge—safety and security—while leaving other research and policy issues to different fora.
  • Impartial: Respect and protect corporate intellectual property in order to become an organization with which all major actors feel comfortable sharing information at a high level of detail.
  • Minimizing entanglement with politics: Avoid acquiring regulatory or policymaking authority, but develop respect and trust to inform national decision-making and international coordination on technical matters that might be taken up by the policymaking process.
  • Internationally recognized benchmarks: Be capable of refining existing AI information-gathering paradigms, informing key policy dilemmas, and advancing new internationally recognized benchmarks for understanding and monitoring progress and developments in AI.
  • Independent: Maintain access to a stable funding base that allows for complete analytic independence.

IPAIS’s Mission

IPAIS would fill the need for an objective, in-depth, expert-led understanding of where AI capabilities stand today and where they are headed. It would regularly and independently evaluate the state of AI, its risks and potential impacts, and estimated timelines for technological milestones. It would keep tabs on both technical and policy solutions to alleviate risks and enhance outcomes. Initially, IPAIS would be a purely fact-finding exercise, providing comprehensive clarity on the state of AI, its trajectory, and its uses. It would focus, as much as possible, on objective, commonly agreed metrics and indicators—and where they are absent, it would help produce them.

IPAIS would complement a larger ecosystem of AI cooperation activity, including a network of safety institutes that perform research directly, the Frontier Model Forum, a host of voluntary measures, and various national-level initiatives. IPAIS would augment these activities by disseminating validated research that can build consensus about the most important risks to be mitigated so humanity can realize the benefits of AI, identifying key questions that merit further research, and building a community of scientists and technical experts who can collaborate and learn from one another.

Key Issues for Further Discussion

Further discussion can help address four goals: securing sustainable, long-term funding for financial flexibility; achieving broad international buy-in; ensuring the governance structure serves the organization’s purposes; and avoiding politicization and controversy by developing measures to buttress the organization’s credibility.