Commentary
Carnegie India

The Importance of AI Safety Institutes

This essay traces the evolution of AI safety institutes around the world, explores different national approaches, and examines the need for an AI safety institute in India.

By Amlan Mohanty and Tejas Bharadwaj
Published on Jun 28, 2024

Introduction

At the AI Seoul Summit held in May 2024, over twenty countries affirmed the role of artificial intelligence (AI) safety institutes to “enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security.”

Countries at the forefront of AI development, including the United States, the United Kingdom, South Korea, Japan, and Singapore, along with the European Union, signed the Seoul Declaration. This has laid the groundwork for a more collaborative and multilateral approach to AI safety through established institutions.

India is a party to the Seoul ministerial statement, suggesting there is political appetite for a national AI safety institute that could contribute to future engagements on AI safety.

In this essay, we trace the evolution of AI safety institutes around the world, explore different national approaches, and examine the need for an AI safety institute in India.

The Bletchley Effect

The launch of ChatGPT in November 2022 spurred policymakers to address the emerging risks of AI. These deliberations have resulted in the adoption of global principles for the safe development and use of AI at the G7, G20, GPAI, and the UN.

Meanwhile, the United Kingdom is taking the lead in identifying risks posed by “frontier models,” defined as “highly capable general purpose AI models that can perform a wide variety of tasks and can match or exceed the capabilities of current advanced models.” The inaugural AI Safety Summit, hosted by the United Kingdom at Bletchley Park in November 2023, helped develop a shared understanding of the risks of frontier models and the need for global norms on testing of AI models.

Until then, the norm had been for companies to brief government officials on the potential risks of frontier models. At Bletchley, participating companies committed to independent testing of their latest AI models, a significant step toward transparency.

The concept of an “AI Safety Institute” also emerged from this process. The UK launched the AI Safety Institute to coordinate research and build its internal capabilities for testing frontier models. It was soon followed by AI safety institutes in the United States, Japan, Singapore, Canada, and the EU, in a process now being called the “Bletchley effect.”

The concept of AI safety institutes gained momentum at the Seoul Summit, jointly organized by the UK and South Korea in May. The Seoul Declaration not only endorsed the establishment of AI safety institutes in different countries but also proposed the creation of an international network of such institutes for greater multilateral collaboration on AI safety.

A Global Network for AI Safety

The “international network of AI Safety Institutes” is based on a statement of intent signed by ten countries at the Seoul Summit—Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, the United States, and the UK—along with the EU. The purpose of this global network is to enable governments to develop best practices, share resources for testing, exchange findings, and collaborate on monitoring specific AI harms and incidents.

AI safety institutes are usually publicly funded and state-backed. They offer governments technical expertise and resources to understand and mitigate the risks of AI. They also help policymakers formulate risk-based strategies and evaluate the safety of AI models (the UK’s AI Safety Institute has already tested five AI models).

National approaches may differ in some respects. For example, the AI Safety Institute set up by the United States is established under the National Institute of Standards and Technology (NIST), a standard-setting body, while Singapore has designated the existing Digital Trust Centre at Nanyang Technological University as its AI safety institute. Enforcement powers also differ: the EU AI Office, which acts as the EU’s AI safety institute, can exercise legal powers under the Artificial Intelligence Act, whereas the UK’s AI Safety Institute has no regulatory powers.

Ensuring that the frameworks developed by these institutes are “interoperable,” despite their structural and functional differences, will be a key measure of the global network’s success.

An AI Safety Institute for India?

Ensuring AI safety is a policy imperative for India. It features prominently as the ‘seventh pillar’ of the AI mission, and anchors the current debate on AI regulation. With the development of indigenous tools and frameworks to promote AI safety already in the works, a national AI safety institute could provide the required expertise and institutional backing.

The private sector will likely support the establishment of a national AI safety institute, since building national capacity in AI safety and testing could help counteract burdensome regulation and promote interoperability across international markets.

Moreover, India has actively engaged in the global discourse on AI safety. It is a signatory to the Bletchley Declaration and a party to the Seoul ministerial statement, both of which endorse the idea of AI safety institutes. Establishing a national AI safety institute of its own would ensure India’s continued involvement in these conversations, including at the next AI safety summit in France.

Now is the right time for the Indian government to bring together key stakeholders and convene a strategic dialogue on the need for a national AI safety institute.

Authors

Amlan Mohanty
Fellow, Technology and Society Program
Tejas Bharadwaj
Senior Research Analyst, Technology and Society Program

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
