Technology and International Affairs
Artificial Intelligence

As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or undermine) international cyber stability, optimize (or bias) cloud-based services, or guide the targeting of biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges posed by AI. By confronting both the short-term (2-5 years) and medium-term (5-10 years) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.

Award

Award for Scholarship on AI and Liability

Carnegie is awarding $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We are accepting submissions that have been published, or are candidates to be published, in a peer-reviewed law journal. Submissions will be accepted until August 1, 2024, and will be reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar.

commentary
Carnegie Award for Scholarship on AI and Liability: Winner and Finalists

Together, these papers underscore the importance of legal liability to the governance of this globally transformative technology.

· November 13, 2024
in the media
How the Global Community Can Come Together to Understand AI’s Risks

Meeting the moment on AI will require a multifaceted approach involving stakeholders from many sectors and disciplines. They will need a clear vision for larger, slower-moving, policy-centric global projects that work alongside more focused and agile scientific efforts to meet pressing needs.

· November 5, 2024
OECD
in the media
The Science of AI Is Too Important to Be Left to the Scientists

The lack of scientific consensus on AI’s risks and benefits has become a major stumbling block for regulation—not just at the state and national level, but internationally as well. 

· October 21, 2024
Foreign Policy
commentary
How the UK Should Engage China at AI’s Frontier

During his visit, Foreign Secretary Lammy should engage Beijing on AI safety, one of the few open windows for UK-China cooperation.

· October 18, 2024
commentary
A Heated California Debate Offers Lessons for AI Safety Governance

The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.

· October 8, 2024
article
Transnational AI and Corporate Imperialism

People across the world are grappling with a few global technology companies' domination of their public spheres and, increasingly, of other spheres of social, economic, and political engagement.

  • Chinmayi Arun
· October 8, 2024
article
The AI Governance Arms Race: From Summit Pageantry to Progress?

The race to regulate AI has produced a complex web of competing initiatives, including high-profile summits. To develop a coherent and effective AI governance framework, the global community must move from symbolic gestures to enforceable commitments.

· October 7, 2024
in the media
Chinese Whispers: Will AI Be the Next Arms Race?

China has a rich landscape of homegrown AI products, with progress led by tech giants like the search engine Baidu and TikTok's owner, ByteDance. Already, there is a bifurcation between the AI worlds of China and the West.

· September 30, 2024
Best of the Spectator
paper
If-Then Commitments for AI Risk Reduction

If-then commitments are an emerging framework for preparing for risks from AI without unnecessarily slowing the development of new technology. The more attention and interest these commitments attract, the faster a mature framework can emerge.

· September 13, 2024
article
Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective

This article explores the global debate on lethal autonomous weapons systems (LAWS), highlighting the convergences, complexities, and differences within and beyond the UN Group of Governmental Experts (GGE) on LAWS. It further examines India's key positions at the GGE and the probable reasons behind them.

· August 30, 2024
article
China’s Views on AI Safety Are Changing—Quickly

Beijing’s AI safety concerns are higher on the priority list, but they remain tied up in geopolitical competition and technological advancement.

· August 27, 2024
paper
The Future of International Scientific Assessments of AI’s Risks

Managing the risks of artificial intelligence will require international coordination among many actors with different interests, values, and perceptions.

  • Hadrien Pouget
  • Claire Dennis
  • Jon Bateman
  • Robert Trager
  • Renan Araujo
  • Haydn Belfield
  • Belinda Cleeland
  • Malou Estier
  • Gideon Futerman
  • Oliver Guest
  • Carlos Ignacio Gutierrez
  • Vishnu Kannan
  • Casey Mahoney
  • Matthijs Maas
  • Charles Martinet
  • Jakob Mökander
  • Kwan Yee Ng
  • Seán Ó hÉigeartaigh
  • Aidan Peppin
  • Konrad Seifert
  • Scott Singer
  • Maxime Stauffer
  • Caleb Withers
  • Marta Ziosi
· August 27, 2024