Program
Technology and International Affairs
Artificial Intelligence

As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or detract from) international cyber stability, optimize (or bias) cloud-based services, or guide the targeting of biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges posed by AI. By confronting both the short-term (2–5 years) and medium-term (5–10 years) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.

Award

Award for Scholarship on AI and Liability

Carnegie is awarding $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We are accepting submissions that have been published, or are candidates to be published, in a peer-reviewed law journal. Submissions will be accepted until August 1, 2024, and will be reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar.

In the Media
Chinese Whispers: Will AI Be the Next Arms Race?

China has a rich landscape of homegrown AI products, where progress is being led by tech giants like search engine Baidu and TikTok’s owner, ByteDance. So already there is a bifurcation in the AI worlds of China and the West.

· September 30, 2024
Best of the Spectator
article
Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective

This article explores the global debate on lethal autonomous weapons systems (LAWS), highlighting the convergences, complexities, and differences within and beyond the UN Group of Governmental Experts (GGE) on LAWS. It further examines India’s key positions at the GGE and the probable reasons behind them.

· August 30, 2024
paper
The Future of International Scientific Assessments of AI’s Risks

Managing the risks of artificial intelligence will require international coordination among many actors with different interests, values, and perceptions.

  • Hadrien Pouget
  • Claire Dennis
  • Jon Bateman
  • Robert Trager
  • Renan Araujo
  • Haydn Belfield
  • Belinda Cleeland
  • Malou Estier
  • Gideon Futerman
  • Oliver Guest
  • Carlos Ignacio Gutierrez
  • Vishnu Kannan
  • Casey Mahoney
  • Matthijs Maas
  • Charles Martinet
  • Jakob Mökander
  • Kwan Yee Ng
  • Seán Ó hÉigeartaigh
  • Aidan Peppin
  • Konrad Seifert
  • Scott Singer
  • Maxime Stauffer
  • Caleb Withers
  • Marta Ziosi
· August 27, 2024
article
China’s Views on AI Safety Are Changing—Quickly

Beijing’s AI safety concerns are higher on the priority list, but they remain tied up in geopolitical competition and technological advancement.

· August 27, 2024
commentary
The Risk of Bringing AI Discussions Into High-Level Nuclear Dialogues

Overly generalized discussions on the emerging technology may be unproductive or even undermine consensus to reduce nuclear risks at a time when such consensus is desperately needed.

  • Lindsay Rand
· August 19, 2024
video
Is AI the Future of National Security?

Through a simulation of a Chinese blockade of Taiwan, Carnegie scholars examine AI’s potential impact on national security crises. How would AI affect the speed, perceptions, and groupthink of bureaucratic decisionmakers? Learn more in Christopher S. Chivvis and Jennifer Kavanagh’s full article.

· July 25, 2024
commentary
France’s AI Summit Is a Chance to Reshape Global Narratives on AI

But Paris first must hone its alternative vision.

· July 24, 2024
paper
Beyond Open vs. Closed: Emerging Consensus and Key Questions for Foundation AI Model Governance

Ideological conflict between “pro-open” and “anti-open” camps is receding. Carnegie gathered leading experts from a wide range of perspectives to identify common ground and help reset AI governance debates.

· July 23, 2024
article
Governing Military AI Amid a Geopolitical Minefield

The lack of an international governance framework for military AI poses risks to global security. The EU should spearhead an inclusive initiative to set global standards and ensure the responsible use of AI in warfare.

· July 17, 2024
article
How AI Might Affect Decisionmaking in a National Security Crisis

In a time-sensitive U.S. national crisis, AI would impact the speed, perception, and groupthink of bureaucratic decisionmakers.

event
AI Governance for the Global Majority: Understanding Opportunities and Challenges
May 9, 2024

Carnegie’s AI in the Global Majority project brings together scholars, practitioners, and entrepreneurs to elucidate gaps and opportunities in the current global AI governance narrative through a series of publications. Join project authors for a virtual discussion moderated by Carnegie scholars.

  • Aubra Anthony
  • Elina Noor
  • Jake Okechukwu Effoduh
  • Rachel Gong
  • Jun-E Tan
  • Ranjit Singh
  • Chijioke Okorie
  • Vukosi Marivate
  • Carolina Botero
article
Latin American AI Strategies Can Tackle Copyright as a Legal Risk for Researchers

Amid the rapid adoption of AI, there is a complicated balance between IP rights, especially copyright regimes, and international human rights standards. Latin American countries can update policy environments to strengthen support for the right to research with AI.

  • Carolina Botero
· April 30, 2024