Program
Technology and International Affairs
Artificial Intelligence

As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or undermine) international cyber stability, optimize (or bias) cloud-based services, and guide biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges posed by AI. By confronting both the short-term (2-5 years) and medium-term (5-10 years) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.

Award

Award for Scholarship on AI and Liability

Carnegie is awarding $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We are accepting submissions that have been published, or are candidates to be published, in a peer-reviewed law journal. Submissions will be accepted until August 1, 2024, and will be reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar.

paper
A Sketch of Potential Tripwire Capabilities for AI

Six possible future AI capabilities that could warrant advance preparation and pre-commitments in order to avoid catastrophic risks.

· December 10, 2024
research
Defense Against the AI Dark Arts: Threat Assessment and Coalition Defense

The United States must now start working very hard with allies to secure democratic advantage in the domain of frontier AI.

· December 4, 2024
Hoover Institution
research
Shaping AI’s Impact on Billions of Lives

The AI community is at risk of becoming polarized between a laissez-faire attitude toward AI development and calls for government overregulation.

· December 3, 2024
Arxiv
paper
India’s Advance on AI Regulation

Does India need new AI regulations? What should they look like? Who is driving this debate in India, and what are their views? This paper provides a comprehensive analysis of AI regulation in India by examining perspectives across government, industry, and civil society stakeholders. It evaluates the current regulatory state and proposes a policy roadmap forward.

· November 21, 2024
commentary
Carnegie Award for Scholarship on AI and Liability: Winner and Finalists

Together, these papers underscore the importance of legal liability to the governance of this globally transformative technology.

· November 13, 2024
research
A Tech Policy Planning Guide for India—Beyond the First 100 Days

This compendium provides an independent look at how to get the most out of India's current technology ecosystem, and at the measures that may need to be adopted or reconsidered to build an enduring framework for policy change in select areas under the current administration.

In The Media
in the media
How the Global Community Can Come Together to Understand AI’s Risks

Meeting the moment with AI will require a multifaceted approach with stakeholders from many sectors and disciplines. They will need to build a clear vision for larger and slower-moving global policy-centric projects to work alongside more focused and agile scientific efforts that meet pressing needs.

· November 5, 2024
OECD
in the media
The Emerging Age of AI Diplomacy

To compete with China, the United States must walk a tightrope in the Gulf.

· October 28, 2024
Foreign Affairs
in the media
The Science of AI Is Too Important to Be Left to the Scientists

The lack of scientific consensus on AI's risks and benefits has become a major stumbling block for regulation, not just at the state and national levels but internationally as well.

· October 21, 2024
Foreign Policy
commentary
How the UK Should Engage China at AI’s Frontier

During his visit, Foreign Secretary Lammy should engage Beijing on one of the few windows for UK-China cooperation: AI safety.

· October 18, 2024
in the media
On Controlling AI Agents

A discussion about what distinguishes AI agents from current generative AI tools, sources of concern, and potential ways of realizing control.

· October 17, 2024
The Lawfare Podcast
commentary
A Heated California Debate Offers Lessons for AI Safety Governance

The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.

· October 8, 2024