Research
Defense Against the AI Dark Arts: Threat Assessment and Coalition Defense
The United States must now work intensively with allies to secure a democratic advantage in frontier AI
Published by the Hoover Institution on December 4, 2024
More work from Carnegie
- Collection: Artificial Intelligence
As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or detract from) international cyber stability, optimize (or bias) cloud-based services, or guide the targeting of biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges posed by AI. By confronting both the short-term (2-5 years) and medium-term (5-10 years) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.
- Article: War and Law in a Digital World
As technological innovation continues to change the realities of war, the lack of agreement about how pre-digital rules apply to the digitalized battlespace risks turning legal arguments into an extension of conflicts.
- Aurel Sari
- Article: Digital Technology, Strategic Adaptation, and the Outcomes of Twenty-First Century Armed Conflict
If digital technology is truly transforming the nature of armed conflict, why aren’t these advances leading to decisive victories?
- Nate Allen
- Research: The California Report on Frontier AI Policy
The innovations emerging at the frontier of artificial intelligence are poised to create historic opportunities for humanity but also raise complex policy challenges. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI while addressing substantial risks that could have far-reaching consequences for the state and beyond.
- Rishi Bommasani,
- Scott Singer,
- Ruth Appel,
- Sarah Cen,
- A. Feder Cooper,
- Elena Cryst,
- Lindsey Gailmard,
- Ian Klaus,
- Meredith Lee,
- Inioluwa Raji,
- Anka Reuel,
- Drew Spence,
- Alexander Wan,
- Angelina Wang,
- Daniel Zhang,
- Daniel Ho,
- Percy Liang,
- Dawn Song,
- Joseph Gonzalez,
- Jonathan Zittrain,
- Jennifer Chayes,
- Mariano-Florentino (Tino) Cuéllar,
- Li Fei-Fei
- The Joint California Policy Working Group on AI Frontier Models
- Paper: How Some of China's Top AI Thinkers Built Their Own AI Safety Institute
The emergence of the China AI Safety and Development Association (CnAISDA) is a pivotal moment for China’s frontier AI governance. How it navigates substantial domestic challenges and growing geopolitical tensions will shape conversations on frontier AI risks in China and abroad.
- Scott Singer,
- Karson Elmgren,
- Oliver Guest