Shaping AI’s Impact on Billions of Lives
The AI community risks becoming polarized between a laissez-faire attitude toward AI development and calls for government overregulation.
by Mariano-Florentino (Tino) Cuéllar, Jeff Dean, Finale Doshi-Velez, John Hennessy, Andy Konwinski, Sanmi Koyejo, Pelonomi Moiloa, Emma Pierson, and David Patterson
Published on arXiv on December 3, 2024
More work from Carnegie
- Collection: Artificial Intelligence
As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or detract) from international cyber stability, optimize (or bias) cloud-based services, or guide the targeting of biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges from AI. By confronting both the short-term (2-5 years) and medium-term (5-10 years) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.
- Research: In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?
While many experts advocate for greater international cooperation on AI safety to address shared global risks, some view cooperation on AI with suspicion, arguing that it can pose unacceptable risks to national security. However, the extent to which cooperation on AI safety poses such risks, as well as provides benefits, depends on the specific area of cooperation.
- Ben Bucknall,
- Saad Siddiqui,
- Lara Thurnherr,
- Conor McGurk,
- Ben Harack,
- Anka Reuel,
- Patricia Paskov,
- Casey Mahoney,
- Sören Mindermann,
- Scott Singer,
- Vinay Hiremath,
- Charbel-Raphaël Segerie,
- Oscar Delaney,
- Alessandro Abate,
- Fazl Barez,
- Michael Cohen,
- Philip Torr,
- Ferenc Huszár,
- Anisoara Calinescu,
- Gabriel Davis Jones,
- Yoshua Bengio,
- Robert Trager
arXiv
- Research: Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities
Drawing on lessons from climate change, nuclear safety, and global health governance, this analysis examines whether and how applying the framework of a “public good” could help us better understand and address the challenges posed by advanced AI systems.
- Kayla Blomquist,
- Elisabeth Siegel,
- Ben Harack,
- Kwan Yee Ng,
- Tom David,
- Brian Tse,
- Charles Martinet,
- Matt Sheehan,
- Scott Singer,
- Imane Bello,
- Zakariyau Yusuf,
- Robert Trager,
- Fadi Salem,
- Seán Ó hÉigeartaigh,
- Jing Zhao,
- Kai Jia
Oxford Martin AI Governance Initiative, Concordia AI, and Carnegie Endowment for International Peace
- Paper: The Missing Pieces in India's AI Puzzle: Talent, Data, and R&D
This paper explores whether India will be able to compete and lead in AI, or will remain relegated to a minor role in this global competition.
- Article: DeepSeek and Other Chinese Firms Converge with Western Companies on AI Promises
The AI race is breaking open. An upcoming summit offers U.S. and Chinese companies an opportunity to agree on safety and security measures.