A Sketch of Potential Tripwire Capabilities for AI
Six possible future AI capabilities that may warrant advance preparation and pre-commitments to avoid catastrophic risks.
December 10, 2024
As artificial intelligence (AI) changes how people around the world live and work, new frontiers for international collaboration, competition, and conflict are opening. AI can, for example, improve (or detract from) international cyber stability, optimize (or bias) cloud-based services, and guide the targeting of biotechnology toward great discoveries (or terrible abuses). Carnegie partners with governments, industry, academia, and civil society to anticipate and mitigate the international security challenges posed by AI. By confronting both the short-term (2-5 year) and medium-term (5-10 year) challenges, we hope to mitigate the most urgent risks of AI while laying the groundwork for addressing its slower and subtler effects.