Rapid advances in artificial intelligence (AI) present extraordinary opportunities and risks. The opportunities span almost every societal domain, from improving the accuracy and speed of medical diagnoses to reducing the energy consumption of data centers. But the risks are equally widespread and significant. AI accidents, AI-enabled synthetic media (e.g., deepfakes), mass unemployment, and algorithmic bias are just a few examples of how AI, if hastily developed or deployed, could undermine long-standing social, economic, and political institutions.
Consider AI safety: the collection of research and regulatory efforts seeking to ensure that AI systems reliably perform as intended. Cooperation among technical experts, civil society, and governments is essential to creating the technical standards and governance mechanisms needed to reduce the risk of AI accidents, particularly as AI is increasingly integrated into critical military and energy systems. The Partnership on AI convenes a powerful community of technical experts and civil society organizations working on AI safety; Carnegie brings experience navigating the intergovernmental landscape.
Another arena where multi-stakeholder partnerships are essential is synthetic media: images, audio, or video created with AI. Synthetic media has several beneficial applications. For instance, it can recreate the voices of people with ALS who have lost the ability to speak. But synthetic media can also cause harm, for example if a seemingly realistic but false depiction of a political leader doing something he or she never did incites civil unrest. Perhaps more worrisome, a proliferation of synthetic media could deepen public skepticism of authentic media, leading to what some have called a 'post-truth' society. Maintaining the public's trust in authentic media against AI-enabled forgeries requires cooperation among journalists, civil society organizations, and social media platforms, cooperation the Partnership on AI is forging.
Across these and other areas, we look forward to working with the Partnership’s diverse community of experts to help build the policy infrastructure required for AI’s safe and beneficial use.