Technology and International Affairs
Award for Scholarship on AI and Liability

The Carnegie Endowment for International Peace has been engaged with questions surrounding the global governance of artificial intelligence (AI) and how to handle liability for AI systems. Current legal approaches offer no easy answers to the tricky questions we must grapple with: What sort of regulations would protect public interests from harm without overly stifling innovation and scaled distribution? What frameworks would the European Union, India, Japan, and others, including China, plausibly accept?  

Innovative approaches are needed. To highlight and encourage novel thinking in this area, Carnegie sponsored an award for legal scholarship on AI and liability and has now announced the winners. The top selection was awarded to Yotam Kaplan and Ayelet Gordon-Tapiero for their paper, “Generative AI Training as Unjust Enrichment.”  

In the winning paper, Kaplan and Gordon-Tapiero offer a novel strategy for resolving an important debate that has divided scholars and policymakers since generative AI systems became widespread: should producers of the content used to build the models upon which these systems rely be entitled to compensation? 

In a forthcoming publication with Carnegie, Kaplan and Gordon-Tapiero will expand on how unjust enrichment as the basis for resolving AI copyright issues could inform a broader global liability regime governing AI. 

Submissions for the award were accepted from across the globe and addressed a wide range of conceptual frameworks and types of AI technology. The expert judging panel selected one winning paper alongside five finalists for the innovative perspectives they add to the discourse on AI and liability.  

The winners and finalists were: 

  • Yotam Kaplan (Hebrew University) and Ayelet Gordon-Tapiero (Hebrew University) (winners) 
  • Peter N. Salib (University of Houston Law Center) (finalist) 
  • Noam Kolt (University of Toronto) (finalist) 
  • Mindy Nunez Duffourc (Maastricht University) (finalist) 
  • Gabriel Weil (Touro University Law Center) (finalist) 
  • Tejas N. Narechania (UC Berkeley School of Law) and Ganesh Sitaraman (Vanderbilt Law School) (finalists) 

The expert judging panel, chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar, reflects Carnegie’s commitment to prioritizing solutions to the complex challenges this transformational technology brings and to highlighting pioneering work from the next generation of leaders in the legal field.  

The panel of judges also included: 

  • Anu Bradford, Henry L. Moses Distinguished Professor of Law and International Organization at Columbia Law School 
  • Jonathan Cedarbaum, Professor of Practice for National Security, Cybersecurity, and Foreign Relations Law at George Washington University Law School 
  • Nora Freeman Engstrom, Ernest W. McFarland Professor of Law at Stanford Law School 
  • Gillian Hadfield, Professor of Government and Policy and Research Professor of Computer Science at Johns Hopkins University 
  • Zia Khan, Chief Innovation Officer at the Rockefeller Foundation 

About the Award

Carnegie awarded $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We accepted submissions that had been published, or were candidates for publication, in a law review or law journal. Submissions were accepted until August 1, 2024, and were reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar.

About the Competition

The Carnegie Endowment for International Peace is seeking to highlight bold ideas on the issue of large language models and legal liability and to recognize rising legal scholars in this rapidly emerging field.  

Though the world anxiously debates whether and how AI should be regulated, history suggests that important aspects of such a multifaceted technology will not be regulated in the United States or in international fora for many years to come. In the meantime, significant products and services will be sold, fortunes will be made, and harm will be done.  

Voluntary norms, ethical codes, and other gestures will accomplish relatively little. Unless and until there is liability—either in the legal sense or in the economic sense of penalties or lost revenue—not much will change. 

But what sort of liability regime would protect public interests from harm without overly stifling innovation and scaled distribution? What sort of regime would the European Union, India, Japan, and others, including China, plausibly accept? What economic incentives or penalties could be incorporated into a liability regime to ensure meaningful accountability for harms imposed by large tech companies? 

We seek innovative yet grounded approaches to this multifaceted issue and welcome papers offering diverse perspectives, including but certainly not limited to the questions above. We are particularly eager to recognize work that not only contributes to ongoing legal and policy conversations but also possesses enduring relevance.

Submission Deadline: August 1, 2024 

Eligibility and Submission

Open to all legal scholars, practitioners, and researchers passionate about shaping the discourse on AI and liability. All entries must have been submitted to or published by a law review or law journal before September 2024. Unpublished drafts are acceptable, provided they are finished pieces rather than works in progress. All submissions must be written in English. Carnegie employees and affiliates are ineligible for consideration. All applicants must be eligible to receive payments from a US institution. 

Selection

The judging panel was chaired by former California Supreme Court Justice and current Carnegie President Tino Cuéllar and included a slate of distinguished experts in tort law, tech law, and the broader artificial intelligence ecosystem. Winners were selected by October 1, 2024. 

Any inquiries about the contest may be submitted via our online form.
