The Carnegie Endowment for International Peace has been engaged with questions surrounding the global governance of artificial intelligence (AI) and how to handle liability for AI systems. Current legal approaches offer no easy answers to the tricky questions we must grapple with: What sort of regulations would protect public interests from harm without overly stifling innovation and scaled distribution? What frameworks would the European Union, India, Japan, and others, including China, plausibly accept?
Innovative approaches are needed. To highlight and encourage novel thinking in this area, Carnegie sponsored an award for legal scholarship on AI and liability and has now announced the winners. The top selection was awarded to Yotam Kaplan and Ayelet Gordon-Tapiero for their paper, “Generative AI Training as Unjust Enrichment.”
In the winning paper, Kaplan and Gordon-Tapiero offer a novel strategy for resolving an important debate that has divided scholars and policymakers since generative AI systems became widespread: should the producers of the content used to train the models underlying these systems be entitled to compensation?
In a forthcoming publication with Carnegie, Kaplan and Gordon-Tapiero will expand on how unjust enrichment as the basis for resolving AI copyright issues could inform a broader global liability regime governing AI.
Submissions for the award were accepted from across the globe and addressed a wide range of conceptual frameworks and types of AI technology. The expert judging panel selected one winning paper alongside five finalists for the innovative perspectives they add to the discourse on AI and liability.
To read more about Kaplan and Gordon-Tapiero’s paper and the other five finalists, click here.
The expert judging panel, chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar, reflects Carnegie’s commitment to prioritizing solutions to the complex challenges this transformational technology brings and to highlighting pioneering work from the next generation of leaders in the legal field.
Carnegie awarded $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We accepted submissions that had been published, or were candidates for publication, in a law review or law journal. Submissions were accepted until August 1, 2024, and were reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar.
The Carnegie Endowment for International Peace is seeking to highlight bold ideas on the issue of large language models and legal liability, and to recognize rising legal scholars in this quickly emerging field.
Though the world anxiously debates whether and how AI should be regulated, history suggests that important aspects of such a multifaceted technology will not be regulated in the United States or international fora for many more years. In this time, significant products and services will be sold, fortunes will be made, and harm will be done.
Voluntary norms, ethical codes, and other gestures will have relatively little effect. Unless and until there is liability—either in the legal sense or in the economic sense of penalties or lost revenue—not much will change.
But what sort of liability regime would protect public interests from harm without overly stifling innovation and scaled distribution? What sort of regime would the European Union, India, Japan, and others, including China, plausibly accept? What economic incentives or penalties could be incorporated into a liability regime to ensure meaningful accountability for harms imposed by large tech companies?
We seek innovative yet grounded approaches to this multifaceted issue and welcome papers offering diverse perspectives, including but certainly not limited to the questions above. We are particularly eager to recognize work that not only contributes to ongoing legal and policy conversations but also has enduring relevance.
Submission Deadline: August 1, 2024
Open to all legal scholars, practitioners, and researchers passionate about shaping the discourse on AI and liability. All entries must have been submitted to or published by a law review or law journal before September 2024. Unpublished drafts are acceptable, provided they are complete pieces rather than works in progress. All submissions must be written in English. Carnegie employees and affiliates are ineligible for consideration. All applicants must be eligible to receive payments from a US institution.
The judging panel was chaired by former California Supreme Court Justice and current Carnegie President Tino Cuéllar and included a slate of distinguished experts in tort law, technology law, and the broader artificial intelligence ecosystem. Winners were selected by October 1, 2024.
Any inquiries about the contest may be submitted via our online form.