Technology and International Affairs
Award for Scholarship on AI and Liability

Carnegie is awarding $20,000 for innovative legal scholarship on the issue of large language models and legal liability. We are accepting submissions that have been published, or are candidates to be published, in a law review or law journal. Submissions will be accepted until August 1, 2024, and will be reviewed by an expert panel chaired by Carnegie President Mariano-Florentino (Tino) Cuéllar. The top selection will receive $10,000, and two runners-up will each receive $5,000. All three winners may have the opportunity to publish with Carnegie a shorter article highlighting some of their ideas.

About the Competition

The Carnegie Endowment for International Peace is seeking to highlight bold ideas on the issue of large language models and legal liability, and to recognize rising legal scholars in this rapidly emerging field.

Though the world anxiously debates whether and how AI should be regulated, history suggests that important aspects of such a multifaceted technology will not be regulated in the United States or in international fora for many more years. In that time, significant products and services will be sold, fortunes will be made, and harm will be done.

Voluntary norms, ethical codes, and other gestures will accomplish relatively little. Unless and until there is liability—either in the legal sense or in the economic sense of penalties or lost revenue—not much will change.

But what sort of liability regime would protect public interests from harm without overly stifling innovation and scaled distribution? What sort of regime would the European Union, India, Japan, and others, including China, plausibly accept? What economic incentives or penalties could be incorporated into a liability regime to ensure meaningful accountability for harms imposed by large tech companies? 

We seek innovative yet grounded approaches to this multifaceted issue and welcome papers offering diverse perspectives, including but certainly not limited to the questions above. We are particularly eager to recognize work that not only contributes to ongoing legal and policy conversations but also possesses enduring relevance.

Submission Deadline: August 1, 2024 

Eligibility and Submission

The competition is open to all legal scholars, practitioners, and researchers passionate about shaping the discourse on AI and liability. All entries must have been submitted to or published by a law review or law journal before September 2024. Unpublished drafts are acceptable, provided they are finished pieces rather than works in progress. All submissions must be written in English. Carnegie employees and affiliates are ineligible for consideration. All applicants must be eligible to receive payment from a US institution.

Please submit all entries using our online submission form.

Selection

Three papers will be recognized. The author of the first-place selection will receive $10,000, and two runners-up will each receive $5,000. Winning authors may additionally have the opportunity to write a shorter piece to be published with Carnegie and have their ideas disseminated to a wider audience of stakeholders. 

The judging panel will be chaired by former California Supreme Court Justice and current Carnegie President Tino Cuéllar and will include a slate of distinguished experts in tort law, tech law, and the broader artificial intelligence ecosystem. Winners will be selected by October 1. 

Any inquiries about the contest may also be submitted via our online form.
