

Carnegie Award for Scholarship on AI and Liability: Winner and Finalists

Together, these papers underscore the importance of legal liability to the governance of this globally transformative technology.

by Jonathan Cedarbaum and Thom Crockett
Published on November 13, 2024

Around the world, governments, international organizations, industry associations, and civil society groups of many kinds are engaged in debates over how best to govern artificial intelligence (AI) systems, both to promote beneficial uses of this transformational technology and to reduce the array of substantial risks it presents. Many governance strategies are likely to play a part. Some of the most important are systems of legal liability—sets of legal rules that establish economic costs for parties that engage in conduct that societies seek to discourage. To recognize and encourage innovative thinking about this pressing and complex issue, the Carnegie Endowment for International Peace recently launched an award for scholarship on AI and liability.

Carnegie accepted submissions that had been published, or were candidates for publication, in a law journal and received more than two dozen papers. The entries came from many continents and addressed a wide range of conceptual frameworks and types of AI technology. The expert judging panel was chaired by former California Supreme Court Justice and current Carnegie President Mariano-Florentino (Tino) Cuéllar and included Anu Bradford, Henry L. Moses Distinguished Professor of Law and International Organization at Columbia Law School; Jonathan Cedarbaum, professor of practice for National Security, Cybersecurity, and Foreign Relations Law at George Washington University Law School; Nora Freeman Engstrom, Ernest W. McFarland Professor of Law at Stanford Law School; Gillian Hadfield, Professor of Government and Policy and Research Professor of Computer Science at Johns Hopkins University; and Zia Khan, chief innovation officer at the Rockefeller Foundation. The panel selected one winning paper and five finalists for their innovative scholarship and enduring relevance. Each paper brings a distinctive perspective to the discourse on AI and liability, and as a group they underscore the importance of legal liability to the governance of this globally transformative technology.

The winning paper, “Generative AI Training as Unjust Enrichment” by Yotam Kaplan and Ayelet Gordon-Tapiero, offers a novel strategy for resolving an important debate that has divided scholars and policymakers since generative AI systems exploded into widespread use, and that is now being contested in the courts as well: Should producers of the content used to build the models upon which these systems rely be entitled to compensation? The answer, both sides have assumed, should come from copyright law. But copyright doctrine appears to yield only stark yes-or-no answers. Thus, the doctrinal debate has been accompanied by a policy debate: Would recognition of copyright liability severely handicap the development of AI models in the United States and other countries with strong legal protections for producers of expressive content, giving China and other countries lacking such protections a substantial leg up in the competition to develop AI systems? Or would failure to recognize such liability effectively authorize massive theft of intellectual property by what may prove to be the wealthiest and most powerful technology companies in our societies?

Kaplan and Gordon-Tapiero’s article offers an ingenious legal and policy middle way out of these dilemmas, “a middle ground solution between the two extreme responses offered by copyright law.” Unjust enrichment law, they contend, “can provide better-tailored remedies that establish a layer of protection for human creators without paralyzing the market for generative AI.” Because unjust enrichment law relies on liability rules rather than property rules to protect human creators, it can provide compensation to creators, but at a level that should not block the training and development of generative AI systems. Unjust enrichment, Kaplan and Gordon-Tapiero contend, is not only more flexible than copyright law but also more likely to be applied evenhandedly across borders. An important contribution on its own terms, Kaplan and Gordon-Tapiero’s paper offers a model of how those confronting legal problems raised by AI will need to break out of accepted categories and look for alternative, more flexible approaches that can be used in multiple legal systems.

The winning paper was not the only one among the finalists to offer an innovative contribution to the ongoing debate over the legal treatment of generative AI. In “AI Outputs Are Not Protected Speech,” Peter N. Salib counters what he claims is the “emerging scholarly consensus” that the First Amendment will limit the regulation of generative AI because its outputs are forms of speech entitled to constitutional protection. Not so, argues Salib. In brief, that is because “when a generative AI system outputs some text, image, or sound, no one speaks. Or at least no one with First Amendment rights does.” Unlike other kinds of software, he contends, generative AI systems “are not intended to convey any particular message.” On the contrary, “they are designed to be able to ‘say’ essentially anything, producing innumerable ideas and opinions that neither their creators nor their users have conceived or endorsed.” Even if AI outputs are not protected speech, however, they are not entirely bereft of First Amendment protection in Salib’s view. His paper goes on to contend that both First Amendment doctrine and computer science suggest that these outputs should be entitled to the lesser protection afforded to some speech-facilitating activities and tools.

These two papers, like many addressing legal liability problems raised by AI, focus on generative AI. But one of the finalists makes a compelling case that AI agents—or “autonomous systems that can plan and execute complex tasks with only limited human involvement”—constitute a more transformative form of machine learning technology than those that generate synthetic content. Noam Kolt’s “Governing AI Agents” makes three key contributions. First, Kolt “uses agency law and theory to identify and characterize problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty.” Next, he identifies ways in which conventional solutions to agency problems—such as incentive design, monitoring, and enforcement—may not carry over to AI agents because of the opacity of their decisionmaking and the extraordinary speed and scale at which they operate. Finally, he considers some of the implications of agency law and theory for designing and regulating AI agents, arguing that “new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.”

Another of the finalists addresses one particularly important kind of AI agent, “the autonomous AI physician”—that is, systems capable of making autonomous decisions about diagnoses and prognoses. These systems are already far advanced and rapidly improving, so Mindy Nunez Duffourc’s paper appears at an auspicious moment. As Duffourc explains, autonomous medical providers fall into a “tort-law gap . . . because neither human-centric nor product-centric causes of action provide a mechanism for recovery.” After demonstrating the limitations of these approaches, Duffourc sets out a novel approach “focusing on control of the AI’s injury-causing output to assign liability to creators, organizations, individual providers, and the Autonomous AI Physician with limited legal personhood.” This approach, she argues, “balance[s] the benefits and risks of technological innovation in healthcare and advances tort law’s compensation and deterrence goals.” And while she builds her conceptual framework with particular reference to AI medical agents, her insights should have applications to the regulation of AI agents in many other fields as well.

Many of the entries focused on what might be called everyday AI harms—harms comparable to many others already addressed by the law in other contexts. But, as many technologists and policymakers have warned, advanced machine learning systems raise the prospect of genuinely catastrophic harms because of their increasing ability to mirror ever-more-complex forms of human thinking combined with their expanded levels of autonomy. In “Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence,” Gabriel Weil explains how tort law needs to be adapted to contend with these extreme forms of harm. “The current U.S. tort liability system is not set up to handle the catastrophic risk posed by AI,” Weil contends, because these extraordinary harms are not practically compensable. To deal with them, Weil proposes a new form of punitive damages “designed to pull forward this expected liability into cases of practically compensable harm,” thereby “offering sufficient incentives for precaution.” Weil also assesses the contributions a range of other tort doctrines can make, including “recognizing the training and deployment of advanced AI systems as an abnormally dangerous activity subject to strict liability, adopting a capacious conception of foreseeability for the purposes of evaluating proximate cause in cases of AI harm, and modifying the scope of legally compensable damages in cases involving loss of human life.”

As reflected in the large number of excellent submissions, proposals for regulating different types of AI systems have proliferated, drawing on a wide array of legal and policy frameworks. But one important angle that has received relatively little attention is the subject of the fifth finalist, Tejas N. Narechania and Ganesh Sitaraman’s “An Antimonopoly Approach to Governing Artificial Intelligence.” Narechania and Sitaraman make the case that “AI’s industrial organization, which is rooted in AI’s technological structure, suffers from market concentration at and across many layers.” These oligopolistic features, they contend, “have undesirable economic, national security, social, and political consequences.” They then show how many antimonopoly tools can be marshaled to combat these ills, not just to improve competition in markets for AI but also to reduce a number of the harms of AI systems that a variety of proposals for ex ante regulation have sought to address.

As machine learning technologies are applied in ever more varied domains, legal and policy analysis will need to keep pace with that rapid pattern of innovation. Carnegie hopes this competition and these outstanding contributions will help spur researchers and policymakers to consider not only their particular analyses but also broader questions those analyses raise, including: What practical situations are emerging that could benefit from the ideas in these papers and guide their refinement? Who beyond the legal community should be learning about this work? What governance systems other than legal liability are ripe for more innovative scholarship? Carnegie will continue to contribute to debates over AI governance from these and other angles.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.