
All Eyes on Sacramento: SB 1047 and the AI Safety Debate

The bill has galvanized a discussion about innovation, safety, and the appropriate role of government—particularly at the subnational level—in AI regulation.

Published on September 11, 2024

Over the past few years, policymakers around the world have been racing to understand and respond to the remarkable progress of artificial intelligence, and particularly the advent of sophisticated generative models with rapidly expanding capabilities. Significant attention has focused on national and international initiatives—including safety institutes, multilateral summits, and flagship policy measures such as U.S. President Joe Biden’s AI executive order and the European Union’s AI Act.

In parallel, subnational policy has emerged as a dynamic and consequential force in its own right. In the United States, states in particular have ramped up their work on tech policy over the past decade, flexing their broad regulatory powers and, in some areas, filling a perceived vacuum left by congressional inaction. For example, since 2018, more than a dozen states have passed comprehensive privacy laws, many of which impose heightened consent and governance obligations on the use of automated decisionmaking systems. States have also acted to limit misleading or harmful outputs of AI systems, including through new requirements in Utah that generative AI systems disclose to consumers that they are interacting with AI, and legislation in Colorado requiring developers and deployers of high-risk AI systems to prevent algorithmic discrimination.

As wide-ranging as state efforts have been, none have generated as much discussion and controversy as SB 1047—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—recently passed by both houses of the California legislature. With prominent supporters and detractors across industry, government, and civil society, SB 1047 differs from other state efforts to curb problematic effects of AI outputs, such as misinformation, abuse, or discrimination. Instead, the California bill looks principally to the development process, requiring pre-training and pre-deployment measures to mitigate “critical harms” such as chemical, biological, radiological, or nuclear (CBRN) proliferation; mass casualty events; or cyberattacks on critical infrastructure. In the process, it has galvanized a pitched debate weighing innovation, safety, and the appropriate role of government—particularly at the subnational level—in AI regulation.

Introduced by Senator Scott Wiener, whose San Francisco district sits at the epicenter of the AI revolution, SB 1047 regulates the largest, most advanced frontier models and their derivatives: those trained at costs exceeding $100 million (or $10 million for fine-tuning) and meeting computing power thresholds adapted from the federal executive order. It would require developers of these models to:

  • implement reasonable protections to prevent unauthorized access, misuse, or “unsafe post-training modifications”
  • exercise “reasonable care” to avoid the risk of causing or enabling a critical harm and implement a written safety and security protocol that is published online in redacted form, shared with the attorney general, and reviewed annually
  • build and retain the ability, in the face of a critical harm, to implement a full shutdown of models controlled by the developer
  • assess, before using or releasing a covered model, whether the model could cause or enable a critical harm, implement appropriate safeguards, and take reasonable steps to ensure the attributability of the actions taken by covered models and their derivatives, including critical harms they cause
  • retain a third-party auditor to assess the developer’s compliance annually and submit a statement of compliance to the attorney general

The bill includes additional provisions, including a notification requirement for safety incidents, “know your customer” requirements for computing cluster operators whose resources may be used to train a covered model, whistleblower protections for developers’ employees and contractors, and measures to promote the development of a public computing cluster to foster public-oriented research and improve access to computing resources.

While earlier drafts of SB 1047 would have established a new division within California’s Government Operations Agency to issue implementing guidance, receive developer certifications, and promote model oversight, recent amendments centralize oversight and enforcement authority in the attorney general, authorizing monetary damages, injunctive relief, and—in limited circumstances, including actual harm to a person or property—civil penalties. In enforcing the law, the attorney general must consider the quality of a developer’s safety and security protocol and risk mitigation efforts. In place of a new sub-agency, the amended bill establishes a nine-member board and assigns the Government Operations Agency several responsibilities, including updating the computing thresholds for covered models in light of technical advances, scientific understanding, and evolving standards.

Opposition has been vigorous, from many quarters. Developers who would be subject to the law, including Google, Meta, and OpenAI, and an array of tech industry associations have mobilized against it, predicting that by regulating the process of model development, rather than harms caused by use, SB 1047 would hamper innovation and undermine Californian and American competitiveness. OpenAI has further argued that the bill’s focus on national security risks, such as CBRN proliferation, renders it more appropriate for federal than state action.

Many critics have argued that SB 1047 would damage the ecosystem for open model development and use. While the debate over open- versus closed-model governance is neither simple nor binary, Wiener has expressed support for the democratizing and pro-innovation effects of open models. He argues that the bill incentivizes upstream risk mitigation without unduly burdening the open ecosystem and that it reflects concerns from the open source community, for example in its $10 million threshold for fine-tuning and in amendments limiting key requirements to models “controlled by the developer.”

Disagreement has come from civil society and the public sector as well. For example, Fei-Fei Li, codirector of the Human-Centered AI Institute at Stanford University, has argued that SB 1047 would “stifle innovation” and “shackle open-source development,” leaving downstream developers reluctant to build atop models that could be rendered inoperative under the bill’s “full shutdown” capability requirement. Li, who is also reportedly building an AI startup, has further warned that the shutdown requirement would disproportionately impair academic and public sector research at a time when both are needed in a field presently dominated by private development. Despite the bill’s support in California’s majority-Democratic legislature, SB 1047 has drawn criticism from prominent members of the state’s congressional delegation, including Speaker Emerita Nancy Pelosi and House Committee on Science, Space, and Technology Ranking Member Zoe Lofgren, who argues that the bill imposes “premature requirements based on [the] underdeveloped science” of AI safety.

A similarly multisector coalition of supporters has argued in SB 1047’s favor. AI startup Anthropic, which would be regulated under the bill, has embraced the legislation’s core premise that frontier models will imminently pose real risks of catastrophic harm requiring near-term precautionary regulation. After suggesting amendments, some of which were subsequently incorporated into the bill, Anthropic recently announced measured support for SB 1047, arguing that its “work with biodefense experts, cyber experts, and others shows a trend towards the potential for serious misuses in the coming years – perhaps in as little as 1-3 years,” and expressing doubt that catastrophic risks will be adequately addressed by the free market or Congress.

Prominent AI researchers such as Yoshua Bengio and Geoffrey Hinton, Turing Award winners for their work on deep learning, have disagreed with Li’s analysis. Bengio has praised SB 1047 as “a bare minimum for effective regulation,” arguing that the bill’s focus on safety and security protocols, rather than specific compliance measures, reflects a light touch without imposing rigid design decisions or testing protocols on developers. Bengio also contends that, for all their benefits, open models present genuine risks of irreversible harm that cannot be remedied by “a trivial box-checking exercise, like putting ‘illegal activity’ outside the terms of service.” 

Other observers have argued that the final bill does not go far enough, or that it misses the mark on other grounds. For example, some advocates have claimed that amendments adopted during the legislative process—such as a narrowing of pre-harm enforcement, removal of criminal penalties for perjury, and substitution of “reasonable care” in place of the original draft’s “reasonable assurance” standard—result in a watered-down compromise. Numerous critics have urged the legislature to focus on concrete rather than catastrophic risks. Lofgren, for instance, argues that SB 1047 wrongly “focus[es] on hypothetical rather than demonstrable risks,” neglecting harms such as “misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.” Wiener, by contrast, contends that while SB 1047 focuses on catastrophic harms, other legislative proposals have been introduced to tackle many of the challenges Lofgren names.

Despite the controversy, SB 1047 has passed both houses of the California legislature and now heads to the desk of Governor Gavin Newsom, who will decide whether to sign it into law. While the governor has so far declined to take a position, he will soon need to weigh the competing arguments, in particular the foundational premises on which the bill’s opponents and supporters part ways: first, whether frontier models may indeed pose catastrophic risks; second, if so, whether sufficient and timely preventive action from the federal government is likely; and third, whether a different approach, such as regulating misuse, presents a better alternative. In the backdrop hangs a fourth question, implicit in AI safety debates: whether society should or will, as a practical matter, opt instead to self-insure against catastrophic risk in pursuit of the manifold benefits AI seems to bring within reach.

Policymakers around the world are hard at work on these questions. Wiener, for his part, has said he would welcome strong federal legislation to preempt SB 1047. For the time being, California appears poised to offer its own preliminary response.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.