Late last month, California Governor Gavin Newsom signed a historic law focused on increasing transparency among the companies building the world’s most advanced AI models. The law, formally titled the Transparency in Frontier Artificial Intelligence Act and more commonly known as SB-53, fills a void Congress has left open.
Many AI policy bills have emerged at the state level, but only a few are designed to address potentially catastrophic risks from advanced AI systems. SB-53 is the first to make it into law. The law introduces protections for whistleblowers inside AI labs, mandatory reporting of certain safety incidents, and requirements that large developers publish so-called frontier AI frameworks to explain how they plan to mitigate catastrophic risks.
As the home of many of the world’s largest and most important AI companies, California has a unique role in AI policy. It is one of two jurisdictions with the greatest capacity to enact legally binding policies that affect frontier AI developers. The other, of course, is the U.S. federal government. But Washington, so far, has largely declined to act. That gives California huge influence over national, and even global, AI policy. SB-53 could provide a blueprint for other states and governments to follow—including, perhaps, a future Congress.1
The successful passage of SB-53 represents the culmination of a yearlong rethink of California’s frontier AI policy, following Newsom’s September 2024 veto of a previous AI regulation known as SB-1047. That bill would have imposed broader rules on AI development and was criticized for creating new liability for AI-induced harms, requiring that developers build in “shutdown” capabilities to their models, and imposing know-your-customer rules on cloud computing services. SB-53 takes a lighter touch, offering a more flexible framework with requirements that companies can fill in as they establish best practices.
Given that SB-53 is one of the first puzzle pieces in the U.S. frontier AI policy landscape, it is important to understand what the law does—and does not do—and how it might shape other emerging state, national, and global policy efforts.
Developer- and model-based thresholds: SB-53 is designed to cover the largest AI developers and the models most likely to pose catastrophic risks while avoiding regulatory burdens on smaller companies and less capable systems. The law defines a “large frontier developer” as a company with more than $500 million in gross revenues over the preceding calendar year. A “frontier model” is one trained using more than 10²⁶ FLOPs—a measure of total training compute that encompasses pretraining as well as subsequent fine-tuning and reinforcement learning. Most of the law’s requirements apply only when both thresholds are triggered, that is, only to the most compute-intensive models developed by the largest AI companies. Today, outside analysts estimate that only a few models exceed the compute threshold, including recent offerings from OpenAI and xAI, but other major labs will likely cross it soon.
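For readers who find the interaction of the two thresholds easier to see in code, here is a minimal illustrative sketch in Python. The function and variable names are our own shorthand, not language from the statute; only the two numeric cutoffs come from the law as described above.

```python
# Illustrative sketch only: how SB-53's two coverage thresholds combine.
# Names are hypothetical shorthand, not statutory terms.

FLOP_THRESHOLD = 10**26          # total training compute, incl. fine-tuning and RL
REVENUE_THRESHOLD = 500_000_000  # gross revenue in the preceding calendar year, USD

def is_frontier_model(training_flop: float) -> bool:
    """A model counts as a frontier model if training compute exceeds 10^26 FLOPs."""
    return training_flop > FLOP_THRESHOLD

def is_large_frontier_developer(annual_gross_revenue_usd: float) -> bool:
    """A developer counts as large if gross revenue exceeded $500 million last year."""
    return annual_gross_revenue_usd > REVENUE_THRESHOLD

def most_requirements_apply(training_flop: float, annual_gross_revenue_usd: float) -> bool:
    """Most of SB-53's obligations attach only when BOTH thresholds are met."""
    return is_frontier_model(training_flop) and is_large_frontier_developer(annual_gross_revenue_usd)

# Example: a 3e26-FLOP model from a developer with $2 billion in revenue is covered.
print(most_requirements_apply(3e26, 2_000_000_000))  # True
```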
Although compute thresholds attempt to capture the riskiest models, models that fall under the FLOPs threshold are not necessarily risk-free. For example, DeepSeek’s latest models would not be captured by this threshold, even though testing by the U.S. Center for AI Standards and Innovation indicates they are far more vulnerable to hijacking and jailbreaking than similar models from OpenAI and Anthropic. Furthermore, algorithmic breakthroughs and efficiency gains could mean that more capable—and riskier—models are trained with less than 10²⁶ FLOPs. Recognizing this, the law requires the California Department of Technology to publish an annual report, beginning in 2027, that will inform whether the thresholds for “frontier model” and “large frontier developer” should be updated.
Catastrophic risks: SB-53 focuses many of its provisions on the most serious potential harms from AI. It defines a “catastrophic risk” as the risk that the development, storage, or use of a frontier model will materially contribute to the death of, or serious injury to, more than fifty people, or to more than $1 billion in economic damages, through one of three channels: the AI-assisted creation or release of a chemical, biological, radiological, or nuclear weapon; a cyberattack or other serious crime carried out by an AI system without meaningful human oversight; or AI activity that evades the control of the model’s developer or user.
This focus on catastrophic risks comes amid growing evidence that both capabilities and risks have increased over the past year, especially around the AI-assisted creation of biological weapons.
Frontier AI frameworks: SB-53 requires major AI developers to publish safety frameworks that explain how the company will test its models for the risk of catastrophic harm, implement protections against those risks, respond to dangerous incidents, and secure its systems from unauthorized access. The law doesn’t specify what standards or methods a company must put in its framework, but once a company has laid them out, it is required to follow them. If a company alters its framework, it must publish the updated framework, along with a justification for the change, within thirty days. Companies must update their frameworks at least once a year.
The framework must also cover internal deployments, as SB-53 recognizes that risks can emerge from models used only by company employees rather than the general public. Before a company deploys a frontier model publicly or uses it extensively in-house, its framework must explain what risk assessments and mitigations are in place and why the company believes they are adequate. As part of the assessment of internal deployments, the law requires companies to address the risk that their models may circumvent internal oversight mechanisms—a provision that reflects growing concern about advanced AI systems evading the safety controls meant to govern them.
In addition to publishing its framework, a company must periodically send the California Office of Emergency Services (OES) an assessment of any catastrophic risks created by its internal use of its models, including the risk that a frontier model circumvents oversight mechanisms.
Deployment reporting: Beyond the overall framework, developers will have to publish information each time they release a new frontier model in California. How much they must report depends on whether they cross the threshold for large frontier developers. All developers will have to publish basic information about new models, including intended uses, languages, modalities, and terms of service. Large developers must also explain how they complied with their frontier frameworks, including the results of any catastrophic risk assessments and the role played by third-party evaluators in testing the model. (They are allowed to redact information to protect trade secrets, their own cybersecurity, public safety, or U.S. national security.)
Incident reporting: The law requires large frontier developers to report certain critical safety incidents to OES. There’s a high bar for what needs to be reported: only incidents causing death, bodily injury, or materialized catastrophic harm, plus deceptive model behavior that materially increases catastrophic risk. That’s a significantly higher standard than in earlier drafts of SB-53 and in New York’s proposed RAISE Act. The narrower scope lowers companies’ compliance burden, but less information sharing might also hamper governments’ ability to address future harms.
Developers have fifteen days to file a report, unless the incident poses imminent risk of death or serious bodily injury, in which case they have to notify the OES immediately.
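For concreteness, the sketch below models that timeline in Python. The function and parameter names are illustrative only and are not drawn from the statute; it simply encodes the rule as described above.

```python
# Illustrative sketch only: the incident-reporting timeline described above.
# Function and parameter names are hypothetical simplifications.
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_risk_of_death_or_serious_injury: bool) -> datetime:
    """Return when a critical safety incident must be reported to OES.

    Fifteen days from discovery in the ordinary case; if the incident poses an
    imminent risk of death or serious bodily injury, notification is due
    immediately (modeled here as the moment of discovery).
    """
    if imminent_risk_of_death_or_serious_injury:
        return discovered_at  # notify OES immediately
    return discovered_at + timedelta(days=15)

# Example: an incident discovered on January 3 with no imminent risk is due by January 18.
print(reporting_deadline(datetime(2026, 1, 3), imminent_risk_of_death_or_serious_injury=False))
```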
Federal deference: SB-53 builds in a novel mechanism for federal deference: if developers are already reporting incidents under federal laws, regulations, or guidance documents with similar standards, the OES can declare those standards equivalent to SB-53’s reporting requirements and allow companies to satisfy California’s rules by meeting the federal ones. This provision should allow California’s framework to avoid redundant compliance burdens if new federal requirements emerge.
Whistleblower protections: SB-53 strengthens whistleblower protections for specific “covered employees” responsible for assessing, managing, or addressing risk of critical safety incidents. The law requires large frontier developers to create anonymous whistleblowing channels. Covered employees are protected from retaliation for using those channels or for reporting information to the federal or state governments, if they had reasonable cause to believe that their employer’s activities posed a substantial danger to public health or safety resulting from a catastrophic risk.
No external auditing: An earlier version of the bill would have required third-party audits for some AI systems. This provision was removed before final passage.
Enforcement: SB-53’s enforcement centers on civil penalties and oversight by the California attorney general. Large frontier developers that fail to meet their reporting obligations, or that make false or misleading statements, can face fines of up to $1 million per violation if the attorney general brings a civil action.
Next Steps for Frontier AI Policy
The effects of SB-53’s passage will reverberate across the U.S. AI policy landscape, perhaps first at the state level. In New York, Governor Kathy Hochul will need to decide whether to sign the RAISE Act, a recently passed bill that now sits on her desk and would introduce a similar set of transparency-focused rules. In addition to signing or vetoing the bill, Hochul can negotiate with its sponsors, Assemblymember Alex Bores and Senator Andrew Gounardes, on final language. Sections of the AI industry have been lobbying hard against the bill. The passage of SB-53 may strengthen the hand of the bill’s supporters, as they can now argue that New York should at least match California’s standard.
A Republican-sponsored bill in Michigan, the Artificial Intelligence Safety and Security Transparency Act, would introduce similar transparency requirements along with further accountability measures such as third-party audits. The bill has been introduced in the state House and is currently in committee. And next year, a new suite of bills could take inspiration from SB-53: states often wait for others to set a precedent before replicating or adapting their bills.
At the federal level, the law’s deference mechanism offers a template for harmonizing state and federal oversight, should Congress move forward with its own transparency requirements or the executive branch create standards that California deems sufficient. Internationally, governments from Brussels to Beijing are watching California’s experiment closely. As the EU refines its AI Act implementation and China continues evolving its safety governance frameworks, SB-53 offers a blueprint for evidence-generating transparency measures that could shape the next few years of frontier AI governance.
Notes
1. The law states that it draws on the policy principles set out in the California Report on Frontier AI Policy, which was commissioned by Newsom following his veto of SB-1047. One of us (Scott) was a lead writer of that report. The views reflected in this piece are the authors’ own and do not reflect the views of the Joint California Policy Working Group on AI Frontier Models.