Alasdair Phillips-Robins, Scott Singer

Hochul speaking in Albany in January. (Photo by Alejandra Villa Loarca/Newsday RM via Getty Images)
The bills differ in minor but meaningful ways, yet their overwhelming convergence is key.
Amid a national debate over AI laws, New York has become the second state to impose new requirements on advanced AI models and the companies that develop them. Yet instead of adding to the patchwork of state laws, covering everything from algorithmic discrimination to child safety, that some fear may emerge in the coming months, the Responsible AI Safety and Education (RAISE) Act takes a different path: It harmonizes directly with existing rules.
RAISE, which originated with New York Assemblymember Alex Bores and State Senator Andrew Gounardes, adopts the “trust but verify” paradigm set out by SB-53, the California bill that became the first U.S. frontier AI law last September. Both bills establish a core set of transparency requirements: developers must publish frontier AI risk frameworks and report safety incidents to state officials.
Though the bills differ in minor but meaningful ways, their overwhelming convergence matters. The worst fears of those who argued that the federal government had to step in to preempt a coming flood of state AI laws have not materialized, at least when it comes to frontier AI rules. Even absent a federal framework, California and New York’s harmonization means that developers are not yet facing the substantial additional compliance burden in frontier AI policy that preemption advocates anticipated.
With California and New York aligning, the next question is whether other states will join them—and whether the federal government might adopt a similar standard itself.
The RAISE Act, which will come into effect on January 1, 2027, draws heavily on the “trust but verify” framework of SB-53; in many cases, it directly copies the text of the California law. Most of the legislative findings (which frame the bill’s objectives and provide interpretive guidance about what the legislature believed and intended) are the same. RAISE borrows many of the key definitions from SB-53, including those for catastrophic risk, critical safety incident, and foundation model. Both laws apply their strictest requirements to AI models trained using more than 10^26 floating-point operations (FLOPs) and to companies whose gross revenue exceeded $500 million in the previous year.
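To make those thresholds concrete, the sketch below expresses the coverage test as the article describes it. This is an illustration only: the function and variable names are invented, and the statutes define the relevant terms in far more detail.

```python
# Illustrative sketch only. The names below are hypothetical, not drawn from
# either statute, and the laws' actual definitions are considerably more detailed.

TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26      # training compute, in floating-point operations
ANNUAL_REVENUE_THRESHOLD_USD = 500_000_000   # prior-year gross revenue


def strictest_tier_applies(training_flops: float, annual_revenue_usd: float) -> bool:
    """Rough check of whether a model and its developer fall into the strictest
    tier described above: a model trained using more than 10^26 floating-point
    operations, built by a company whose gross revenue exceeded $500 million
    in the previous year."""
    return (training_flops > TRAINING_COMPUTE_THRESHOLD_FLOPS
            and annual_revenue_usd > ANNUAL_REVENUE_THRESHOLD_USD)


# Example: a hypothetical 3e26-FLOP training run by a developer with $2 billion in revenue
print(strictest_tier_applies(3e26, 2_000_000_000))  # True
```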
At the heart of both bills is an identical set of transparency requirements for frontier AI development. Like SB-53, RAISE requires companies to publish their approach to safety testing, risk mitigation, incident response, and cybersecurity controls. Companies can choose their methods and standards but must then adhere to whatever commitments they’ve made. They also have to report severe harms caused by AI—primarily those involving death, bodily injury, or major economic damage, along with deceptive model behavior that materially increases catastrophic risk—to government officials.
Notably, the RAISE Act, like SB-53, requires even models that are deployed only internally—that is, for the private use of the companies developing them—to be covered by the frontier AI framework. The law also attempts to reduce the risk that state and federal requirements impose duplicative or conflicting burdens on developers. Drawing on SB-53, RAISE allows New York to designate a federal rule or standard as equivalent to the state transparency standard. If that happens, companies can comply with the state requirements simply by meeting the federal rule.
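The logic of that equivalence pathway can be sketched simply. The snippet below is an interpretation of the mechanism described above, not language drawn from the statute, and the names are hypothetical.

```python
# Illustrative sketch of the federal-equivalence pathway, assuming the simple
# reading described in the text; the designation process itself is set out in the law.

def satisfies_ny_transparency(meets_state_standard: bool,
                              meets_federal_standard: bool,
                              federal_rule_designated_equivalent: bool) -> bool:
    """A developer satisfies the New York transparency requirement either by
    meeting the state standard directly or, if New York has designated a
    federal rule as equivalent, by meeting that federal rule instead."""
    if federal_rule_designated_equivalent and meets_federal_standard:
        return True
    return meets_state_standard


# Example: a developer that complies only with a designated-equivalent federal standard
print(satisfies_ny_transparency(False, True, True))  # True
```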
RAISE didn’t start out as a carbon copy of SB-53. In its original form, the bill contained several stricter requirements for AI developers, including a bar on deploying models in New York that posed unreasonable risks of death or injury, broader incident reporting requirements, and higher fines for violations. After the state legislature passed the bill, Governor Kathy Hochul negotiated with the bill’s authors, pushing to bring the text more closely in line with SB-53. Hochul didn’t give a public explanation for the shift, but it likely reflected a judgment that the SB-53 framework would be less vulnerable to industry blowback.
Although the final version of RAISE overlaps heavily with SB-53, the two laws aren’t identical. Unlike SB-53, RAISE doesn’t include new protections for whistleblowers inside AI companies, likely because New York already has general whistleblower protections that may apply. RAISE also creates a new office focused on AI governance, the Office of Digital Innovation, Governance, Integrity, and Trust (DIGIT), which will receive company reports and can create new transparency requirements through regulation. SB-53, by contrast, relied on existing state agencies.
The biggest question raised by the new law is whether it will be overtaken by a federal effort to supersede state regulation of AI, and whether any such federal rule would hew closely to the provisions of SB-53 and the RAISE Act or take a different and possibly more laissez-faire approach. President Donald Trump’s administration has announced renewed efforts to preempt state AI laws through either federal regulation or congressional action. Although two attempts to bar state AI regulation failed in Congress last year, some members of Congress, such as California Representative Jay Obernolte, have expressed interest in reviving the effort, and the White House issued an executive order in December promising federal rules and draft legislation. Some parts of SB-53 and RAISE, such as incident reporting, are included in the president’s AI Action Plan.
Congressional action is the most likely route for any or all of RAISE to be preempted. Without it, there is likely little the administration can do to block the law. Trump’s December executive order directs the Federal Communications Commission (FCC) to consider adopting a “reporting and disclosure standard” with the goal of preempting state laws, presumably targeting incident reporting and safety-disclosure requirements like those in RAISE. But the FCC hasn’t issued rules governing AI developers in this way before, and it likely doesn’t have the legal authority to override state legislation on AI.
Even without regulatory authority, the FCC or another federal agency might issue nonbinding guidance on transparency and reporting. That guidance wouldn’t preempt state laws such as RAISE, but states could allow companies to satisfy state requirements by complying with the federal standard.
More broadly, the RAISE Act and SB-53 no longer seem to be the primary targets of the administration’s attack on state AI laws. A leaked draft of the executive order gave SB-53 as an example of the kind of “burdensome” law the order aimed to block, but the published version no longer mentions it. Shifting the RAISE Act to closely resemble SB-53 was widely seen as a concession to the tech industry, suggesting that many companies are unlikely to lobby aggressively to abolish such transparency rules.
Even if the federal government’s pathway toward a unified framework remains unclear, states may be moving toward an initial consensus on frontier AI policy. Governors of both parties have signaled a desire to regulate AI, and transparency-focused bills that overlap with SB-53 have been introduced in Michigan and Utah. Many observers feared last year that an explosion of state legislation on AI would create a thicket of AI regulations that would stifle American AI development. So far, that has largely not come to pass.
That still leaves California and New York with big questions about how the “trust but verify” framework will be implemented and whether the states have the capacity and expertise to enforce these laws. For example, it’s not clear what governments are meant to do with company risk reports once they receive them: Neither SB-53 nor the RAISE Act offers a framework for analyzing critical safety incidents or internal deployment reports. Both laws require state agencies to produce expert reports explaining whether the laws should be updated. And New York’s efforts to centralize transparency and accountability represent a critical test of state-level enforcement capacity. Whether, and how, these new laws are actually enforced will determine whether “trust but verify” frameworks create genuine transparency and accountability or prove merely symbolic efforts to guide the frontier of AI.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.