State legislators believe they have a window to shape the future of AI governance for their own citizens—and for everyone else.
The dramatic collapse of the proposed federal AI regulatory moratorium, which was defeated last month in a 99-to-1 Senate vote, has put the spotlight back on state efforts to govern AI. Across the country, state legislators are pushing to establish frameworks that they hope will protect their citizens from AI risks and perhaps, in the absence of congressional action, become de facto national standards.
The most significant of these proposed laws are three recent bills that focus on the models advancing the frontier of AI capabilities and the unique risks those models may pose. These bills coalesce around a core set of policies to strengthen transparency, although they differ in some notable provisions. That focus on transparency is in part a response to the controversy over the California AI bill, known as SB-1047, vetoed by California Governor Gavin Newsom last year. All three bills depart from some of SB-1047’s most controversial provisions, such as liability for AI developers. But the bills’ differences show that debates over what comes next in AI governance aren’t settled yet.
In California, Democratic State Senator Scott Wiener, the author of SB-1047, this week proposed a set of policies that would increase transparency requirements on frontier AI companies and strengthen whistleblower protections for AI workers, amending a previously introduced bill known as SB-53. In New York, the state legislature last month passed the Responsible AI Safety and Education (RAISE) Act, which would bar model releases that pose certain risks, impose transparency requirements on model developers, and require AI companies to report safety incidents to the public. And in Michigan, Republican State Representative Sarah Lightner recently introduced the Artificial Intelligence Safety and Security Transparency Act, which would create similar transparency requirements and whistleblower protections while also subjecting AI developers to third-party audits.
The interplay between these frontier AI bills—and whatever new proposals emerge—will shape the future of the AI debate not only in the states, but also on Capitol Hill. If states end up with dramatically different approaches, they could create the compliance patchwork that proponents of the AI moratorium feared. But if they converge on a similar set of principles, they could lay the groundwork for broader, harmonized standards. That makes it important for observers to understand what each bill would and wouldn't do and to watch closely as state efforts develop.
Much of the current debate over state regulation of frontier AI stems from the controversy over SB-1047. When it was introduced in February 2024, SB-1047 was the first bill at the state or national level focused specifically on extreme risks from frontier AI models. Warning that AI could enable the proliferation of weapons of mass destruction and dangerous cyber capabilities, the bill’s proponents argued that if Congress wasn’t going to put guardrails on AI, California should. In the months that followed, legislators, industry, and civil society debated what measures were appropriate, how to decide which models would be covered, and whether states should pass laws aimed at addressing frontier AI risks at all.
SB-1047 proved controversial for many reasons. For example, it would have imposed new statutory liability on AI developers whose models caused or materially enabled “critical harms,” a move that could have created significant additional legal risk for AI companies. The bill would also have required developers to build “full shutdown” capabilities into their models and mandated that providers of cloud computing services monitor their customers’ AI development activities. The shutdown provision provoked debate about its impact on open-source development, and the cloud customer monitoring requirements sparked worries about surveillance and mandatory sharing of business information between competitors.
SB-1047 also came in for criticism for relying on a rigid computational threshold for regulation. The policies the bill outlined would have applied to any model trained on more than 10²⁶ floating point operations (FLOPs) that cost more than $100 million to produce. Critics argued that without the ability to update the threshold to incorporate other metrics as the technology developed, the bill risked becoming obsolete, potentially capturing routine, nonfrontier AI development while missing smaller models that achieved concerning capabilities through more efficient training methods.
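To make the critics' concern concrete, the coverage rule amounted to a simple two-part test. The sketch below is illustrative only, not drawn from the bill's statutory language, and the function and variable names are hypothetical.

```python
# Illustrative sketch of SB-1047's fixed coverage test (hypothetical names,
# not the bill's text). A model was covered only if it crossed BOTH the
# compute threshold and the training-cost threshold.

FLOP_THRESHOLD = 1e26           # training compute, in floating point operations
COST_THRESHOLD = 100_000_000    # training cost, in U.S. dollars

def covered_by_threshold(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model would have crossed both fixed thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

# A very large, expensive training run is captured even if its capabilities
# turn out to be routine...
print(covered_by_threshold(2e26, 150_000_000))   # True
# ...while an efficiently trained smaller model escapes coverage regardless
# of how capable it proves to be.
print(covered_by_threshold(5e25, 20_000_000))    # False
```

As the critics noted, a static test of this kind cannot be updated to reflect new metrics or cheaper training methods without amending the statute itself.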
Partly in response to the controversy over SB-1047, several state legislative efforts have shifted toward a greater focus on transparency and away from liability. The three most significant recent bills—California's SB-53, New York's RAISE Act, and Michigan's AI Transparency Act—share several important features (see the appendix for more detail).
The three bills are at different stages, and they could all still change in response to feedback from industry, civil society, other states, and the federal government. The RAISE Act is the furthest along, having passed the New York legislature. But Governor Kathy Hochul has until December 31, 2025, to negotiate with legislative leaders and the act's sponsors, Assemblymember Alex Bores and Senator Andrew Gounardes, if she wants to revise the bill. As a result, there is time for developments in California, Michigan, and elsewhere to influence Hochul's decision.
SB-53 is earlier in the process. It still needs to pass through multiple committees, followed by the full California Assembly and Senate, before it reaches the governor's desk. An upcoming hearing in the Assembly's Committee on Privacy and Consumer Protection on July 16 will offer the first indication of whether SB-53 will be further amended. The Michigan bill is the newest, having been introduced in late June.
It’s not yet clear how other actors, including other U.S. states, the federal government, and foreign countries, will react if some or all of these proposals become law. But given the size and importance of the states involved, laws they enact are likely to influence what happens beyond their borders. For now at least, state legislators believe they have a window to shape the future of AI governance for their own citizens—and for everyone else.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.