
Mike Johnson speaks at the Capitol after the House passed the One Big Beautiful Bill Act on July 3, 2025. (Photo by Jemal Countess/AFP via Getty Images)

Commentary
Emissary

State AI Regulation Survived a Federal Ban. What Comes Next?

How key actors navigate state-federal tensions in the coming months could significantly impact the development of U.S. AI policy.

By Scott Kohler
Published on Jul 3, 2025

Following weeks of intense debate, Congress has now managed to pass H.R. 1, its landmark tax and spending package to reorient federal investment and deliver numerous Republican priorities. Among a host of changes, the law will permanently extend President Donald Trump’s 2017 tax cuts and curb federal support for causes such as Medicaid expansion, electric vehicles, and renewable energy development.

For the AI community, the most important part of the law is something it does not do—but very nearly did. As enacted, the legislation leaves untouched, for now, the authority of state and local governments to regulate AI technologies and companies, from multinational platforms vying to develop superintelligent systems to the smallest startups and everyone in between.

It was very nearly otherwise. As originally passed by the House of Representatives, the law would have prohibited state and local governments from “limiting, restricting, or otherwise regulating AI models, systems, or automated decision systems” for ten years, excepting only laws of general (not AI-focused) applicability and laws meant to spur the spread of AI technologies. In the Senate’s version of the bill, Republicans sought first to downshift the mandatory moratorium to a conditional one, dangling federal funds in exchange for a voluntary “pause” in AI regulation, before scrapping the measure entirely.

Debate over the moratorium has thrust federalism questions to the forefront of national AI policy. With the Trump administration working to finalize its own AI Action Plan and states flexing their authority with increasingly active regulation, policymakers and the AI community are left pondering whether federal preemption is now off the table and what implications the near-enactment of the state regulatory “pause” might hold for the future.

If the past is any guide, disagreement over the appropriate interplay of state and federal policy is unlikely to recede. The nearest precedent for the regulatory moratorium debate, the U.S. digital privacy landscape, has been marked for years by two phenomena: states acting to fill a federal vacuum and federal lawmakers, particularly in the Republican Party, urging Congress to preempt the growing patchwork of state privacy laws.

Disagreement over federal preemption has doomed recent congressional attempts to pass federal privacy legislation. Just this week, in renouncing support for a compromise that would have preserved the bill’s conditional moratorium, Senator Marsha Blackburn urged that “[u]ntil Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.” Tension between state and federal control of technology policy is a durable part of the current landscape, and at least one supporter of the regulatory moratorium has already predicted it will reemerge.

How key actors navigate this tension will have a significant impact on the development of U.S. AI policy.

For Congress, the collapse of majority-party support, with Republican governors and the House Freedom Caucus alike rallying to oppose the regulatory moratorium, could deter lawmakers from resurrecting the proposal anytime soon. If lawmakers do try again, the paths forward would require either another reconciliation vehicle or bipartisan compromise. The latter seems particularly unlikely in the current climate. If feasible, it would presumably involve enacting a federal framework leveraging the bipartisan 2024 roadmap and final report released respectively by Senate and House working groups. Such an attempt could incorporate funding for AI research, development, and deployment alongside action, however incomplete, on risks such as cybersecurity, children's safety, and the governance of public sector systems. But as congressional attention turns to the midterm elections in the months ahead, this window, already vanishingly narrow, may close.

Still, the onward march of state-level activity, such as New York's significant progress last month toward enacting meaningful AI safety legislation modeled on California's contentious, ultimately vetoed AI safety bill, could rekindle the issue. Either way, Congress should consider that by seriously tackling core concerns related to public safety and security, it can more credibly lay claim to federal primacy in those areas, reinforcing rather than undermining the foundations of a thriving domestic innovation capacity.

For states, the near-miss of a federal moratorium could have divergent outcomes. On the one hand, some jurisdictions may be reluctant to stray from federal orthodoxy, cautious of provoking intraparty tensions or a renewed push to federalize AI policy. On the other hand, policymakers elsewhere may be emboldened, free for now from the specter of a federal ban.

One reality occasionally missing in the moratorium debate was that states have generally acted with considerable self-restraint, disciplined more by a desire to remain competitive as havens for innovation and investment than by federal coercion. For now, the moratorium’s failure means that a meaningful part of U.S. AI policy will be set in state capitals rather than Washington. Even so, continued corporate and competitive pressures will likely temper the expansion of state mandates.

For the Trump administration, which has been clear in its desire to remove "barriers" to U.S. AI leadership, states' continued ability to set countervailing policies will be a persistent irritant. Given the diversity of Republican views evident in the debate, the administration may face a strategic choice on how aggressively to act. At the far end of the spectrum, an administration keen on executive power could seek to preempt nonconforming state policies directly through agency rulemaking. In opposing the California AI safety bill, OpenAI argued that the state was intruding on federal competence and jurisdiction over national security issues. It is not a stretch to imagine the present Justice Department taking up this argument should states impose safety or governance constraints in tension with an administration policy of unfettered progress in strategic competition with China. If so, litigation would be likely. At the less risky end of the spectrum, the administration might simply criticize policies from Sacramento, Albany, and elsewhere as heavy-handed, job-killing intrusions upon the innovation ecosystem or U.S. national security, a useful contrast as the campaign season heats up.

Finally, the technology sector must now continue to navigate a multipolar landscape. Many of the largest frontier model developers, such as Meta, Google, and OpenAI, invested capital and prestige supporting a regulatory moratorium. They may continue to enjoy a relatively accommodative federal landscape over the remainder of the 119th Congress (albeit, with strings attached), though that is not certain. Other elements of their federal engagement have borne greater fruit. Still, they may have explaining to do in state capitals.

Regardless, AI companies will be looking for parameters on which to base future support or opposition, assuming state governments continue to propose new rules. Measures imposing criminal liability or mandatory shutdown requirements, or policies threatening their cost structures or burdening their ability to train and release increasingly powerful models, will likely draw fierce opposition. Short of those red lines, however, constructive engagement is possible. Companies can acknowledge states' legitimate concerns over possible risks while encouraging them to court investment and channeling the energy behind regulatory activism toward policies that address risk in a minimally burdensome fashion. Examples could include harmonization (minimizing unnecessary variation from one state to the next) and clarifying what governance measures (such as risk assessment, testing, and documentation) will satisfy companies' obligations under existing law, which already requires them to take reasonable care in designing and offering AI systems for public use.

The regulatory moratorium has failed. For now. The federalism tensions that produced it remain. Going forward, Congress, the administration, and the private sector should take seriously the contribution of states, as well as their own opportunity and obligation to lay a durable foundation for prosperity and competitiveness by working to promote sound AI governance.

Scott Kohler
Nonresident Scholar, Carnegie California

Topics: AI, Technology, Domestic Politics, Subnational Affairs, United States

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

