Following weeks of intense debate, Congress has now managed to pass H.R. 1, its landmark tax and spending package to reorient federal investment and deliver numerous Republican priorities. Among a host of changes, the law will permanently extend President Donald Trump’s 2017 tax cuts and curb federal support for causes such as Medicaid expansion, electric vehicles, and renewable energy development.
For the AI community, the most important part of the law is something it does not do—but very nearly did. As enacted, the legislation leaves untouched, for now, the authority of state and local governments to regulate AI technologies and companies, from multinational platforms vying to develop superintelligent systems to the smallest startups and everyone in between.
It was a close call. As originally passed by the House of Representatives, the bill would have prohibited state and local governments from “limiting, restricting, or otherwise regulating AI models, systems, or automated decision systems” for ten years, excepting only laws of general (not AI-focused) applicability and laws meant to spur the spread of AI technologies. In the Senate’s version of the bill, Republicans first sought to downshift the mandatory moratorium into a conditional one, dangling federal funds in exchange for a voluntary “pause” in AI regulation, before scrapping the measure entirely.
Debate over the moratorium has thrust federalism questions to the forefront of national AI policy. With the Trump administration working to finalize its own AI Action Plan and states flexing their authority with increasingly active regulation, policymakers and the AI community are left pondering whether federal preemption is now off the table and what implications the near-enactment of the state regulatory “pause” might hold for the future.
If the past is any guide, disagreement over the appropriate interplay of state and federal policy is unlikely to recede. The nearest precedent for the regulatory moratorium debate, the U.S. digital privacy landscape, has been marked for years by two phenomena: states acting to fill a federal vacuum and federal lawmakers, particularly in the Republican Party, urging Congress to preempt the growing patchwork of state privacy laws.
Disagreement over federal preemption has doomed recent congressional attempts to pass federal privacy legislation. Just this week, in renouncing support for a compromise that would have preserved the bill’s conditional moratorium, Senator Marsha Blackburn argued that “[u]ntil Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.” Tension between state and federal control of technology policy is a durable feature of the current landscape, and at least one supporter of the regulatory moratorium has already predicted it will reemerge.
How key actors navigate this tension will have a significant impact on the development of U.S. AI policy.
For Congress, the collapse of majority-party support, with Republican governors and the House Freedom Caucus alike rallying against the regulatory moratorium, could deter lawmakers from resurrecting the proposal anytime soon. If they are not deterred, trying again would require either another reconciliation vehicle or a bipartisan compromise. The latter seems particularly unlikely in the current climate. If feasible, it would presumably involve enacting a federal framework that builds on the bipartisan 2024 roadmap and final report released by the Senate and House working groups, respectively. Such an attempt could pair funding for AI research, development, and deployment with action, however incomplete, on risks such as cybersecurity, children’s safety, and the governance of public sector systems. But as congressional attention turns to the midterm elections in the months ahead, this already vanishingly narrow window may close.
Still, the onward march of state-level activity, such as New York’s significant progress last month toward enacting meaningful AI safety legislation modeled on California’s contentious, ultimately vetoed AI safety bill, could rekindle the issue. Either way, Congress should consider that by seriously tackling core concerns related to public safety and security, it can more credibly lay claim to federal primacy in those areas, reinforcing rather than undermining the foundations of a thriving domestic innovation capacity.
For states, the near-miss of a federal moratorium could have divergent outcomes. On the one hand, some jurisdictions may be reluctant to stray from federal orthodoxy, wary of provoking intraparty tensions or a renewed push to federalize AI policy. On the other hand, policymakers elsewhere may be emboldened, free for now from the specter of a federal ban.
One reality occasionally missing from the moratorium debate was that states have generally acted with considerable self-restraint, disciplined more by a desire to remain competitive as havens for innovation and investment than by federal coercion. For now, the moratorium’s failure means that a meaningful part of U.S. AI policy will be set in state capitals rather than Washington. Even so, continued corporate and competitive pressures will likely temper the expansion of state mandates.
For the Trump administration, which has been clear in its desire to remove “barriers” to U.S. AI leadership, states’ continued ability to set countervailing policies will be an irritant. Given the diversity of Republican views evident in the debate, the administration may face a strategic choice about how aggressively to act. At the far end of the spectrum, an administration keen on executive power could seek to preempt nonconforming state policies directly through agency rulemaking. In opposing the California AI safety bill, OpenAI argued that the state was intruding on federal competence and jurisdiction over national security issues. It is not a stretch to imagine the present Justice Department taking up that argument should states impose safety or governance constraints in tension with the administration’s policy of unfettered progress in strategic competition with China. If so, litigation would be likely. At the less risky end of the spectrum, the administration might simply criticize policies from Sacramento, Albany, and elsewhere as heavy-handed, job-killing intrusions on the innovation ecosystem or threats to U.S. national security, a useful contrast as the campaign season heats up.
Finally, the technology sector must now continue to navigate a multipolar landscape. Many of the largest frontier model developers, such as Meta, Google, and OpenAI, invested capital and prestige in supporting a regulatory moratorium. They may continue to enjoy a relatively accommodative federal landscape over the remainder of the 119th Congress (albeit with strings attached), though that is not certain. Other elements of their federal engagement have borne greater fruit. Still, they may have some explaining to do in state capitals.
Regardless, AI companies will be looking for parameters on which to base future support or opposition, assuming state governments continue to propose new rules. Measures imposing criminal liability or mandatory shutdown requirements, or policies threatening companies’ cost structures or burdening their ability to train and release increasingly powerful models, will likely draw fierce opposition. Short of those red lines, however, constructive engagement is possible. Companies can acknowledge states’ legitimate concerns over possible risks while encouraging them to court investment and channeling the energy behind regulatory activism toward policies that address risk in a minimally burdensome fashion. Examples could include harmonization (minimizing unnecessary variation from one state to the next) and clarification of which governance measures (such as risk assessment, testing, and documentation) will satisfy companies’ obligations under existing law, which already requires them to take reasonable care in designing and offering AI systems for public use.
The regulatory moratorium has failed, for now, but the federalism tensions that produced it remain. Going forward, Congress, the administration, and the private sector should take seriously the contribution of states, as well as their own opportunity and obligation to lay a durable foundation for prosperity and competitiveness by working to promote sound AI governance.