Who should govern AI—U.S. state governments, the federal government, or both? This question, once an abstract concern, has recently moved to the center of practical policymaking. In July, after weeks of intense debate, an attempt to restrict states from regulating the technology was defeated in the Senate. But preemption—the idea that Congress should override state-level laws with national-level standards—may soon return in a new guise. Members of Congress are writing fresh proposals. The administration’s AI Action Plan endorsed a more limited version of the concept. And big tech companies will continue to push for freedom from state-level regulation.
Supporters of preemption argue that innovation will suffer if AI companies are subject to a tangle of state rules. Opponents counter that the rapid pace of AI development makes policy experimentation more valuable than uniformity, and that state governments are accustomed to regulating many of the sectors, from policing to education, where AI is being deployed.
Yet a crucial fact has largely been missing from the debate: we have been here before. Many times over the past century, Congress has confronted a new technology—from nuclear power to genetically modified food—and has had to decide whether, and how, to preempt state authority to regulate it.
What can we learn from that history? The first takeaway concerns Congress’s purposes. In general, when Congress has preempted state laws on emerging technology, it appears to have done so to advance three main goals: to prevent conflicting state laws from fragmenting a national industry; to restrain outlier states from effectively imposing their preferred rules on the rest of the country; or to draw on federal expertise, especially in areas related to national security. These are broad categories, and there are other potential drivers of federal action, such as crisis response, that do not fit neatly into any one bucket. But these goals explain many of the major moves toward preemption.
Several additional lessons stand out from recent history:
- Congress does not typically preempt state law without a federal replacement. Indeed, the process of deciding on the federal rule often helps clarify which aspects of state authority should be wiped out.
- Federal regulatory action is most likely when a compromise is possible between a pro-regulatory camp and an industry coalition that fears multiplying state requirements more than any single federal rule. Major preemption laws have thus typically been bipartisan. The sweet spot seems to be the period after conflicting laws have started to multiply but before state regulation is deeply entrenched.
- The mere fact of diverging state laws is not typically enough to justify preemption. The regulated industry must form a truly national market, one in which economies of scale outweigh the benefits of local choice over governance.
- Legislators do not need to work out the final division between federal and state governments all in one go. The governance of many past technologies evolved as regulators experimented and industry adapted. When states push too far too fast, Congress has often been capable of responding rapidly to shore up federal interests.
- It is rare for Congress to preempt an entire policy area. Instead, a combination of political and policy motivations usually leads it to carve out areas of respective, or overlapping, state and federal responsibility.
Of course, the fact that these principles emerge from the history of federal regulation does not mean that Congress has always gotten it right. The balance between federal and state control is a tricky one; both too much and too little federal intervention can leave citizens unprotected or create bottlenecks to innovation. Yet past experience provides a useful guide to current debates. Applying these lessons to AI today yields a mixed picture:
- The much-feared chaos of conflicting state regulations over a national industry has not materialized. Most states have focused their regulations on specific AI uses that fall within traditional state authority. A few states are debating upstream regulations on AI developers, but these largely focus on transparency, not substantive requirements.
- No outlier state is on the verge of setting a de facto national policy across the AI stack. One state, Colorado, has passed a sweeping new AI regulation that includes extensive developer requirements. But other states have so far declined to follow suit, and Colorado’s own leaders are already considering revisions before the law goes into effect.
- The strongest case for federal regulation is in AI model development—a national industry where the federal government is building unique expertise. National security risks are one obvious starting point for Congress. Transparency rules are another, although experimentation at the state level may be valuable in the near term.
In sum, universal preemption is likely unnecessary for now, but some initial federal steps make sense, with an eye toward revisiting the federal-state boundary over time. AI is a new, complex, and hard-to-define technology. Policymakers will need to embrace uncertainty and experiment with different approaches before landing on the right rules and the right division of responsibilities.
The Theory of Preemption
Much of the debate over the recently defeated moratorium has so far suffered from a key conceptual confusion. Because the original provision would have barred the enforcement of most state AI laws without supplying a federal replacement, the discussion focused on the case for and against preemption in the abstract. But when Congress has preempted state laws in the past, it has typically replaced them with a substantive federal rule.1 Indeed, the Supreme Court has held that Congress may not bar states from enacting laws in a given field unless it also creates an associated federal scheme that “regulates the conduct of [the] private actors” involved.2
Seen in that light, the question becomes not whether the federal government should bar state regulation, but whether it should step in to regulate AI itself and in the process displace some, or all, state law. A prominent strain of economic theory suggests that governance of each issue should be set by the most local level of government that has the ability to manage it and can fully internalize the effects of its policies.3 In general, states are better positioned to regulate when an economic market is highly differentiated, when consumers will benefit more from rules that fit their local situation than firms will suffer from nonuniformity, and when local regulators have the right expertise. Varied local policies allow for experimentation and adjustment over time.
Central authority, on the other hand, works best in markets with significant economies of scale—think commercial aviation or drug development—where nationwide rules eliminate the need for conflicting compliance regimes. These efficiencies are often reinforced by network effects, as in telecommunications or payments, where fragmentation reduces the value of the service for everyone. In such settings, it is also often more effective to centralize regulatory expertise, allowing a single national body to develop and apply specialized knowledge consistently across the market. In this world, the informational advantages states have in regulating their local economies are outweighed by their inability to determine, or to account for, the costs that a local regulation imposes on the entire nation.
Of course, state governance may have a place even in markets that are, in theory, best suited to federal control. The division of labor between the state and federal governments is more a spectrum than a binary. If local legislators believe that regulation is necessary but political gridlock makes national action implausible, they may step in, perhaps with the goal of forcing federal intervention. Even if federal action is on the horizon, a state may wish to legislate on the topic. It might want to demonstrate that a particular approach is politically or practically viable—or to establish facts on the ground that will strengthen its hand in negotiating alterations to, or exemptions from, federal rules. Policymakers should think of their authority as increasing or decreasing with the national nature of the market, not as turning on or off completely.
Promoting Uniformity
Many advocates for preemption have emphasized the threat that conflicting laws pose to AI innovation. This theme has been a consistent one in past debates over federalism. Take air travel. In 1958, Congress created the Federal Aviation Agency (later the Federal Aviation Administration) in response to a series of fatal crashes between military and civilian aircraft that were operating under separate sets of flight rules.4 The agency was designed to promote “the safe and efficient use of the nation’s airspace” by replacing state law with “a uniform and exclusive system of federal [airspace] regulation.”5 In 1978, Congress streamlined things even further when it pared back federal regulation and expressly barred states from enacting rules relating to airline “rates, routes, or services.”6
A similar story played out with nutrition labels on food. Agriculture was once a local business, but by the late twentieth century, food production had become a decidedly national industry. As a result, when, in the late 1980s, numerous states began considering a raft of labeling bills that the industry feared would create a hard-to-follow tangle of regulation, food manufacturers lobbied Congress to preempt state labeling laws.7 At first, the George H.W. Bush administration was skeptical of the need for preemption, and state leaders feared that weak federal rules and distant enforcement would leave consumers unprotected.8 But when legislators in Congress crafted tough rules and empowered states to enforce, although not to change, them, the administration got behind the push for uniformity.9
Thus, preemption can make the most sense once the problem of conflicting laws has begun to emerge, but before state policies become deeply entrenched. Regulations on vehicle emissions tell a similar tale. In the early 1960s, in response to worsening smog over Los Angeles, California issued the first limits on automobile emissions in the United States. Although industry had opposed the regulations, once they were enacted, firms switched to arguing that California presented a special case, and that neither the federal government nor other states needed to follow suit. It was not until other states, including New York and Pennsylvania, began considering their own emissions regulations a few years later that industry called for national standards. As a result, in 1965, Congress gave the executive branch the authority to write vehicle emissions rules, and in 1967, after a protracted preemption debate, agreed to override state rules on vehicle pollution control. Only California, whose rules were grandfathered in, was allowed to seek a waiver.10 Once again, the threat of conflicting rules at the state level—and the demonstration of successful coalition-building by state politicians—helped drive federal standard-setting. Industry learned that it could live with two standards—California and the rest of the country—but not more.
How Much Uniformity Does AI Need Today?
When it comes to AI, the case for uniformity varies widely across different policy areas. Many uses of AI, from systems that evaluate mortgage applications to the deployment of self-driving cars, fall into areas where states have traditional advantages in policymaking and local experimentation can serve everyone well.
But some aspects, most notably the development of foundation models, form a more fully national market with significant economies of scale. This sector features fairly homogeneous goods, in the form of large models; significant up-front costs, including research spending and datacenter hardware; and relatively low marginal costs when the model is served to individual customers.11 So-called economies of scope are also high, as AI developers can offer multiple products based on the same foundation model.12 Model development is thus likely to benefit from uniform governance, making the case for federal intervention stronger than in other areas.
But is federal intervention necessary now? So far, the threat of conflicting state laws covering model development is plausible but unrealized. Most state legislation has focused on specific use cases, not AI development. In 2024, forty-one states passed 107 laws relating directly to AI, according to the NYU Center on Tech Policy.13 Most were relatively narrowly scoped. Twenty-two states passed laws barring the use of AI to generate child sexual abuse material (CSAM) or other nonconsensual sexual imagery. Another seventeen states passed laws restricting, or requiring disclosure of, the use of AI in political advertising. And three states passed laws limiting the use of an individual’s likeness or voice in AI-generated content. Many of the remaining laws involved little regulation at all: sixteen states enacted laws establishing bodies to evaluate the use and impact of AI in state government and society, and fifteen states created new programs to fund research into the use of AI in sectors like education and local government.
Only one state, Colorado, enacted a more comprehensive piece of AI regulation that imposes significant requirements on AI development. The state’s AI Act, which regulates multiple stages of AI model development and use, imposes a suite of requirements on developers and deployers if their systems are used to make “consequential” decisions, including in housing, employment, health care, and education.14 When the law takes effect on June 30, 2026, developers will have to follow a wide variety of documentation requirements; conduct “algorithmic impact assessments” for the risk of racial, gender, and political bias; and face liability if they do not take “reasonable” care to protect consumers from the risk of discrimination.15
Advocates for preemption have pointed to this law as an example of the kind of burdensome state regulation that could soon proliferate. They are right that Colorado’s law represents a significant regulatory expansion. Courts typically find liability for discrimination only after it has occurred, and liability primarily attaches to the party that discriminates, such as an employer or landlord; it is harder to sue a third-party software supplier. Colorado’s law, by contrast, imposes new requirements on developers before their systems have ever been used.
Yet while the law emerged from a multistate AI working group, and several other states have considered such laws, only Colorado has so far passed one. Virginia Governor Glenn Youngkin vetoed one such effort earlier this year. In Texas, a similar bill was narrowed significantly before it was enacted in June. The Texas law is now limited to clarifying that existing antidiscrimination law applies to AI systems, prohibiting the intentional use of AI to violate constitutional rights, and requiring state agencies to disclose their use of AI.16 California’s SB-1047, a major proposal to regulate developers—in that case, to reduce potential extreme risks caused by AI systems—was vetoed last year by Governor Gavin Newsom. If Colorado remains alone in its extensive requirements on AI developers, the case for preemption will be considerably weaker.
Although broad state-level AI bills have lost momentum, three states are considering more targeted bills that focus on risks posed by the most advanced AI models, such as their potential to enable nonexperts to develop weapons of mass destruction or carry out large-scale cyberattacks. These bills differ in their details, but they would impose largely similar transparency requirements and whistleblower protections on major AI developers without creating new substantive rules or liability for AI harm.
As a result, transparency rules might well be a fruitful place for federal AI regulation, and perhaps preemption, to begin. Given the rapid pace of AI development, greater transparency, incident reporting, and whistleblower safeguards for foundation model developers are both reasonable substantive first steps and natural points for federal intervention.
Even if broader federal action is not necessary yet, the potential for state-level chaos remains real. Some states are considering new rules on development that go beyond transparency requirements. New York’s transparency bill would bar model releases in the state that pose “unreasonable” risks of deadly or costly incidents. A proposed bill in California would require developers to conduct performance evaluations of systems that are used to make consequential decisions, and deployers of those systems would need to conduct third-party audits. A bill introduced in Nebraska contains similar requirements. These proposals, and others like them, may well be sensible public policy, but if they create numerous conflicting or overlapping requirements for AI developers and deployers, they will likely be less efficient than a single federal standard. Congress should watch developments closely to determine whether it needs to intervene.
Holding Back Outlier States
Although Congress most frequently steps in to prevent a patchwork of laws among several states, it sometimes acts in response to a single outlier. Several Republican advocates for the AI moratorium have argued that Congress should not let a few major Democratic states like California and New York decide the nation’s AI policy. Yet history suggests that several factors need to be in place for congressional action to make sense in this context. Not only must a state have adopted an outlier rule; the rule must also be substantively flawed, and there must be a real prospect that it will become a de facto national standard.
The clearest example of this phenomenon comes from the field of genetically modified (GM) food. In 2014, Vermont enacted a law requiring food producers to add labels identifying GM products. The law prompted a rapid backlash, and less than a month after it went into effect, then-president Barack Obama signed legislation barring state-level GM labeling requirements and replacing them with a far weaker federal standard.17 At the time, no other state had a law on the books requiring GM labeling. A single aggressive state had jolted Congress into action.
Yet the federal response was the result not merely of Vermont’s outlier status, but also of the substantive problems with Vermont’s rule—a broad expert consensus held that GM foods posed no additional risk, and many believed that prominent labels would confuse, not help, consumers—and the possibility that other states would follow Vermont’s lead. In 2013 and 2014, Connecticut and Maine had passed labeling requirements that were set to take effect if enough other states passed similar laws.18 About half the states, including New York, had pending GM labeling bills or ballot initiatives.19 Industry was thus concerned more by the substance of Vermont’s rule and the prospect of widespread state action than by the fear of a single state remaining an outlier.
When it comes to AI, the possibility that one state will seize the chance to impose an extreme policy on everyone else seems to be, for now, more theoretical than real. Many supporters of federal preemption have pointed to California’s SB-1047 as an example of the kind of developer- and model-focused law that could have sweeping effects outside California. Yet SB-1047 is dead, leaving Colorado as the best candidate for an outlier state. The Colorado AI Act arguably meets the first and second conditions for congressional action: the state stands alone in the breadth of its rules, and there are reasonable critiques of its approach. Even Colorado Governor Jared Polis, who signed the bill, has expressed reservations about its sweep, and called a special legislative session in August to consider amendments. Although lawmakers did not agree on revisions to the law, they did delay its implementation date.
But the third condition remains unfulfilled: Colorado’s model has not yet taken off elsewhere—and it might not take hold even in Colorado, if Polis continues to push for amendments. Texas, the state that seemed most likely to follow Colorado’s lead, passed a far more lightweight set of rules. None of the current transparency-focused state proposals come close to the sweeping nature of Colorado’s AI Act or California’s abandoned SB-1047, and other states do not seem to be on the brink of action. The experience of Vermont’s effort to require GM disclosures on food labels suggests, moreover, that if a state does set rules that impose unacceptable costs on the national AI industry, Congress is well equipped to step in quickly, especially as many state AI laws include implementation periods before they become effective.
The Need for Federal Expertise
A third major category of federal intervention arises when a field demands expertise that the federal government is best positioned to provide. Many highly technical fields require sustained institutional scientific capacity to evaluate complex safety trade-offs. At least in theory, recruiting and retaining the necessary expertise in one place, rather than dispersing it across the states, can allow the federal government to regulate more effectively.
In health care, for example, Congress gradually expanded the authority of the Food and Drug Administration (FDA) over the course of the twentieth century to cover a growing range of products (prescription and nonprescription drugs, branded and generic drugs, medical devices, and so on) as well as new contexts in which consumers encountered them (like advertising and labeling).20 Express preemption language first appeared in the 1966 Fair Packaging and Labeling Act21 and the 1976 Medical Device Amendments, in part in response to high-profile accidents.22 Today, thousands of scientists work at the FDA, and the agency has come to be regarded as the global gold standard for the evaluation of pharmaceutical products.
Nuclear energy is another domain where Congress has long deemed federal expertise essential. In the postwar period, the Atomic Energy Commission was created to both promote and regulate the technology. Mounting concerns over safety, exacerbated by such high-profile accidents as the fatal SL-1 reactor accident in Idaho in 1961 and the partial meltdown at Fermi Unit 1 in Michigan in 1966, prompted Congress to stand up a new, independent agency, the Nuclear Regulatory Commission (NRC), to act as the field’s lead regulator. Today, the NRC issues licenses and regulations, maintains two on-site inspectors at every commercial nuclear facility, and operates a dedicated Incident Response Program to address emergencies. As with the FDA’s evaluation of new drugs, such complex and long-term regulatory demands are generally beyond the capabilities of most state agencies, which lack the institutional scale or technical bench to perform comparable functions.
When it comes to AI, some aspects of the technology are well suited to the centralization of knowledge. The need for such expertise is clearest in two areas: the development of general-purpose AI models and the extreme risks those models pose, especially those related to national security, such as the potential for AI to accelerate the proliferation of chemical and biological weapons. In addition to the substantial existing scientific and national security expertise in the federal government, federal agencies will find it far easier than states to attract the technical talent necessary to evaluate and govern AI development. In many other areas where AI is deployed, from housing to insurance, states will be better positioned, thanks to their existing understanding of the relevant local market.
The federal government has already taken some steps toward building the right kind of expertise. The Center for AI Standards and Innovation (formerly the AI Safety Institute) employs both technical researchers and policy experts. The center is responsible for evaluating national security risks posed by AI models, producing guidelines for the security of AI systems, and assessing the threats posed by U.S. adversaries’ use of AI. Although the U.S. government still has a long way to go to match the capacity that some other governments have built—most notably the United Kingdom, through its AI Security Institute—these efforts are the right place to start creating the technical capacity necessary to govern AI.
Conclusion
Determining the right balance between state and federal regulation of AI will take time and careful judgment. The governance of many past technologies, from air travel to medicine, evolved as regulators experimented with different approaches and industry adapted to new rules. AI is new enough, and the field encompasses so many different economic sectors, that an all-or-nothing approach is likely the wrong answer.
What’s more, both supporters and opponents of preemption should recognize that the consequences of federal intervention are hard to predict. Many backers of the recent moratorium suggested that preemption would boost innovation, but federal control is not guaranteed to reduce regulatory barriers. The Nuclear Regulatory Commission and the Food and Drug Administration—both of which are designed to promote uniformity in major markets—have been criticized for imposing regulatory burdens heavy enough to slow innovation well past any optimal safety-efficiency trade-off.23 If a preemptive scheme results in a single regulator, that body can act as a bottleneck to new inventions reaching the market. Those who want to see continued rapid innovation in AI should take care not to stifle development in a rush to centralize political control.
Legislators do not need to work out the final division of labor between the federal government and the states all in one go. Congress should remain nimble in the face of a fast-changing technology, taking incremental steps rather than trying to lock in a rigid system from the outset. A partition between development and deployment is one natural place to start. Some AI uses will likely also call for federal intervention. In many areas, the states and the federal government will be able to cooperate, sharing the burden of regulation and enforcement under a single legal scheme. Whatever it does, Congress should ground any new rules in the principles that have served the federal system well so far.
Notes
1. See Jonathan Remy Nash, “Null Preemption,” Notre Dame L. Rev. 85, no. 3 (2010) (noting that instances of “null preemption,” when Congress wipes out state rules without a federal replacement, are “historically rare”).
2. Murphy v. National Collegiate Athletic Association, 584 U.S. 453 (2018) (noting that the Supreme Court has never upheld “a federal statute that commanded state legislatures to enact or refrain from enacting state law”).
3. See, generally, Charles M. Tiebout, “A Pure Theory of Local Expenditures,” J. Pol. Econ. 64, no. 5 (1956); Robert P. Inman & Daniel L. Rubinfeld, “Rethinking Federalism,” J. Econ. Persp. 11, no. 4 (1997); Wallace Oates, “An Essay on Fiscal Federalism,” J. Econ. Lit. 37, no. 3 (1999).
4. Abdullah v. American Airlines, Inc., 181 F.3d 363, 375 (3d Cir. 1999).
5. Air Line Pilots Association, Int'l v. Quesada, 276 F.2d 892, 894 (2d Cir. 1960); City of Burbank v. Lockheed Air Terminal Inc., 411 U.S. 624, 638-39 (1973).
6. 49 U.S.C. § 41713(b)(1).
7. David Greenberg & Mary Graham, “Improving Communication About New Food Technologies,” Issues in Science & Technology, 2000.
8. Michele M. Bradley, “The States’ Role in Regulating Food Labeling and Advertising: The Effect of the Nutrition Labeling and Education Act of 1990,” Food & Drug L.J. 49, no. 4 (1994).
9. Id. at 658.
10. David Vogel, “The Politics of Preemption: American Federalism and Risk Regulation,” Regulation & Governance 16, no. 4 (2022).
11. See Jon Schmid, Tobias Sytsma, and Anton Schenk, “Evaluating Natural Monopoly Conditions in the AI Foundation Model Market,” RAND, September 12, 2024, https://www.rand.org/pubs/research_reports/RRA3415-1.html.
12. Id.
13. Scott Babwah Brennen and Zeve Sanderson, “The State of State Technology Policy: 2024 Report,” NYU Center on Tech Policy, 2024, https://techpolicynyu.org/wp-content/uploads/2024/12/state-tech-policy_2024_CTP_CSMaP_final.pdf.
14. “Consumer Protections for Artificial Intelligence,” Colorado General Assembly, n.d.
15. Id.
16. Amanda Witt & Jennie Cunningham, “Texas Legislature Passes House Bill 149 to Regulate AI Use,” Nelson Mullins, June 12, 2025, https://www.nelsonmullins.com/insights/alerts/privacy_and_data_security_alert/all/texas-legislature-passes-house-bill-149-to-regulate-ai-use.
17. Jordan Fraboni, “A Federal GMO Labeling Law: How It Creates Uniformity and Protects Consumers,” Berkeley Technology Law Journal 32 (2017).
18. Erin Close, “Maine Becomes Second State to Require GMO Labels,” Norton Rose Fulbright, January 10, 2014. Ballot measures to enact similar laws in California, Oregon, and Washington had failed in 2012 and 2013.
19. Pamela Prah, “Half of States Have Bills About Labeling Genetically Modified Foods,” Stateline, March 13, 2014, https://stateline.org/2014/03/13/many-states-weigh-gmo-labels/.
20. John Shaeffer, “Prescription Drug Advertising—Should States Regulate What Is False and Misleading,” Food & Drug L.J. 58, no. 4 (2003): 629, 634, https://pubmed.ncbi.nlm.nih.gov/15027454/. The succession of acts includes the 1906 Pure Food and Drug Act (PFDA); the 1938 Federal Food, Drug, and Cosmetic Act (FDCA); the 1966 Fair Packaging and Labeling Act (FPLA); the 1976 Medical Device Amendments (MDA); and the 1997 Food and Drug Administration Modernization Act (FDAMA).
21. Congress provided that “it is the express intent of Congress to supersede any and all laws of the States or political subdivisions thereof insofar as they may now or hereafter provide for the labeling of the net quantity of contents of the package of any consumer commodity . . . which are less stringent than or require information different from the requirements of section 1453 of this title or regulations promulgated pursuant thereto.” 15 U.S.C. § 1461 (2024).
22. “Except as provided in subsection (b), no State . . . may establish or continue in effect . . . any requirement . . . (1) which is different from, or in addition to, any requirement applicable under this chapter . . . and (2) which relates to . . . safety or effectiveness . . . or any other matter included in a requirement applicable to the device under this chapter.” 21 U.S.C. § 360k(a) (2024).
23. Doug Bandow, “End the FDA Drug Monopoly: Let Patients Choose Their Medicines,” Cato Institute, June 11, 2012, https://www.cato.org/commentary/end-fda-drug-monopoly-let-patients-choose-their-medicines; Christopher Koopman and Eli Dourado, “A Lawless NRC Obstructs Safe Nuclear Power,” Wall Street Journal, January 5, 2025, https://www.wsj.com/opinion/let-states-run-small-nuclear-reactors-energy-policy-f92488ae; Matthew Yglesias, “Can America Get to Yes on a New Reactor?,” Slow Boring (blog), January 18, 2022, https://www.slowboring.com/p/can-america-get-to-yes-on-a-new-reactor (noting that from 1975 to 2022, the NRC had not approved a new nuclear reactor design from start to finish).