Trump and members of his administration during a signing ceremony on AI on December 11. (Photo by Alex Wroblewski/AFP via Getty Images)

The Most Likely Outcomes of Trump’s Order Targeting State AI Laws

In a way, it puts even more of a spotlight on Congress.

Published on December 15, 2025

What does this EO do?

Jon Bateman: President Donald Trump’s new executive order (EO) tells federal agencies to initiate legal attacks on state AI laws seen as “onerous” or possibly unconstitutional. There are several facets of the order, all of which will require cooperation from the courts.

The Justice Department will set up a task force to bring lawsuits challenging state AI laws. The Commerce Department will publish a list of state AI laws deserving of such challenges. All agencies will look for ways to withhold “discretionary grant programs” from states that overregulate AI, with the Commerce Department, in particular, leveraging its broadband assistance fund. The Federal Communications Commission and Federal Trade Commission will issue new regulations and guidance on AI model disclosures and truthfulness, with the goal of superseding state laws on those same issues. Finally, the White House will develop a legislative package for Congress.

The EO is vague about which state laws will be targeted. Colorado’s AI Act, which imposes heavy new burdens on developers and could have national spillovers, is the only law called out by name. Other clues suggest that AI disclosure requirements, such as California’s SB-53 and New York’s pending RAISE Act, will be scrutinized. Beyond that, Trump simply says he wants “to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation.” We’ll have to wait to find out what that really means. Regardless, most legal commentators expect the administration to face an uphill battle in court.

Alasdair Phillips-Robins: Essentially, the EO gives agencies a list of tasks. It doesn’t really do anything by itself, other than seek to put political pressure on Congress and state governments. Many of the agency actions have deadlines in the next few months, but as Jon says, most of the real-world effects will depend on court cases that could take years to play out. The designers of the EO seem to believe that existing federal law gives the administration a lot of tools to shape state AI regulation, but it’ll likely be a while before we find out whether the Supreme Court agrees.

Scott Singer: The most likely outcome of this EO will be a federal incident reporting system. That’s part of both this EO and the administration’s U.S. AI Action Plan. SB-53, California’s first-in-the-country frontier AI law, includes a novel legal mechanism of federal deference that invites preemption by a federal standard. The rest of this EO doesn’t really have teeth, making it unlikely to survive court challenges.

What motivated it?

Alasdair Phillips-Robins: The basic motivation is that the White House is worried about an explosion of state-level AI regulation over the next year.

The administration’s AI and crypto czar David Sacks has argued that lots of overlapping or conflicting rules will make it difficult for AI companies to build new models and offer new products to Americans. He’s afraid that America will lose the AI race to China if developers have to deal with a patchwork of state regulations.

Jon Bateman: The basic theory of preemption is clear: Cut red tape, thereby unleashing U.S. companies to outcompete China and reducing blue states’ influence over the AI industry.

But the practical need and supposed urgency are questionable at best. Yes, states have passed a bunch of AI laws, though far fewer than some suggest. And if you actually read these laws, you won’t find many glaring examples of overreach. Mostly, states are governing the use of AI in traditionally regulated areas (such as employment discrimination) within their own borders.

The one example of state mischief cited by the EO—Colorado’s law—has already had its effective date delayed, and amendments are ongoing. In other words, the top instance of state overregulation is ultimately a reminder of how cautiously states are treading on AI.

How might the EO affect politics around AI?

Anton Leicht: In a way, the EO puts even more of a spotlight on the legislative process. Whether it’s a political success or liability depends on whether it ultimately results in actual legislation—something durable and substantive.

You can tell two political stories here: Either the EO works to move Congress into action, or it creates further resistance among preemption opponents who might be happy to wait until after the midterm election next November. Currently, the first outcome lacks an obvious legislative vehicle, so the second is looking a bit more likely.

Alasdair Phillips-Robins: The EO may discourage some Republican state legislators from pursuing new AI regulations, either because they find its arguments persuasive or because they don’t want a political and legal fight with the Trump administration. But many Democratic state governments will probably want to position themselves against it, and even some Republicans, such as Florida Governor Ron DeSantis, have criticized the idea of federal preemption.

Scott Singer: Counterintuitively, the EO underscores that states will remain at the heart of American AI policy in 2026. This EO will be challenged in the courts, and Congress will remain bottlenecked, leaving states with a clear lane. Meanwhile, in the frontier AI space, other states such as New York may be converging on the approach outlined by California’s SB-53. In areas such as child safety, where both Republicans and Democrats have called for regulation, the EO suggests more deference to states’ role, and there’s substantial activity in California, including recent ballot initiatives, specifically focused on those issues.

Jon Bateman: The EO furthers the GOP’s movement toward becoming the “pro-AI” party. That’s a major win for the so-called Tech Right, a loose collection of Silicon Valley figures who helped Trump secure reelection but have often lost policy battles within the new administration. It’s also a risky gambit for Republicans: Public sentiment on AI is fairly hostile, and the technology’s harms (real or perceived) have growing political salience.

To be sure, the GOP has plenty of AI skeptics. Steve Bannon, one of the most influential right-wing voices, has relentlessly inveighed against AI threats such as labor disruption and rogue superintelligence. Georgia Representative Marjorie Taylor Greene fought hard against preemption in Congress, even citing the issue as one of a handful—alongside strikes on Iran and failure to release the Epstein files—that led to her career-altering break with Trump.

But the EO makes official what has been evident for months: Trump, who still firmly controls his party and defines its public image, has sided with the AI accelerationists. This pro-AI brand may come back to haunt GOP candidates in the 2026 midterms and beyond, as concerns about AI penetrate further into mainstream political conversations.

What role have tech companies played in this process, and how are they responding?

Anton Leicht: Tech companies have been surprisingly quiet so far. AI advocates frequently argue that a patchwork of state-level laws makes advancement harder for strategically important developers, but the developers themselves have been somewhat more reserved about regulation. This divide may hint at a growing gap between what plays well in the politics of preemption in Washington and what the technical policy conversation in Silicon Valley actually calls for.

Scott Singer: I don’t find the tech companies’ silence surprising. Companies care about the compliance burdens associated with policies because a lack of legal clarity can be genuinely expensive. Frontier AI developers are no different. An EO that will inevitably be challenged in the courts introduces regulatory uncertainty, greater potential legal liability over the long term, and de facto red tape.

Although the administration has railed against state AI laws and attacked SB-53 in particular, the California law was ultimately a compromise that had industry buy-in. Undoing SB-53 threatens to break a fragile peace between the strongest accelerationists and safetyists, which could leave companies worse off.

Jon Bateman: Here is a great example of the AI industry’s mixed signals about preemption: Back in March, OpenAI asked Congress for federal preemption to “provid[e] the private sector relief from the 781 and counting proposed AI-related bills already introduced this year in US states.” 

Yet just a few days ago, the company proposed its own state AI law that could itself be vulnerable to preemption. OpenAI’s California ballot initiative would regulate AI “companion chatbots” to address mental health risks, such as suicide, through improved disclosures to users and the government. Under Trump’s EO, though, the Department of Justice could sue to block OpenAI’s proposed California rule if the federal government deemed it “onerous” or unconstitutional.

Maybe this ballot initiative isn’t the kind of state law that OpenAI—or Trump’s team—intends to preempt. But the possibility of crossed wires here highlights the general lack of clarity that has bedeviled AI preemption plans.

How might this affect the U.S. AI global advantage?

Alasdair Phillips-Robins: Not very much. That’s a lukewarm take, but I think the EO is more about signaling and political pressure on Congress and the states than genuine legal impact on American AI companies. Without action from Congress, the executive branch just doesn’t have many levers to get states to do what it wants in areas like domestic AI regulation. And given the uncertainty over how the political debate plays out in Congress, it’s too soon to say whether this will do anything to speed up American AI development.

Scott Singer: I agree with Alasdair. The uncertainty this EO creates for companies could generate unnecessary costs, but American technology competitiveness has always rested on Silicon Valley’s deep capital, its ability to attract top talent, and policies carefully tailored to foster a culture of entrepreneurialism. The EO doesn’t change that—it just distracts and creates uncertainty when the core innovation formula driving American competitiveness was working.

Jon Bateman: The AI industry often paints a picture of hyperactive state legislatures busily carving America into a Byzantine patchwork of laws, seriously threatening innovation. If this were true, then immediate federal preemption would be important for U.S. competitiveness.

But it’s not true. During this same period of supposed regulatory uncertainty at the state level, U.S. AI companies have gained trillions of dollars in market value, debuted whole new categories of AI products, and retained a clear global leadership position.

The reality is that U.S. states simply aren’t doing much to hold back this industry, which will stand or fall on its own merits. So Trump’s EO won’t—and can’t—do much to clear obstacles away.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.