
OpenAI CEO Sam Altman speaks in June in San Francisco. (Photo by Justin Sullivan/Getty Images)

Commentary
Emissary

AI’s New Test on California’s Direct Democracy

OpenAI has waded into the state’s process by releasing a ballot initiative on safety controls for AI companion chatbots.

By Ian Klaus, Scott Singer, Mark Baldassare
Published on Dec 22, 2025

State governments have been at the forefront of AI policy debates on issues such as deepfakes and protections for whistleblowers and consumers. California, home to a majority of the industry’s largest companies, has been in the spotlight of these rapidly moving discussions. Now, OpenAI, one of the largest AI companies, has waded into California’s process by releasing a ballot initiative on safety controls for AI companion chatbots. The initiative, if ultimately adopted, would require AI operators to increase trust and safety standards for users, particularly minors, and put in place reporting requirements around such efforts that would increase transparency.

Voters wouldn’t weigh in on the initiative until November 2026, and there are numerous hurdles to clear before it even makes it onto the ballot. But just as the release of ChatGPT opened a new era of AI in November 2022, the proposed initiative opens a new arena in AI policy by appealing directly to voters at a time when the executive branch is preparing to launch legal challenges against state-level AI laws.


California’s ballot initiative process means that the features and bugs of direct democracy will be brought to bear on the policy debate surrounding one of the most powerful technologies ever invented. While couched in a seemingly domestic issue—child safety—the initiative brings important questions about AI’s future, including its widespread diffusion through chatbots, to the ballot. And while there are no perfect historical parallels for AI, few if any direct democracy processes have been brought to bear on transformative general-purpose technologies like electricity or the personal computer, or on powerful technologies like nuclear weapons and energy.

These ballot initiatives have emerged amid rapidly shifting dynamics in the United States over who gets to set policy for different domains of AI. Last week, President Donald Trump signed an executive order that would initiate legal challenges to states with AI laws the administration deems overly burdensome.

But child safety rules may not be subject to the same attacks as other state AI laws. The EO explicitly suggests leaving laws in that domain to the states so long as they are lawful. Both Republicans and Democrats have expressed concerns about AI’s effects on child safety, and a bipartisan group of senators has proposed child safety rules around AI. The introduction of California’s direct democracy mechanisms to address child safety in AI could give any voting-age Californian a direct say in one of the most discussed and contentious areas of AI policy in the United States.

How AI Rules Get Made in California

California Governor Gavin Newsom has signed more than twenty AI-related bills in the last two years. Two have received the most attention: SB-1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) in 2024 and SB-53 (the Transparency in Frontier Artificial Intelligence Act) in 2025. SB-1047, introduced by California Senator Scott Wiener in February 2024, was passed by the state assembly in August 2024; Newsom vetoed it that September. SB-53 passed the legislature and was signed into law by the governor in September 2025.

The trajectories of both bills were by no means straightforward. The introduction of the first frontier AI-focused bill, SB-1047, split industry leaders and sparked debate on questions of geopolitical competition, existential safety, and economic competitiveness. Just about everyone in Silicon Valley—A16Z, OpenAI, Anthropic, Meta—had a position. SB-53, meanwhile, found its origins in an independent report produced by scholarly experts tasked with providing the state guidance on frontier AI policy. Though the results and debates differed, both bills originated in the state senate, worked their way through Sacramento, and ultimately found their way to the governor’s office.

But California’s democracy is more complicated than that. In addition to executive orders and legislation, California has a robust direct democracy process. The OpenAI ballot initiative is in fact the second to take up AI chatbots in the context of traditional trust and safety issues, joining an initiative released in October by Common Sense Media; until recently, however, it was unusual for technology issues to reach the ballot through this route. Previous initiatives around emerging technologies include Prop 22 (2020) on ride-sharing employment status and Prop 30 (2022) on electric vehicle adoption. At a moment when preemption is being discussed at the federal level, industry leaders remain focused on policy at the state level, including in the ballot initiative arena.

In this case, the presence of multiple competing initiatives certainly makes things more complicated for voters, as well as for the proponents and opponents running campaigns. If both pass, the one that receives more yes votes becomes the new law.

How California’s Ballot Initiatives Work

The AI initiative’s proponents, who submitted a request for a title and summary for a November 2026 ballot measure, are at the front end of a lengthy and complex process with a storied history of successes and failures in California. In the throes of the Progressive Era in 1911, California passed a series of constitutional amendments allowing a majority of state voters to make new state laws, change state policies that had been passed by the legislature, and remove statewide elected officials before their terms end. Since then, 2,152 initiatives have been submitted for title and summary, and 19 percent succeeded in collecting enough signatures for the ballot. Of the 401 initiatives that appeared on the ballot, only 35 percent passed with a majority vote.
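The funnel implied by those figures can be checked with a little arithmetic. A rough sketch (the counts are those cited above; the derived totals are approximations, not official statistics):

```python
# Rough funnel for California ballot initiatives since 1911,
# using the figures cited in the text.
submitted = 2_152        # initiatives submitted for title and summary
qualified = 401          # initiatives that collected enough signatures
pass_rate = 0.35         # share of balloted initiatives approved by voters

qualify_rate = qualified / submitted
approved = round(qualified * pass_rate)

print(f"Qualification rate: {qualify_rate:.0%}")                   # ~19%
print(f"Approved by voters: ~{approved}")                          # ~140
print(f"Overall success rate: {qualify_rate * pass_rate:.1%}")     # ~6.5%
```

In other words, of every hundred initiatives submitted for title and summary, only around six or seven have ever become law, which is the long-odds backdrop against which this AI measure begins its journey.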

The first hurdle for this initiative’s proponents will be collecting signatures from 546,651 registered voters within six months. Many other ballot measures are in circulation, so there will be stiff competition and a hefty price tag for achieving this goal. Assuming this minimum threshold is reached, the citizens’ initiative will qualify for the November ballot, and then comes the much heavier lift of running a campaign that wins the interest and support of a majority of voters. Currently, nineteen citizen initiatives are pending at the Attorney General’s Office, seventeen have been cleared for circulation, and four have qualified for the November 2026 ballot.

Finally, the legislature created an offramp about a decade ago: proponents may withdraw their initiative before the election, giving them an opportunity for legislative compromise.

What Californians Think About Direct Democracy

Californians have consistently told us in polling that they think it is a “good thing” that a majority of voters can make laws and change public policies. However, they are only “somewhat satisfied” with the way the process is working today, and they believe that special interests have too much influence. Voters complain that initiatives can be too complicated to understand, that they are often asked to weigh in on too many ballot measures, and that they often lack the information they need to make informed choices through the initiative process. In this context, voters often look to the list of an initiative’s supporters and opponents for trusted sources to help them decide. What do the governor, major political parties, business leaders, and labor organizations have to say about this ballot initiative? When in doubt, the default is for voters to say “no” and keep the status quo rather than change state policy.


And what of this specific issue? In the summer of 2025, we asked California voters. Californians expect AI to have dramatic effects on the economy and their communities, and many are anxious about those impacts. But when asked “How concerned are you about family members using AI for social companionship,” 46 percent of Californians said they were very or somewhat concerned, while a similar share (43 percent) reported not being concerned. There is no clear mandate in either direction, meaning a political contest will come.

Assuming the process moves forward, 23 million California voters, most of whom may have never thought carefully about AI governance but are increasingly confronted with the technology, will have the potential to make binding policy on a technology that experts have struggled to regulate effectively.

Authors

Ian Klaus
Founding Director, Carnegie California

Scott Singer
Fellow, Technology and International Affairs

Mark Baldassare
Nonresident Scholar, Carnegie California
Topics: Technology, AI, Subnational Affairs, United States

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

