State governments have been at the forefront of AI policy debates on issues such as deepfakes and protections for whistleblowers and consumers. California, home to a majority of the industry’s largest companies, has been in the spotlight of these rapidly moving discussions. Now, OpenAI, one of the largest AI companies, has waded into California’s process by releasing a ballot initiative on safety controls for AI companion chatbots. The initiative, if ultimately adopted, would require AI operators to increase trust and safety standards for users, particularly minors, and put in place reporting requirements around such efforts that would increase transparency.
Voters wouldn’t weigh in on the initiative until November 2026, and there are numerous hurdles to clear before it even makes it onto the ballot. But just as the release of ChatGPT ushered in a new era of AI in November 2022, the proposed initiative opens a new arena in AI policy by appealing directly to voters at a time when the executive branch is preparing to launch legal challenges against state-level AI initiatives.
California’s ballot initiative process means that the features and bugs of direct democracy will be brought to bear on the policy debate surrounding one of the most powerful technologies ever invented. While couched within a seemingly domestic issue, child safety, the initiative brings important questions about AI’s future, including its widespread diffusion through chatbots, to the ballot. And while there are no perfect historical parallels for AI, few if any direct democracy processes have been applied to transformative general-purpose technologies like electricity or the personal computer, or to powerful technologies like nuclear weapons and energy.
These ballot initiatives have emerged against a backdrop of rapidly shifting dynamics within the United States over who can set rules for different domains of AI policy. Last week, President Donald Trump signed an executive order directing legal challenges against states with overly burdensome AI laws.
But child safety rules may not be subject to the same attacks as other state AI laws. The EO explicitly suggests leaving laws in that domain to the states so long as they are lawful. Both Republicans and Democrats have expressed concerns about AI’s effects on child safety, and a bipartisan group of senators has proposed child safety rules around AI. The introduction of California’s direct democracy mechanisms to address child safety in AI could mean that any voting-age Californian has direct input into one of the most discussed and contentious areas of AI policy in the United States.
How AI Rules Get Made in California
So far, California Governor Gavin Newsom has signed more than twenty AI-related bills over the last two years. SB-1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) in 2024 and SB-53 (Transparency in Frontier Artificial Intelligence Act) in 2025 have received the most attention. SB-1047, introduced by California Senator Scott Wiener in February 2024, was passed by the State Assembly in August 2024. Newsom ultimately vetoed it in September 2024. SB-53 passed the legislature and was signed into law by the governor in September 2025.
The trajectories of both bills were by no means straightforward. The introduction of the first frontier AI-focused bill, SB-1047, split industry leaders and sparked debate on questions of geopolitical competition, existential safety, and economic competitiveness. Just about everyone in Silicon Valley—A16Z, OpenAI, Anthropic, Meta—had a position. SB-53, meanwhile, found its origins in an independent report produced by scholarly experts tasked with providing the state guidance on frontier AI policy. Though the results and debates differed, both bills originated in the legislature, worked their way through Sacramento, and ultimately found their way to the governor’s office.
But California’s democracy is more complicated than that. In addition to legislation and the governor’s own executive orders, California has a robust direct democracy process. The OpenAI ballot initiative is in fact the second to take up the question of AI chatbots in the context of traditional trust and safety issues, joining an initiative released in October by Common Sense Media; until recently, however, it was unusual for technology issues to find their way to the ballot through this route. Previous initiatives around emerging technologies include Prop 22 (2020) on ride-sharing employment status and Prop 30 (2022) on electric vehicle adoption. At a moment when preemption is being discussed at the federal level, industry leaders remain focused on policy at the state level, including in the ballot initiative arena.
In this case, the presence of multiple initiatives on the ballot certainly makes things more complicated for voters as well as for the proponents and opponents running campaigns. If both measures pass, the one that receives more “yes” votes becomes the new law.
How California’s Ballot Initiatives Work
The AI initiative’s proponents who submitted a request for a title and summary for a November 2026 ballot measure are at the front end of a lengthy and complex process with a storied history of successes and failures in California. In the throes of the Progressive Era in 1911, California passed a series of constitutional amendments allowing a majority of state voters to make new state laws, change state policies that had been passed by the legislature, and remove statewide elected officials before their terms end. Since the initiative process began, 2,152 initiatives have been submitted for title and summary, and 19 percent have succeeded in collecting enough signatures for the ballot. Of the 401 initiatives that appeared on the ballot, only 35 percent passed with a majority vote.
The first hurdle for this initiative’s proponents will be collecting signatures from 546,651 registered voters within six months. There are many other ballot measures in circulation, so there will be stiff competition and a hefty price tag for achieving this goal. Assuming this minimum threshold is reached, the citizens’ initiative will qualify for the November ballot, and then comes the much heavier lift of running a successful campaign that gains the interest and support of a majority of voters. Currently, nineteen citizen initiatives are pending at the Attorney General’s Office, seventeen have been cleared for circulation, and four have qualified for the November 2026 ballot.
Last but not least, about a decade ago the legislature created an offramp allowing proponents to withdraw their initiative before the election, giving them an opportunity for legislative compromise.
What Californians Think About Direct Democracy
Californians have consistently told us in polling that they think it is a “good thing” that a majority of voters can make laws and change public policies. However, they are only “somewhat satisfied” with the way the process is working today, and believe that special interests have too much influence. Voters complain that initiatives can be too complicated to understand, that they are often asked to weigh in on too many ballot measures, and that they often lack the information they need to make informed choices through the initiative process. In this context, voters often look for trusted sources in the list of supporters and opponents of an initiative to help them decide what to do. What do the governor, major political parties, business leaders, and labor organizations have to say about this ballot initiative? When in doubt, the default is for voters to say “no” and keep the status quo rather than make a change in state policy.
And what of this specific issue? Well, in the summer of 2025, we actually asked California voters. Californians expect AI to have dramatic effects on the economy and their communities, and many are anxious about those impacts. When asked “How concerned are you about family members using AI for social companionship,” 46 percent of Californians say they are very or somewhat concerned, while a similar share (43 percent) report not being concerned. There is no clear mandate in either direction, meaning a political contest is coming.
Assuming the process moves forward, 23 million California voters, most of whom may have never thought carefully about AI governance but are increasingly confronted with the technology, will have the potential to make binding policy on a technology that experts have struggled to regulate effectively.