Micah Weinberg
California Sees Ways AI Can Support Policymaking. Here’s What It Needs to Succeed.
For AI to capture the public’s policy concerns, people need to know that the models are elevating human concerns in human words, not generating their own.
At the very moment that artificial intelligence is generating profound anxiety about everything from the future of work to the future of warfare, democratic institutions are turning that same technology toward a different purpose: giving citizens a genuine voice in how their governments act. But there is a key unresolved tension at the heart of AI and democratic governance: the technology helps policymakers metabolize massive amounts of public input, yet it can also obscure what is authentically human. California is navigating this tension with ambition and caution simultaneously.
Through the Engaged California program, the state has leveraged digital technologies and artificial intelligence to transform public engagement, with the goal of “listening at scale” and crafting public policies that respond to public concerns at a granular level. These technologies can usefully summarize the experiences and perspectives of millions of citizens and connect people to potential avenues of responsiveness in ways that were impossible before their advent.
The program has already twice demonstrated potential in more contained settings: In the aftermath of the devastating Los Angeles wildfires in early 2025, AI-assisted analysis transformed over a thousand detailed resident responses into clear, actionable insights while, crucially, preserving people’s own language and elevating shared concerns about housing, insurance, and long-term resilience. The second engagement focused on public employees, soliciting their ideas about how to make government more efficient. Again, AI helped to summarize their concerns in a way that allowed decisionmakers to link the recommendations back to specific comments by specific people in a transparent and coherent manner.
The use of AI for sensemaking, not for replacing human judgment but for making large-scale human input legible to policymakers, represents a genuinely new civic capability. Carnegie California has been a collaborative partner in developing these approaches, drawing on the expertise of its Deliberation and Response Technologies (DART) working group, a cohort of public and private sector innovators working to build a replicable model for AI-assisted engagement at the national, state, and local levels.
But California is also confronting, with equal seriousness, the ways in which AI threatens the integrity of public participation rather than enhancing it. State Senator Christopher Cabaldon has introduced Senate Bill 1159, which would specify that for purposes of the California Public Records Act and relevant open meeting statutes, the legal terms “person” and “member of the public” do not include artificial intelligence systems, autonomous agents, robots, or other nonhuman entities. The legislation is a direct response to the documented weaponization of AI to simulate mass participation in policy deliberations. In one striking recent episode, at least 20,000 AI-generated public comments flooded Southern California’s top air pollution authority and appear to have contributed to the agency scrapping a plan to phase out gas-powered appliances, a policy outcome driven not by the considered views of human constituents but by the manufactured volume of machine-generated input.
Cabaldon has framed the threat vividly: “AI slop drowns out the voices of genuine human citizens.” His bill represents one of the first explicit legislative attempts in the country to define what counts as legitimate public participation in the age of generative AI, a question that every democracy will eventually have to answer.
SB 1159 still faces practical challenges related to verification: determining, at scale, which comments are of human origin. But the normative statement the bill makes is important regardless of its enforcement mechanics. Democratic legitimacy depends on the participation of citizens, and participation that can be automated at zero marginal cost by any well-resourced interest group is participation that has effectively been abolished.
California is not the only state wrestling with these questions, though it is arguably the most systematic. In Fort Collins, Colorado, AI-enabled analysis helped the city engage with over 4,000 long-form resident responses on a highly contested land-use issue, a scale of qualitative engagement that would previously have been analytically impossible for a municipal government. Colorado has also been at the frontier of AI regulation more broadly: Governor Jared Polis signed the country’s first comprehensive AI accountability law in 2024, and while the state has since struggled with the practical complexities of implementation, delaying the law’s effective date to June 2026 while convening a new task force to rewrite portions of it, the effort reflects a genuine and serious attempt to get governance right in real time.
Across the United States, more than a dozen states have issued executive orders addressing how AI should and should not be used in state government, with California, Oregon, and Maryland among those most explicitly prioritizing civil rights protections and equity outcomes in AI deployment. In 2025 alone, 1,208 AI-related bills were introduced across all fifty states, with 145 enacted into law. States are, unmistakably, serving as laboratories of democracy on this question, a function the federal government has largely vacated.
What California is attempting with Engaged California, and what Cabaldon is attempting with SB 1159, together constitute a dual mandate that every state and every democracy will need to adopt: using AI to make public deliberation more inclusive, rigorous, and actionable, while simultaneously protecting public deliberation from AI’s capacity to distort, inflate, and manufacture the appearance of citizen voice.
These goals are complementary, not contradictory. The case for AI-assisted deliberation rests precisely on the premise that authentic human voices, properly aggregated and synthesized, can inform better policy. That case collapses entirely if the inputs themselves cannot be trusted to reflect genuine human views.
The legitimacy of Engaged California and other similar processes will depend on participants trusting that their voices, not fabricated ones, are shaping the conclusions. Cabaldon’s legislation is not a skeptic’s intervention against AI in democracy; it is a necessary precondition for the very possibility of AI-enhanced democracy.
What remains to be built is the institutional infrastructure to make this dual mandate durable. That means verification mechanisms for public comment processes, public transparency standards for AI-assisted sensemaking tools, and clear doctrines about which kinds of AI assistance enhance deliberative processes and which distort them. California, through Engaged California and the legislation moving through its capitol, is closer than any other state to articulating such a framework. The rest of the country and the rest of the world would do well to watch closely, and to build on what California gets right.
About the Author
Nonresident Scholar, Carnegie California
Dr. Micah Weinberg is a nonresident scholar at Carnegie California. His scholarship focuses on the quality of democracy and public policy in California and their global relevance.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.