Commentary

How AI Can Unlock Public Wisdom and Revitalize Democratic Governance

Human expertise and oversight are crucial for success.

by Rahmin Sarabi
Published on July 22, 2025

In February 2025, the state of California announced a new deliberative democracy program and platform. Carnegie California played a collaborative role in its development and launch, bringing in scholarly and practitioner expertise from California and around the world. The essay below captures key ideas from the experts who informed that process. To read more on this subject, please go to our California Deliberative Democracy Program essay series page.

Democracy Under Strain

Across the United States, democracy faces mounting challenges from polarization, public distrust, and increasingly complex societal problems. Traditional systems of civic participation—and the broader foundations of democratic governance—have struggled to adapt as media and electoral incentives increasingly reward outrage over understanding.

Despite these challenges, new possibilities are emerging. Artificial intelligence—specifically large language models (LLMs)—is beginning to serve as a transformative tool for public engagement and policymaking, led by innovative governments and civic institutions.

When used thoughtfully, LLMs can help unlock public wisdom, rebuild trust, and enable better decisionmaking—not by replacing human judgment, but by strengthening it. This promise doesn’t dismiss the serious concerns about AI’s impact on social cohesion, work, and democracy—which remain vital to address. Yet these emerging capabilities can enhance both institutional efficiency and, more importantly, core democratic values: inclusiveness, meaningful participation, and deliberative reasoning.

By strengthening these foundations, AI can enable the collaborative problem-solving today’s interconnected problems demand and help us renew democracy to meet the challenges of our time. This piece examines concrete applications where LLM-based AI is already enhancing democratic processes—from citizen engagement to survey and context analysis—and explores principles for scaling these innovations responsibly.

Listening, Insights, and Deliberation at Scale

The Traditional Challenge. Governments have traditionally struggled to gather meaningful input from large numbers of residents, let alone engage them in co-governance. While multiple-choice polls can capture top-of-mind opinions on predefined options, they rarely surface deeper insights or emergent ideas. In contrast, open-ended surveys, personal reflections, and group deliberation offer far richer input—but have often gone underutilized, not because they lack value but because institutions lacked the time, expertise, or tools to apply them. Research, for example, shows that open-ended responses offer windows into public attitudes, but extracting that meaning requires conceptual grounding and interpretive care.

The AI Breakthrough. With recent advances, LLMs such as those from companies like OpenAI and Anthropic can analyze thousands of detailed public responses with both depth and precision, preserving nuance that was previously lost. These tools do more than summarize. They can surface underlying values and tensions, map areas of agreement and disagreement, and reveal how people are making sense of difficult trade-offs. The result is not just a more detailed picture of what people think, but a more human understanding of why they think it—expanding what may be politically possible.
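To make the idea concrete, here is a minimal, purely illustrative sketch of grouping open-ended responses into themes while preserving residents' own words. Real systems use LLMs or embedding models; this toy version uses simple word-overlap (Jaccard) similarity, and all function names, stopwords, and sample responses are hypothetical.

```python
# Toy sketch: cluster free-text responses into themes by word overlap.
# Illustrative only -- production tools would use LLMs or embeddings.
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "and", "in", "we", "our",
             "is", "are", "for", "be", "should", "on", "must",
             "more", "top"}

def tokens(text):
    """Lowercase content words from one response."""
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def jaccard(a, b):
    """Overlap between two token sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_responses(responses, threshold=0.2):
    """Greedily group responses whose word overlap exceeds threshold.

    Each cluster keeps the verbatim responses, so every theme can be
    traced back to the original language people used.
    """
    clusters = []  # each: {"tokens": set, "members": [original text]}
    for text in responses:
        t = tokens(text)
        for c in clusters:
            if jaccard(t, c["tokens"]) >= threshold:
                c["members"].append(text)
                c["tokens"] |= t  # grow the cluster vocabulary
                break
        else:
            clusters.append({"tokens": set(t), "members": [text]})
    return clusters

def theme_label(cluster, k=3):
    """Crude label: most frequent content words across members."""
    counts = Counter(w for m in cluster["members"] for w in tokens(m))
    return ", ".join(w for w, _ in counts.most_common(k))

responses = [
    "We need more affordable housing near transit.",
    "Affordable housing should be the top priority.",
    "Protect the wetlands and local wildlife habitat.",
    "Wildlife habitat on the site must be preserved.",
]
for c in cluster_responses(responses):
    print(theme_label(c), "->", len(c["members"]), "responses")
```

The point of the sketch is the shape of the output, not the method: themes emerge from the responses themselves rather than from predefined options, and each theme retains its supporting quotes.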

At the frontier of democratic innovation, early research on AI-enhanced citizens’ assemblies suggests that blending AI analysis with human facilitation can retain the nuance and trust-building central to civic engagement while expanding deliberative capacity and transparency in ways not previously possible.

These developments address a central challenge in democracy: how to deepen the quality of public participation and deliberation while expanding its reach. This capacity to engage at larger scale with high-quality analysis represents a major shift: Institutions can now gain insightful civic intelligence that was previously out of reach—and move further toward governing with the public.

Subnational and National Examples

Cities Leading the Way. In Fort Collins, Colorado, AI-enabled analysis helped the city engage with over 4,000 long-form responses on a highly contested land-use issue. Normally, this volume might have been reduced to a few quotes and vague themes, unless backed by a large budget. Instead, the city gained a clear, multidimensional view of public perspectives that directly informed both the city council and a council-commissioned civic assembly. That assembly produced twenty-two consensus recommendations, transforming years of polarized debate into coordinated action that is shaping the future of the site.

In Bowling Green, Kentucky, local leaders used similar tools to engage nearly 8,000 participants on the question of what residents wanted to see in their community over the next twenty-five years. With the city’s population expected to double in that time, the process allowed people to reflect on the future they wanted—and be heard. Warren County Judge Executive Doug Gorman described the impact clearly: “The advancements in new technologies like AI . . . make it possible for us to incorporate ideas and understand the opinions, concerns, and interests of thousands of people.”

National Scale Applications. This kind of scale and inclusivity is not limited to local planning. In France, the national Citizens’ Convention on the End of Life used AI to help the broader public understand and engage with its findings. Through a chat interface, citizens could ask questions like “What are the convention’s recommendations for palliative care?” and receive plain-language, contextual answers. The platform offered a glimpse of how AI might open up the policy cycle itself, making complex deliberative processes legible and accessible to the public.


France’s End of Life Convention Used AI to Help the Public Engage with Recommendations and Related Laws in Plain Language


Source: “Demander, Comprendre: La Convention citoyenne sur la fin de vie,” Le Conseil Économique, Social et Environnemental and Convention Citoyenne Cese sur la fin de vie, 2023, https://panoramic.make.org/partner/cese/event/convention-citoyenne-sur-la-fin-de-vie-projet-de-loi/panoramic.

Finally, recent breakthroughs in real-time translation further point to an even more inclusive future. For decades, language barriers have prevented millions of people from fully participating in civic life. Now, AI-powered translation is making it possible for communities to deliberate in multiple languages without delay or prohibitive costs.

California’s Role

California is already emerging as a leader in applying these innovations. Through the state’s Engaged California program at the Office of Data and Innovation, where I serve as a program design partner, agencies and civic partners are beginning to use AI to make civic input more substantive and actionable.

One notable example came in the wake of devastating wildfires in Los Angeles early in 2025. AI analysis transformed over 1,000 detailed resident responses into clear, actionable insights while preserving people’s own language and elevating shared concerns about housing, insurance, and long-term resilience. Rather than amplifying the loudest voices or interest groups, the analysis reflected a fuller spectrum of lived experiences and gave policymakers a more grounded understanding of what mattered most. This, in turn, is informing both leaders’ immediate actions and the design of follow-up engagements.

Another innovative use case came from a national-scale project funded by OpenAI’s Democratic Inputs to AI initiative. Organizers brought together 1,500 demographically representative Americans using the Remesh platform and AI-assisted synthesis tools. Over three rapid rounds of deliberation, participants developed concrete guidelines for how AI systems should handle sensitive subjects like medical advice and armed conflict. What might once have taken months was completed in two weeks—demonstrating that participatory policymaking can be inclusive, thoughtful, and fast.

Together, these efforts suggest a future in which public voice is not limited to elections or symbolic comment periods, but becomes a regular, rigorous part of democratic governance.

Proceeding with Caution

Still, this moment demands caution as well as ambition. AI presents real risks that must be addressed if it’s to support—not undermine—democracy.

Data Privacy and Ownership. The flagship generative AI models are currently developed by private corporations. When civic engagement relies on these systems, it raises critical questions: Where does the data live? Who controls it? How might it be used in the future? Especially for communities historically subjected to surveillance or exclusion, trust hinges on transparency and accountability.

To address this, California is exploring public-interest alternatives. The proposed CalCompute initiative envisions open-source AI systems hosted on public servers, governed by public institutions. New York’s Empire AI consortium is taking a complementary approach by investing in academic and state-led AI research infrastructure. Though distinct in design, both efforts reflect a shared belief: Public infrastructure is essential to ensuring AI serves democratic—not corporate—ends.

Bias and Hallucination. AI models are trained on vast datasets that often mirror societal inequities. Without careful human oversight, AI systems can amplify existing biases, misinterpret public input, or in rare cases, generate false information. That’s why AI should never be used to replace human facilitators, researchers, or public servants. It must augment and empower them.

Transparency and Explainability. Citizens deserve to understand how their input is being processed and analyzed. In democratic contexts, AI analysis must be auditable and explainable, with clear documentation of how conclusions were reached. When thousands of comments are synthesized into key themes, the public should be able to trace those themes back to the underlying data and understand the analytical process.
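One way to picture the auditability requirement is a synthesis record in which every published theme carries identifiers for the comments that support it. The sketch below is a hypothetical data structure, not any agency's actual system; the names, IDs, and sample comments are all invented for illustration.

```python
# Hypothetical sketch of an auditable synthesis record: each theme
# stores the IDs of the comments behind it, so a published conclusion
# can be traced back to the underlying public input.
from dataclasses import dataclass, field

@dataclass
class Theme:
    label: str
    summary: str
    evidence_ids: list = field(default_factory=list)  # source comment IDs

def audit_trail(theme, comments):
    """Return the verbatim comments behind a theme, for public review."""
    return [comments[i] for i in theme.evidence_ids]

# Invented sample data standing in for collected public comments.
comments = {
    101: "Insurance payouts were too slow after the fire.",
    102: "We could not rebuild because insurers delayed claims.",
    103: "Evacuation routes need better signage.",
}

theme = Theme(
    label="Insurance delays",
    summary="Residents report slow insurance claims blocking rebuilding.",
    evidence_ids=[101, 102],
)

for quote in audit_trail(theme, comments):
    print("-", quote)
```

Keeping the evidence links as first-class data, rather than discarding them after synthesis, is what lets the public trace a theme back to the comments it came from.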

Genuine Influence, Not “Participation-Washing.” There’s a risk that AI-enhanced engagement becomes a sophisticated form of “participation-washing”—impressive-sounding public input that doesn’t actually influence decisions. Real democratic innovation requires not just better listening tools, but institutional commitment to acting on what’s heard. Without this commitment, even the most advanced engagement becomes an exercise in futility.

These challenges underscore a fundamental principle: Human expertise must remain central to AI-assisted democratic processes. Research on AI integration in citizens’ assemblies reinforces this principle, showing that effective facilitation requires what scholars describe as “craft”—the ability to read body language, deploy humor appropriately, navigate cultural sensitivities, and respond to group dynamics in real time. These distinctly human capacities cannot be replicated by AI systems, no matter how sophisticated.

In civic contexts, skilled human experts must remain the final source of accountability—safeguarding the democratic process and ensuring that civic input is treated with accuracy and integrity.

Toward a Human-Led, AI-Assisted Democracy

There is much we don’t know about how AI will impact society. The risks are real and numerous. But AI also presents a monumental opportunity to transform how we govern, how we learn, and how we make decisions that deliver the broadest public benefit when the full range of needs and aspirations is considered.

When deployed thoughtfully, AI is not a threat to democracy—it’s an invitation to deepen it. To shift from one-way communication to shared decisionmaking. To realize governance with the public, not just for them. And to build institutions that listen, reflect, and respond to the people they serve.

We can imagine AI as a set of superpowers for democracy: facilitators able to engage thousands of voices; researchers able to synthesize insights in days, not months; public servants able to act with greater clarity and confidence. In this future, humans and AI can enable better public choices than either could alone.

A Vision of Success: By 2030, we should consider AI-enhanced civic engagement as routine as online voter registration—trusted, accessible, and demonstrably effective. Success would mean participation rates that reflect community demographics rather than just the loudest voices, policy decisions clearly informed by comprehensive public input, and citizens who feel genuinely heard and see their contributions reflected in governance.

Realizing this vision will require serious investment in pilots, open-source tools, civic infrastructure, and the people who design and steward these systems. Most of all, it will require keeping communities, residents, and public servants at the center of the decisions that shape their lives—and holding that commitment as our North Star.

The Path Forward: In the next two to three years, three priorities should guide this work: First, expanding pilot programs to move beyond one-off experiments toward systematic integration across multiple government levels and policy domains. Second, building public infrastructure—investing in open-source tools, secure data systems, and training programs for public servants. Third, and concurrently, establishing evaluation standards, metrics, and methodologies to assess when AI-enhanced engagement genuinely improves democratic outcomes.

This opportunity won’t wait. Other nations and nonstate actors are already experimenting with AI in governance—some with democratic values at the center, others without. The next five years will determine whether AI becomes a tool for deeper democracy or deeper division. The choice, and the window to make it, is ours.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.