AI’s rapid advances are intersecting with a deepening global democratic crisis. This year, global democratic conditions declined to 1985 levels, with autocracies outnumbering democracies worldwide. Meanwhile, threats are building from deepfakes, AI-enabled election interference, and related social and geopolitical tensions.
But AI also brings potential innovations to shore up democracy, through programs and interventions that seek to strengthen processes and governance. This paints a complex landscape for advocates working to maximize benefits from the technology’s interactions with democratic institutions while minimizing harms.
One critical barrier to realizing AI’s promise and mitigating its harms is largely absent from headlines: the gender trust gap. The 2025 Carnegie California AI Survey corroborates global findings that women trust AI less than men and are less enthusiastic about its adoption. This finding, alongside a forthcoming Carnegie mapping of the intersections of AI and democracy, reveals how closing the gendered AI trust and adoption gap could pay dividends for democracy. Interventions aiming to leverage the technology to improve representative governance and democratic institutions will require social trust and buy-in to maximize their potential. Without deliberate strategies to identify and address gender gaps in trust and use of AI, its potential to support democracy may remain under-realized.
Mind the AI and Gender Gap
The Carnegie California survey, fielded at a time of rapid AI advances and emerging regulatory agendas, revealed relatively minimal partisan and geographic divides on questions of AI and its impact on workforces, the economy, and government. But across those same wide-ranging questions, women's preferences diverged significantly from men's.
The survey found that 36 percent of men think AI will make the Californian economy better, compared with only 18 percent of women. In nearly all survey questions about the economy and work, more women than men reported that they "don't know" their opinion, revealing hesitancy or potential knowledge gaps along gender lines.
Meanwhile, women we surveyed also report using AI less. More men (26 percent) reported using digital tools to access information about health services than women (21 percent). And although 41 percent of men we surveyed think AI will support them as informed voters and citizens, only 25 percent of women feel the same way.
Divisions also exist over government use of AI. A 2024 Ernst & Young survey found that 64 percent of federal and 51 percent of state and local U.S. government employees report using AI applications daily or several times a week, noting use for tasks such as border patrol, drone manufacturing, and biometric data collection. But our survey revealed concern about government use that spans the political spectrum, with 67 percent of California Democrats and 62 percent of Republicans opposing federal government use of AI in decisions affecting them. Gender differences are more pronounced: Men were roughly twice as likely as women to support government use of AI for personal or community decisions (16 percent versus 8 percent) and for safety and emergency services (23 percent versus 15 percent).
More Californians (46 percent) support a new California policy to advance AI literacy in schools than oppose it (25 percent). But within this, more men (53 percent) support mandated AI education than women (38 percent). Support for the policy is slightly higher among Democrats (53 percent) than Republicans (48 percent), though differences here and among subregions of the state are small. And women are less interested than men in being offered AI training and courses at work in the future: 24 percent of women are not too interested and 22 percent not at all interested, compared with 12 percent and 19 percent of men, respectively.
Carnegie’s AI Survey Findings in Context
What is driving lower adoption and higher anxiety among women remains a question requiring further research. Among the concerns women may hold: biased data used to train AI can reproduce or worsen gender discrimination in hiring, healthcare, and policing. Generative AI can also be weaponized to create deepfake sexualized images and videos, which disproportionately target women and girls. And women are overrepresented in jobs more likely to be automated by AI.
Meanwhile, despite these specific threats, women are not at the decisionmaking table shaping the development and regulation of AI. Worldwide, studies suggest that women hold only 8 percent of CEO positions in the top 100 technology companies. They comprise less than a third of the AI workforce and only 18 percent of AI researchers. Women are also underrepresented in AI training programs, representing only 28 percent of global enrollments.
Carnegie California’s findings validate existing research while presenting new insights into the specific risks for democracy. Past global studies have shown that women adopt generative AI technologies at a rate 25 percent lower than men. As of April 2025, just 38 percent of Claude users were women. And a 2025 Pew Research U.S. study found that women are twice as likely as men to be concerned, rather than excited, about the use of AI in daily life.
But there is some indication that AI use is increasing among younger generations of women. Research from Deloitte in 2024 found that women’s adoption of generative AI tripled over the prior year, outpacing men’s 2.2-fold growth. Deloitte predicted that use among women in the United States would equal or exceed that of men by the end of 2025. While updated results are not yet available, our California survey suggests adoption gaps may be persisting longer than hoped.
What the Gender and AI Gap Means for Democracy
Building on Carnegie California’s research mapping interventions at the nexus of AI and democracy, we identified a diverse and active field of AI-driven interventions where gender trust gaps can shape the potential for meaningful democratic gains.
AI is increasingly shaping democracy in some fundamental and some unexpected ways. While some identify past concerns over the influence of deepfakes on elections as “overblown,” threats from synthetic content and AI-driven influence campaigns remain serious. Visible influence efforts have been documented in elections in Romania, Nigeria, South Africa, and elsewhere.
Our mapping finds that AI affects democracy not only through elections, but also through areas that influence democratic institutions and outcomes, including economic factors. As AI reshapes labor markets, women’s participation—which is linked to stronger democratic outcomes—could be hindered by the AI adoption gap, limiting its potential to boost democracy.
Meanwhile, use of AI tools by government and citizens can help strengthen the feedback loop between voters and candidates through “broad listening” and “demos scraping” tools and methods aimed at gathering and analyzing citizen preferences and enabling dialogue and improvement. They can improve voter outreach via registration systems, chatbots, and translation services, and can enhance polling and forecasting in places with limited infrastructure. Civil society and watchdog groups are also using AI for monitoring, fact-checking, detecting deepfakes and digital attacks, and facilitating prebunking efforts that proactively combat misinformation before it spreads.
AI is increasingly being applied to public policy and service delivery, with early initiatives showing promise in improving efficiency, responsiveness, and citizen engagement. California’s state government is utilizing generative AI in new pilot efforts to reduce highway congestion, improve roadway safety, and support call centers. The state has partnered with chipmaker Nvidia in workforce training programs. Tools such as Singapore’s AI-facilitated citizen support Ask Jamie chatbots demonstrate AI’s role in enhancing public services. Global AI for Good efforts, supported by major tech firms, are funding health, agriculture, and disaster-response applications in developing regions, though critics warn of risks to local autonomy and exploitation.
AI is also being integrated into deliberative democracy efforts aiming to expand citizen consultation, improve policy responsiveness, and help overcome polarization and gridlock. Platforms such as Pol.is and Remesh, alongside newer initiatives such as Engaged California and the French Citizens’ Assemblies, are experimenting with AI for translation, moderation, and sensemaking, using machine learning and related tools to extract insights from large volumes of qualitative data and enable more direct citizen participation and engagement with government.
Plugging the Gender and AI Gap: An Imperative for Democracy Agendas
More research is needed to understand the drivers of gendered trust and adoption gaps and how they intersect with AI’s emerging influence on democracy. Without initiatives to better identify the concerns driving women’s mistrust, efforts to bring technology solutions to support democracy are unlikely to succeed. Solutions must be holistic, requiring deeper research and consensus building on interventions that account for wider social trust and adoption gaps.
Efforts to criminalize and mandate the removal of AI-generated sexualized deepfakes, such as the Take It Down Act in the United States and similar mandates in a new EU directive, provide one important area for legislative action. Education can promote safe use and narrow gender usage gaps, but deeper work is needed to address the root causes of differences in trust and adoption. School-based and professional AI education may help to develop more skilled pipelines to enable the growth of AI adoption. Countries including the United States, Estonia, China, Saudi Arabia, and the United Arab Emirates have recently developed national policies and programs to expand AI education. California recently passed a new mandate to expand AI literacy across subjects. AI-powered chatbots developed in Central America aim to support the region’s victims of gender-based violence. Some AI standards such as UNESCO’s Ethics Recommendations include mandates for gendered AI assessments to detect bias and discrimination, and concentrated efforts in civic and policy spaces can help protect women, including human rights defenders, from sexist attacks.
Our survey makes the stakes clear: Democracy cannot afford to ignore gendered AI gaps. Addressing these and other social trust disparities as a democracy issue is essential to harness AI’s benefits, mitigate its risks, and ensure inclusive democratic resilience.
By Ian Klaus, Mark Baldassare, Rachel George, Scott Kohler, Marissa Jordan, and Abigail Manalese