
Can Democracy Survive the Disruptive Power of AI?

AI models enable malicious actors to manipulate information and disrupt electoral processes, threatening democracies. Tackling these challenges requires a comprehensive approach that combines technical solutions and societal efforts.

Published on December 18, 2024

Since the recent popularization of powerful generative artificial intelligence (AI) systems, fears have grown that they will disrupt and destabilize democracies in unforeseen ways. These emerging technologies, made famous by large language models (LLMs) like OpenAI’s ChatGPT chatbot, are algorithms that produce new content based on the data they have been trained on. They can write text and music, craft realistic images and videos, generate synthetic voices, and manipulate vast amounts of information. While generative AI models hold tremendous potential for innovation and creativity, they also open the door to forms of misuse that endanger democratic societies. These technologies threaten democracies by enabling malicious actors—from political opponents to foreign adversaries—to manipulate public perceptions, disrupt electoral processes, and amplify misinformation.

With increased use of AI-generated content and a cohort of countries moving toward digital authoritarianism by embracing AI-supercharged mass surveillance, the stakes could not be higher. Beyond generally introducing more complexity into the information environment and allowing the faster creation of higher-quality content by more people, generative AI models have the potential to distort democratic discourse by undermining the integrity of elections and further enabling digital authoritarianism. But this is just one facet of a larger issue: the collision between rapidly advancing AI technologies and the erosion of democratic safeguards. The intersection of digital authoritarianism and AI systems—from simpler AI technologies to the latest state-of-the-art LLMs—empowers autocratic governments both domestically and in their foreign interference tactics, presenting a key challenge for twenty-first-century democracy.

The core of the problem lies in the speed and scale at which AI tools, once deployed or weaponized on social media platforms, can generate misleading content. In doing so, these tools outpace both governmental oversight and society’s ability to manage the consequences. The intersection of generative AI models and foreign interference presents a growing threat to global stability and democratic cohesion. As these systems generate highly persuasive text, they enable states and nonstate actors to propagate disinformation and malicious narratives at scale. Amid the evolution of AI technologies, a comprehensive approach that combines technical solutions and societal efforts is crucial to combat such emerging threats effectively.

Democracies Under Siege From AI

AI advancements are occurring at such a scale and speed that it is almost impossible for any government, company, or individual to predict future trajectories or how they will reshape societies. Since 2022, more than 15 billion images have been created using text-to-image algorithms, and with the launch of OpenAI’s DALL-E 2—an AI system that can create realistic images and art from a description in natural language—people are generating an average of 34 million images per day. Generative AI models played a role in the 2024 U.S. presidential race, with AI-generated fake images and deepfakes flooding social media platforms. Deepfakes are synthetic media in which AI techniques replace a person in an existing image or video with someone else’s likeness or generate a brand-new image of a person.

Deceptive pictures, videos, and audio are rapidly proliferating because of the rise and misuse of generative AI tools and fake news websites. AI-generated synthetic content has permeated the U.S. political sphere, where it is often shared by high-profile figures like U.S. President-Elect Donald Trump and his allies, who repeatedly promote AI-created memes and deepfakes. A case in point: Trump reposted an AI-generated fake image of singer Taylor Swift endorsing his election campaign, which she never did. Democrats also posted AI-made fake photos of Trump being arrested. Such fakes could result in wide-reaching and immensely damaging instances of misinformation and disinformation.

Meanwhile, deepfake audio clips of British Prime Minister Keir Starmer and Slovakia’s opposition head, Michal Šimečka, ignited social media controversies when they spread rapidly before fact-checkers exposed them as fabrications. The destructive power of deepfakes also hit home in Türkiye when a presidential candidate withdrew from the May 2023 election after explicit AI-generated videos went viral. In Argentina’s October 2023 presidential election, both leading candidates deployed deepfakes by creating campaign posters and materials that mocked their opponents—tactics that escalated into full-blown AI memetic warfare to sway voters.

Thus, the impact of generative AI models is likely to depend on how they are used by political opponents and featured on social media—that is, how they are introduced into an already complex information environment, where many variables inform the way AI-generated content will be received. While some dismiss this content as another form of political satire, the relentless barrage of AI-generated misinformation and disinformation will likely increase voter confusion, create false perceptions of candidates, and fuel cynicism toward the entire electoral process. Female politicians, especially, face a far greater threat from deepfakes than their male counterparts: gendered disinformation, sexualized targeting, and societal biases amplify reputational harm, online harassment, and the emotional toll, eroding public trust in women’s leadership.

These highly persuasive, live replicas of a person’s appearance, voice, and style are becoming less expensive to produce, and they will empower domestic and foreign malicious actors in novel ways. For instance, malign actors can easily leverage chatbots to spread falsehoods across the internet at record speed, using the power of machine translation to propagate messages regardless of the original language. Such tools are being harnessed to disseminate disinformation, create memes, radicalize individuals, and promote extremist agendas, all while confirming existing biases and fostering a sense of community among like-minded users. In this AI memetic warfare, anonymous participants share explicit instructions on how to use AI image generators like DALL-E 3 and Midjourney to create extremist memes, including guidelines for crafting propaganda.

Besides the impact of such memes on the information environment, the ability to produce convincing content amplifies the risk of psychological manipulation, creating a cognitive challenge for democratic societies. As AI-generated content floods online platforms, social media algorithms enable its viral spread, often without adequate scrutiny or fact-checking. The psychological impact is profound as individuals struggle to distinguish between authentic and manipulated information. If even the most sophisticated media outlets, government agencies, and tech companies find it difficult to separate real and AI-generated content, how can local communities and the broader public be expected to do so?

Such patterns pose profound challenges to democracy. Importantly, generative AI is the first technology that can understand language and produce content autonomously—areas once exclusive to humans. These applications can be combined to automate the entire chain of synthetic content production, distribution, and amplification. This advancement poses emerging epistemic risks: the production of biased or misleading knowledge and the distortion of understanding and trust in information sources. As AI technologies begin to produce and shape public knowledge autonomously, there is a threat that this knowledge becomes self-referential—a recursive by-product of AI models themselves rather than a reflection of evolving human knowledge.

This growth in AI-generated content, coupled with the increasing difficulty of identifying it as machine made, has the potential to transform the public sphere via information overload and pollution. The more polluted the digital ecosystem becomes with synthetic content, the harder it will be to find trustworthy sources of information and to trust democratic processes and institutions. Moreover, because AI models are trained on past data, they reflect existing societal biases and risk perpetuating them in the content they generate. In essence, overreliance on AI-generated content risks creating an echo chamber that stifles novel ideas and undermines the diversity of thought essential for a healthy democracy.

A Surge in AI-Enabled Foreign Interference?

In the long term, this erosion of trust could make democratic systems more susceptible to external interference and less resilient against internal divisions that foes can easily exploit. This is where AI-enabled digital authoritarianism becomes even more dangerous. While authoritarian regimes refine their use of digital technologies to control populations domestically, democratic nations face the challenge of safeguarding their electoral integrity against AI-driven disinformation campaigns by foreign adversaries. Often orchestrated by states or their proxies, such campaigns could tip the scales in closely contested elections, influence voter turnout, or manipulate key swing demographics.

This kind of foreign information manipulation and interference (FIMI) has become a potent weapon against democracies and now has the potential to be amplified by generative AI. The European External Action Service defines FIMI as a “pattern of behaviour that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character [and] conducted in an intentional and coordinated manner.”

FIMI actors are quick to experiment with newly available generative AI capabilities to produce synthetic media and refine their digital tools. With generative AI–enabled advancements, digital authoritarianism is entering a new phase both domestically, by reinforcing autocracy and surveillance, and externally, by enabling foreign interference operations. Unlike traditional tools of repression, such as overt censorship, propaganda, or physical coercion, generative AI models allow for more sophisticated manipulation of information and public perceptions, both at home and abroad. For instance, China is advancing its generative AI technologies and beginning to lead in their adoption globally, a sign of the country’s progress in this competitive field.

China is not the only authoritarian regime that is employing digital technologies and AI to bolster the state’s authority. Through its Digital Silk Road initiative, China has become an exporter of digital authoritarianism and a major digital infrastructure provider to developing and authoritarian states that seek cost-effective digital advancements. As a result, instances of digital authoritarianism can be observed in Bangladesh, Colombia, Ethiopia, Guatemala, the Philippines, and Thailand, to name a few countries, suggesting that this model of authoritarian politics is spreading. Moreover, research indicates that states including Iran, Russia, and Venezuela are purposefully experimenting with and weaponizing generative AI to manipulate the information space and undermine democracy.

In the EU’s Eastern neighborhood, countries like Georgia, Moldova, Romania, and Ukraine face a deluge of hybrid threats and AI-generated disinformation campaigns aimed at destabilizing societies, disrupting electoral processes, and derailing people’s democratic aspirations. That is why hybrid threats in the Eastern neighborhood have spurred the EU to adopt a more coordinated strategy against FIMI alongside the efforts of strategic partners like the United States.

Meanwhile, the near-monopoly power of Western big tech companies that dominate the field of AI has also affected democratic governance processes. These firms’ ownership of social media platforms, command of big data analytics, and control over content moderation have given them outsize influence over public discourse. What is more, recent concerns over TikTok’s algorithmic influence on elections highlight the platform’s role in political manipulation. For instance, Romanian authorities have called for the app to be suspended amid suspicions that its algorithm amplified content that was favorable toward a far-right, pro-Kremlin presidential candidate. With AI-driven targeting and limited transparency over content sponsorship, TikTok’s ability to sway public opinion highlights the power of digital platforms to undermine democratic processes.

Moreover, governments and political actors have become increasingly dependent on data-driven corporate practices that blur the line between citizens and consumers, commodifying public discourse. With access to vast amounts of data, such actors can leverage nuanced knowledge of citizens’ economic, political, and cultural preferences to anticipate and influence their political choices. Implemented on a large scale, practices of this kind are known as demos scraping—employing AI and other automated tools to continuously collect and analyze citizens’ digital footprints, from browsing habits to social media interactions. This sophisticated profiling enables not only targeted political messaging but also the dissemination of tailored information.

By combining such practices with generative AI tools, malicious actors can craft convincing narratives that exploit individual biases, preferences, and vulnerabilities, making propaganda more effective and harder to detect. Indeed, research has shown that ChatGPT, Gemini, Grok, and other AI chatbots can replicate harmful narratives from authoritarian regimes when prompted. A study by news monitoring service NewsGuard revealed that such bots are amplifying Russian misinformation and often fail to recognize disinformation sources. This demonstrates how easily AI chatbots can be co-opted to disseminate disinformation.

The Way Forward

To address the challenges posed by generative AI to democratic processes, a multifaceted approach is crucial. In this respect, regulatory and governance tools that target deepfakes, AI-generated disinformation, and foreign interference are imperative. Relying on self-regulation by tech giants is insufficient, as history has shown in other industries, like social media. That is why governments must enact robust policies to mitigate the creation and proliferation of such synthetic content and hold corporations legally and financially accountable.

For instance, policymakers should consider the trade-offs of AI content watermarking—a technique that embeds a unique, detectable signature within AI-generated content, marking it as machine made. Visible watermarks promote transparency but may disrupt artistic intent, while digital watermarks, hidden in metadata, are subtler but easier to tamper with. Policymakers must navigate these choices, weighing transparency and usability against risks of misuse by malicious actors. The implementation of watermarking still faces significant technical challenges of accuracy and robustness, leaving developers and policymakers grappling with how to create reliable tools and establish standards and regulations. That is why, without robust legislation, corporations and individuals are unlikely to prioritize content provenance tools, watermarking techniques, and authenticity systems as solutions for verifying digital content.
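To make that trade-off concrete, the minimal Python sketch below embeds a provenance marker in an image’s metadata and then shows how a simple re-save removes it. This is an illustration only, assuming the Pillow imaging library and hypothetical file and key names; production approaches, such as C2PA provenance manifests or statistical watermarks embedded in the pixels themselves, are considerably more robust and more complex.

```python
# Illustrative sketch only: a metadata-style "watermark" added to a PNG with
# Pillow, and how easily it disappears when the file is re-saved without that
# metadata. File names and the "ai_generated" key are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a provenance note in the PNG's text metadata.
image = Image.open("generated.png")
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")
image.save("watermarked.png", pnginfo=metadata)

# Read the marker back from the saved file.
marked = Image.open("watermarked.png")
print(marked.text.get("ai_generated"))  # -> "true"

# Stripping the marker is as simple as re-saving without the metadata,
# which is why metadata-only watermarks are easy to tamper with.
Image.open("watermarked.png").save("stripped.png")
print(Image.open("stripped.png").text.get("ai_generated"))  # -> None
```

The fragility shown here is one reason policymakers and developers are also exploring watermarks woven into the content itself, which survive re-saving but raise their own accuracy and robustness challenges.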

The global, interconnected nature of online content suggests that broader, harmonized standards across jurisdictions may be necessary for effective multilateral governance. The G7 has called on companies to develop reliable mechanisms like watermarking. Meanwhile, the EU’s AI Act imposes obligations on AI providers and deployers to ensure the transparency, detection, and tracing of AI-generated material. Similarly, California’s Digital Content Provenance Standards bill, which is supported by industry leaders like Adobe, Microsoft, and OpenAI, proposes to mandate watermarks for AI content—a move that could set a standard for digital content authenticity.

Other interventions, such as legislation that targets election-specific deepfakes, technological solutions, and voter education initiatives, are also imperative. Addressing the challenges posed by AI-generated content will require coordination across a wide range of stakeholders, including governments, AI companies, social media platforms, and users. Tech companies also have a central role in developing authenticity and provenance tools to detect and trace the origins of AI-generated content. Microsoft’s Content Integrity tools, already available to political campaigns, help organizations verify the authenticity of their content and combat the risks of disinformation.

However, enforcement remains a significant hurdle when deepfakes are difficult to detect and trace to their source. Using AI to fight AI offers some promise, but many detection tools are not publicly accessible to avoid further empowering domestic extremists and foreign adversaries. Public awareness and understanding of content provenance systems remain limited, making widespread adoption difficult. Turning ordinary people into digital Sherlocks by burdening them with the task of identifying deepfakes is especially problematic. Even trained experts can struggle with detection, and overreliance on AI to spot deepfakes can lead to misplaced confidence in technical solutions.
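As a rough illustration of what “using AI to fight AI” looks like in practice, the sketch below runs an image through a machine-learning classifier and flags it for human review if it scores as likely synthetic. The model identifier is a hypothetical placeholder, since capable detectors are frequently withheld from the public, and any real detector’s output is a probability, not a verdict.

```python
# Minimal sketch of an AI-based deepfake screen. The model ID below is a
# placeholder, not a real checkpoint; treat scores as hints for human review.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/synthetic-image-detector",  # hypothetical detector model
)

results = detector("suspect_photo.jpg")  # e.g., [{"label": "synthetic", "score": 0.87}, ...]
top = max(results, key=lambda r: r["score"])

if top["label"] == "synthetic" and top["score"] > 0.8:
    print("Likely AI-generated; flag for human review.")
else:
    print("No strong signal; this is not proof of authenticity.")
```

Even with such tooling in hand, the thresholds and labels are judgment calls, which underscores why detection alone cannot carry the enforcement burden.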

The race between deepfake generation and detection remains fiercely competitive, with advancements in the former often outpacing the latter. Online platforms play a crucial role in spreading deepfakes, amplifying their reach and severity. Such platforms could mitigate deepfake risks by deploying detection software, enhancing transparency in detection and labeling, and slowing the circulation of suspect content.

Yet, reliance on detection technology alone may be insufficient without regulatory oversight and public digital literacy initiatives. Enhancing AI and information literacy is vital, and widespread education in digital skills is urgently needed. For instance, Google’s prebunking campaign—an initiative that Google describes as preventative debunking—aims to counter online misinformation by educating voters on manipulation techniques. The campaign includes short, neutral videos that illustrate tactics like decontextualization and scapegoating to help audiences build mental defenses against misleading content. This preventative approach, pioneered by Google’s Jigsaw unit, builds on Cold War–era inoculation theory to foster resilience against propaganda through prior exposure.

In terms of governmental initiatives, Finland is among the EU’s leaders in spearheading digital and AI skills programs and provides a notable example of how to prioritize education for a human-centric society in the age of AI. National AI literacy programs like Finland’s could bolster technical know-how and foster critical thinking about AI, enhancing resilience against misinformation and disinformation.

Without such public and private efforts that combine governance, technical, and educational interventions, societies will remain vulnerable to domestic extremists and foreign adversaries. A comprehensive, whole-of-society, and interdisciplinary strategy is thus essential to outpace malicious actors, safeguard the integrity of democratic elections, foster digital competence, and reinforce national security.

Carnegie Europe is grateful to the Patrick J. McGovern Foundation for its support of this work.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.