Digital Democracy in a Divided Global Landscape

A global shift is taking place. Leaders recognize that tech innovation equals power, and they are marshaling their resources accordingly. Countries are working to create technological advantages for themselves at the expense of digital cooperation across borders.

Published on May 28, 2025

Introduction

In February 2025, global leaders and tech moguls gathered in Paris for the Artificial Intelligence (AI) Action Summit, a confab co-hosted by French President Emmanuel Macron and Indian Prime Minister Narendra Modi meant to galvanize debate about how the world should address the growing relevance of AI technologies. Over 1,000 participants representing more than one hundred countries gathered in Paris’ Grand Palais to rub shoulders, network, and debate the finer points of model weights, inference scaling, and proprietary models. It was also an opportunity for the audience to hear from U.S. President Donald Trump’s administration about its vision for AI.

Vice President JD Vance did not mince his words. He delivered a tough message centered on American primacy: “The United States of America is the leader in AI, and our administration plans to keep it that way,” he informed the crowd.1 According to Vance, America under Trump’s leadership would use all the tools at its disposal to preserve its technological advantages and would resist efforts by other countries and jurisdictions, such as Europe, to regulate its technology. Countries would be forced to choose between using U.S.-designed technology and siding with authoritarian competitors (namely China) that weaponize AI software to “rewrite history, surveil users, and censor speech.” Leaders who were interested in making deals with Washington (and who offered concessions) would be rewarded, and those who did not play ball would face punishment. Vance’s remarks represented a starkly transactional view of international relations—one in which shared values and mutual interests are cast aside for bottom-line objectives.

These ideas deviated from eighty years of alliance building pursued by successive U.S. presidents to secure America’s interests through cooperation and collaboration. As recently as last year, President Joe Biden launched a “digital solidarity” strategy intended to bind countries together, arguing that “all who use digital technologies in a rights-respecting manner are more secure, resilient, self-determining, and prosperous.”2 But while Vance’s speech represented a sharp break in U.S. policy, his remarks aptly captured an emerging global reality: it is not just the United States that sees digital policymaking as an interest-based tool of “realpolitik negotiation.”3 And it is not just Trump who is making an explicit play to put his country’s interests first at the expense of the international system. Many other nations are pursuing similar measures, whether openly acknowledged or not.

Take China, for example. In contrast to Vance’s remarks, Chinese Vice Premier Zhang Guoqing’s address in Paris was far more conciliatory. He emphasized that Beijing wants to make sure that frontier technology is not controlled by a few corporations or a handful of countries. He outlined a vision to create “a community with a shared future for mankind,” where China positions itself as a reliable partner to help countries advance their respective priorities.4 But few people were fooled. Beijing has pumped billions of dollars into subsidizing critical industries and developing formidable tech champions, including Huawei and Alibaba. It has leveraged the Made in China 2025 program at home and the Digital Silk Road overseas to build up its technological capacities and grow its influence. When Beijing has faced resistance, its leadership has not hesitated to use coercion to get its way, whether forcing Korean conglomerate Lotte Group to exit the Chinese market after South Korea announced the deployment of America’s Terminal High Altitude Area Defense (THAAD) missile defense system in 2016 or revoking the trade licenses of two leading Canadian exporters of canola seed in response to Canada’s 2018 detention of Meng Wanzhou, Huawei’s chief financial officer, at the urging of the United States.5

These changes extend beyond China; a global shift is taking place. Countries are reluctant to work across borders and in service of shared concepts and common standards relating to digital technology. The internet is fragmenting into multiple “splinternets,” shifting from an open, globally connected web to a “collection of isolated networks controlled by governments.”6 Individual countries are erecting digital walls—enacting their own rules governing how platforms can operate, determining which online speech is permissible, and deciding which digital services and products are allowed. Digital solidarity is out. Tech sovereignty is in. Leaders recognize that tech innovation equals power, and they are marshaling their resources accordingly.

To make sense of these changing dynamics, the Carnegie Endowment for International Peace has assembled ten essays from members of our Digital Democracy Network, who span countries from Thailand and Türkiye to Nigeria, South Africa, and Uganda.

A first set of essays analyzes how local actors are navigating the new tech landscape. Lillian Nalwoga explores the challenges and upsides of Starlink satellite internet deployment in Africa, highlighting legal hurdles, security risks, and concerns about the platform’s leadership. As African nations look to Starlink as a valuable tool in closing the digital divide, Nalwoga emphasizes the need to invest in strong regulatory frameworks to safeguard digital spaces. Jonathan Corpus Ong and Dean Jackson analyze the landscape of counter-disinformation funding in local contexts. They argue that there is a “mismatch” between the priorities of funders and the strategies that activists would like to pursue, resulting in “ineffective and extractive workflows.” Ong and Jackson isolate several avenues for structural change, including developing “big tent” coalitions of activists and strategies for localizing aid projects. Janjira Sombatpoonsiri examines the role of local actors in foreign influence operations in Southeast Asia. She highlights three motivating factors that drive local participation in these operations: financial benefits, the potential to gain an edge in domestic power struggles, and the appeal of anti-Western narratives.

A second set of essays explores evolving applications of digital repression. Irene Poetranto argues that understanding government restrictions of online content requires looking beyond legal regulations to examine the technical aspects of internet controls. Through a study of Indonesia’s content blocking requirements, she demonstrates that the different tools used by internet service providers to filter online speech implicate free expression and access to information in different ways. ‘Gbenga Sesan’s article tracks the harms of internet shutdowns across the globe. He argues that disconnection from the internet creates unique difficulties for populations that traditionally rely on internet access for educational, economic, and interpersonal purposes. It is critical, Sesan emphasizes, that stakeholders “pay attention to disconnected citizens” alongside broader unconnected populations. Steven Feldstein and McKenzie Carrier analyze the “AI-first” strategy of the U.S. Department of Government Efficiency (DOGE). They draw comparisons between Elon Musk’s remaking of Twitter and DOGE’s ongoing disruption of the U.S. federal bureaucracy. DOGE’s agenda, they caution, sheds light on how the deployment of AI tools and automated technologies can “destroy institutions, wipe out accountability, and enable corruption to flourish.”

A third set focuses on national strategies and digital sovereignty debates. Arindrajit Basu cautions against the Trump administration’s shift away from the principle of “digital solidarity” in its foreign policy. He argues that if a key goal for the United States is to counter China’s influence among developing countries, it would be sensible for the Trump administration to “pursue initiatives that resonate internationally while also advancing America’s core interests.” Iginio Gagliardone’s examination of Kenyan gig workers and South Africa’s data sovereignty debate sheds light on “pathways for resistance, negotiation, and adaptation in the pursuit of AI sovereignty.” He argues in favor of “networked sovereignty”—creating cross-border collaborations and governance structures among African nations to strengthen the continent’s ecosystem and trajectory.

A fourth set explores pressing tech policy and regulatory questions. Luca Belli’s article examines the intersection of cybersecurity and AI. He argues that AI has transformed the cybersecurity landscape, increasing the frequency, impact, and sophistication of cyber attacks. Belli uses Brazil as a case study to explain how shortcomings in AI and cybersecurity regulations leave nations vulnerable to cyber attacks. In response, he outlines how “sound management of information and infrastructure, good stakeholder coordination, and solid capacity-building” can strengthen nations’ cyber resilience. Akin Unver describes the development of the Foreign Information Manipulation and Interference (FIMI) framework, which has become the dominant method in Canada, the European Union, and the United States to analyze trends in the information space. FIMI, he argues, improved on prior methods for countering foreign influence operations by systematizing “early detection, data collection, and countermeasures architecture.” However, he highlights several obstacles to further developing FIMI, such as the mismatched threat landscape among countries, access restrictions enacted by tech platforms, and architectural differences across platforms that inhibit responses.

These viewpoints illuminate emerging questions, new debates, and unresolved dilemmas in the tech domain. They highlight the challenges new technologies pose to governance, politics, and society. And they are meant to help policymakers connect local and regional insights with international discourse.

Acknowledgements

The Carnegie Endowment for International Peace thanks the Charles Stewart Mott Foundation for the support that has made the establishment of the Digital Democracy Network possible. The authors alone are responsible for the views expressed.

Notes

“Glocalizing” Digital Propaganda: Why Domestic Influence Actors in Southeast Asia Embed Geopolitical Narratives in Their Campaigns

As great power competition intensifies, information warfare has become a key component of geopolitical strategy.1 Numerous policy papers, academic studies, and expert interviews highlight the dangers posed by malign foreign influence operations (FIOs) conducted by “threat actors” who oppose the Western-led liberal international order, such as China, Iran, and Russia.2 While FIOs can involve a diverse set of actors, including democratic governments, this essay focuses on large-scale, covert influence efforts by foreign authoritarian states to sway public opinion, strategically disseminate disinformation, and manipulate behaviors in targeted populations.3

Much of what is known about FIOs comes from their role in high-profile events like the 2016 and 2020 U.S. presidential elections, the Brexit referendum, and various elections in European Union countries, where right-wing parties have gained momentum in recent years.4 For example, in early September 2024, ahead of the November U.S. elections, the Department of Justice charged two employees working for Russian state media network RT with paying an American company to produce and spread politically divisive videos, sowing “discord and chaos in the United States.”5 Meanwhile, the China-linked influence operation known as “Spamouflage” employed inauthentic online personas to impersonate American voters to cast doubt on the legitimacy of American democracy.6

But FIO campaigns are also spreading hyper-partisan narratives across Africa, the Asia-Pacific, and Latin America, where geopolitical influence is also fiercely contested.7 Specifically in Southeast Asia, Beijing-backed influence actors are reportedly active in countries involved in disputes over the South China Sea, particularly the Philippines, in part aiming to challenge U.S. influence in the Pacific.8 These actors have also leveraged historical ties between China and governments in Cambodia, Indonesia, Thailand, and Vietnam to shape public perceptions on multiple issues, including hailing the effectiveness of China’s COVID-19 management and its vaccines and supporting Russia’s war against Ukraine.9

A common assumption in the existing analyses of FIOs is that foreign states impose malign influence campaigns on passive local populations. This view, however, oversimplifies a complex reality. Local actors actively shape the characteristics and impact of FIOs. This author’s ongoing research on conflict-driven online propaganda in Southeast Asia highlights three reasons why local influence actors and netizens exploit geopolitical narratives for their own ends.

Motivated by Financial Incentives

First, in countries such as the Philippines, where the industry of online influence has flourished, domestic influencers and trolls for hire can be financially motivated to promote pro-China narratives. It is a lucrative industry: online trolls in the Philippines reportedly earn around $515 to $1,715 a month.10 One report indicated that Beijing-funded outlets have recruited local journalists and trolls who were financially struggling.11 Business elites who have a “dependency relationship” with Beijing have reportedly funded pro-China campaigns as well.12

In some cases, pro-Beijing platforms and influencers who amplify narratives aligning with China’s geopolitical interests in the region may be compensated through micro-targeted ads rather than direct funding from China. After the Philippine government successfully challenged China’s territorial claims in the South China Sea through the Permanent Court of Arbitration (PCA) in 2016, China rejected the ruling.13 By 2018, pro-China fan pages on Facebook in the Philippines were pushing narratives in support of Beijing’s refusal of the PCA’s decision. These pages represent a network of China-backed Filipino actors: pseudo think tanks (such as the Institute for Integrated Development Studies, or IIDS), social media personalities, media outlets (such as the Manila Times and Sonshine Media Network International), and associations (such as the Philippines-China Friendship Club).14 Together, they form a pro-China ecosystem, where geopolitical articles and opinions are published in aligned outlets and amplified across different networks.15 Each post that goes “viral” generates between $20 and $70, depending on the number of views.16 Not all viewers support these posts' pro-China stances, but the sensational titles, serving as “clickbait,” can garner engagement even from netizens critical of China.

Seeking an Edge in Domestic Power Struggles

Second, exemplifying the concept of “glocalization,” which describes the convergence of globalization and local politics, domestic influence actors often leverage geopolitical narratives to gain an edge in domestic power struggles, particularly during policy shifts, elections, and mass mobilization efforts.17 This has been clear in the Philippines, where former president Rodrigo Duterte and his daughter, Philippine Vice President Sara Duterte, have benefited from sophisticated influence campaigns incorporating geopolitical narratives.18 When Duterte was president, influence actors endorsed his domestic and foreign policies, including his controversial war on drugs and his pivot toward China—a stark departure from the Philippines’ traditional alliance with the United States.19 To rally domestic support for this dramatic policy shift, pro-Duterte accounts appeared to join forces with pro-China actors to frame Duterte’s pivot to China as a move promoting regional peace and independence from the United States as a former colonial power, while simultaneously attacking critics.20 A notable example was Sass Sasot, a prominent pro-Duterte blogger who disseminated false claims that challenged the PCA’s 2016 decision and aligned with China’s arguments.21

Geopolitical narratives also played a role in the Philippines’ 2022 presidential election and the subsequent power struggle among political elites. Pro-Duterte influencers weaponized pro-Russia, anti-United States, and pro-China disinformation to target opposition candidate Leni Robredo.22 They framed her as a weak leader, in contrast to Duterte’s strongman image, and as a puppet of Western powers whose pro-Ukraine stance could provoke Chinese aggression against the Philippines. As Sara Duterte was Ferdinand “Bongbong” Marcos Jr.’s running mate, his 2022 campaign aligned with the Dutertes’ foreign policy stance, reinforcing skepticism toward the United States. For instance, in March 2022, an old video clip of Marcos Sr., the late dictator, resurfaced. In the clip, he expressed frustration over the mutual defense treaty with the United States, arguing that in a crisis American assistance to the Philippines would be delayed by the need for congressional approval, lamenting, “That means delay, while we are dying there.”23 However, after the election, tensions between Marcos Jr. and the Dutertes emerged, and influence actors who supported each leader began trolling one another online. In response to Marcos Jr.’s reaffirmation of Philippine ties with the United States, a domestically produced deepfake clip surfaced in July 2024, portraying Marcos’s foreign policy as war-mongering.24 (Marcos Jr. dismissed the video and countered it by launching official “anti-fake news” initiatives.)25

In Thailand, since the 2014 military coup, the political establishment has increasingly endorsed anti-United States, pro-Russia, and pro-China attitudes and has invoked the geopolitical contest among the three powers in coordinated campaigns to discredit the opposition party and suppress dissent. Initially, coordinated fan pages framed Western criticism of the coup as foreign interference and a plot to undermine the monarchy.26 This narrative gained traction during the youth-led protests in 2020 and 2021, with pro-establishment accounts and mainstream media alike accusing protesters of being backed by the West and calling for them to be arrested as traitors for selling out their country.27 This rhetoric diverted attention from the domestic grievances driving the protests and stoked nationalism to justify crackdowns on activists.28 During the 2023 election, Thai pro-establishment influencers and outlets employed the same rhetoric, accusing the opposition party, Move Forward, of receiving funding from the CIA as part of a broader effort to consolidate U.S. hegemony in Southeast Asia.29 This conspiracy theory sought to reinforce the party’s image as unpatriotic. Despite these allegations, the party secured the largest share of popular votes; however, the Thai Constitutional Court subsequently dissolved the party, ruling that it intended to topple the monarchy.30

In Malaysia and Indonesia, geopolitical narratives have been entangled with xenophobia. Both countries are major destinations for Rohingya refugees fleeing genocide in Myanmar. However, public sympathy toward the refugees waned during the COVID-19 pandemic, as resources became strained. In the lead-up to Indonesia’s 2024 national election, “coordinated” campaigns—in the words of the UN refugee agency—circulated online rumors accusing Rohingya refugees of taking advantage of local communities, culminating in a mob attack on a refugee shelter in Aceh province in December 2023.31 Candidates supportive of the Rohingya were also targeted online.32 In a bizarre twist, as the unfolding Israel-Hamas conflict gained attention, some netizens in Indonesia and Malaysia began associating Rohingya refugees with “Zionists” accused of occupying native lands.33 The irony of this conspiracy theory is glaring, given that the Rohingya are Muslim. Yet, influence accounts pushed the narrative that the Rohingya are not “real” Muslims, using this xenophobic rhetoric to stigmatize and discredit political figures who support the refugees.34

Appeal of Anti-Western Narratives

Third, many local actors find anti-West narratives promoted by foreign influencers appealing because they resonate with “shared sentiments” about the West’s declining legitimacy.35 In most Southeast Asian countries, which were formerly colonized by European powers or the United States, political elites and segments of the population embrace nationalism rooted in concerns over sovereignty and skepticism toward Western imperialism.36 This sentiment grew stronger after the U.S.-led war on terror and amid ongoing U.S. support for Israel’s war in Gaza, fueling anti-U.S. sentiment particularly in Muslim-majority countries such as Malaysia and Indonesia.37

The narrative of “Western hypocrisy” in the region aligns well with Russia’s standard accusations that the West exploits human rights and democracy as a facade to entrench its global dominance.38 This framing conveniently justifies Russia’s invasion of Ukraine. Kremlin-backed online propaganda has tapped into local discontent by portraying solidarity with “Muslim victims” of Western imperialism, despite Russia’s own crackdown on Muslim minorities.39 This message has resonated with netizens in Malaysia and Indonesia, many of whom view Russia as an alternative superpower standing up to the West.40 Rather than seeing Ukraine as a victim of Russian aggression, many netizens have adopted Russia’s narrative that Ukraine provoked the conflict.41 In Thailand, pro-establishment fan pages have reworked this narrative, portraying Ukraine as a historical part of Russia and framing the war as Russia defending its sovereignty.42 Once again, FIOs are at play, but local discontent with the U.S.-led global order also fuels this wave of online “participatory propaganda.”43

Conclusion

As much as foreign states orchestrate influence operations, local actors actively exploit these campaigns for their own purposes. Sometimes, their motivations are economic or political, but other times, they are ideologically driven to engage in anti-West propaganda. Analyses sounding the alarm about the dangers of FIOs often overlook these on-the-ground dynamics, mistakenly assuming that the foreign campaigns automatically translate into geopolitical setbacks. Without a preexisting ecosystem of local influence operations, domestic conditions that make populations receptive to FIO narratives, and local support or opposition to great power policies, FIOs would have less influence.44 Tackling the impact of FIOs requires a deeper understanding of these domestic factors and the local contexts in which such campaigns operate.

Notes

Counter-Disinformation Funding in the Global Majority Is Broken—Here’s How to Fix It

Imagine that you lead a respected legal watchdog somewhere in the Global Majority—the countries that encompass most of the world’s population, besides the United States, those in Europe, and several in East Asia. Like many civil society organizations in your country, you are preparing for upcoming national elections and rely on donors in the Global North for funding. To your frustration and surprise, these donors push you into supporting interventions copy-pasted from abroad, such as fact-checking and media literacy campaigns—a far cry from your bread-and-butter work on legal advocacy. What’s more, they require you to share your data with other civil society organizations working in a coalition using a cumbersome tool that requires significant investments of money, time, and staff training. As the elections approach, it becomes clear that coalition members, instead of playing to their strengths, are engaged in redundant work that reaches the same audience with diminishing returns. Worse still, even if the project is seen as successful, you may have to lay off staff when the grant ends because of the lack of postelection urgency from funders.

These real experiences were shared in a Global Majority knowledge exchange project organized by the Global Technology for Social Justice Lab (GloTech) at the University of Massachusetts Amherst.1 In 2023 and 2024, the lab convened three workshops for a total of ninety-three civil society leaders in the Global Majority, interviewed seventeen key players in election counter-disinformation coalitions, and held a follow-up survey, which received twenty-five responses. The resulting report is a critical look at the top-down flow of money and ideas from North to South, alongside insights for better ways of working.2

The Problems with Funding Today

Too often, there is a mismatch between the priorities of Global North funders and the preferred organizational strategies of activists on the ground. Strategies cannot be operationalized without funding, and so the agendas of Northern funders end up dominating local priorities. Perhaps nowhere is this more evident than in fact-checking and media literacy initiatives, which have boomed over the past decade. For instance, according to the Duke Reporters’ Lab, the number of fact-checkers around the world more than doubled between 2016 and 2023.3 Activists in the Global Majority worry that overreliance on fact-checking and media literacy contributes to tropes about “dumb,” brainwashed voters and that philanthropic support for these projects has taken too many cues from big tech companies at the expense of activist- and community-driven approaches.4

The resource imbalance between academic researchers in the Global Majority and their better-funded Northern counterparts also means that evidence-driven approaches reflect donor priorities, rather than local ones. Consider, for example, a July 2023 review of studies including randomized controlled trials (RCTs) of counter-disinformation interventions.5 It included 155 studies, more than 80 percent of which took place in Global North countries. The authors concluded that more support is needed for empirical studies of disinformation in Global Majority countries as well as for studies comparing Northern and Majority contexts. But in the absence of such studies, funders are using this limited evidence base to inform their agendas. If funders want to see RCTs in the Global Majority, they should incorporate them into the programs they support—but they should not ignore existing scholarship that is not based on RCTs.

The power imbalance between Global North donors and aid recipients in the Global Majority is a tectonic force shaping the landscape in which activists work. This top-down arrangement traps local activists in ineffective and extractive workflows. Some interviewees in the Global Majority knowledge exchange project complained of Global North research partners poaching their staff and of grants requiring the use of software and data collection to refine approaches for use in other countries. Global Majority civil society leaders expressed wariness about extractive arrangements where local harms and horrors are collected and decontextualized for tool development and advocacy elsewhere. As one interview participant said, “We are not your f—ing case study!”

How to Fix It

When we asked participants to envision an agenda by and for the Global Majority, the answers we received revealed common themes.

First, many observers in Global Majority countries see disinformation as an accountability issue, not a problem resulting from a deficit of good information or media literacy. They focus on accountability for players at all levels within a putrefying digital public square overrun by profit-driven clickbait and disinformation as a commercial service. This includes tech companies, whose underinvestment in content moderation they hope to expose and reverse. It also includes politicians who have leaned into new platforms and relationships with influencers to stoke voter anger and spread anti-establishment messages. More can be done to expose the conflict entrepreneurs and shed light on the many regulatory gray areas in social media infrastructure that politicians and influencers exploit to their political and commercial advantage.

The second theme is that exposing disinformation’s sources requires deep investigations, often combining online and offline methods to identify both the principals and agents of a given campaign. The focus on debunking disinformation displaces this desire for exposure and accountability, and the mostly online, open-source intelligence techniques that many donors encourage, such as social media monitoring and detection of inauthentic activity, do not fully substitute for investigative journalism or ethnographic research.

The third theme is that many activists wish they could do more community dialogue and outreach on the ground. “It’s harder to find funding for trust-building campaigns at the grassroots,” one interview participant told us. “Funders are obsessed with tools that are scalable. It’s not sexy to do community dialogues.” But activists feel this kind of granular work is important to build trust with communities outside of major metropolitan areas and diminish the impact of disinformation in ways fact-checks from afar cannot.

More important than any one strategy or approach is the need for structural change in the way civil society coalitions are created and sustained. It is possible, through more inclusive, bottom-up approaches, to unleash the Global Majority’s creative capacity. We identify three main ways that civil society organizations in the Global Majority and their philanthropic supporters in the Global North can do so:

  1. Encourage “big tent” coalitions. Instead of structuring coalitions around shared processes, tools, and approaches, embrace diversity and create spaces for civil society to exchange priorities and knowledge. Our research found that in the Philippines, many activists felt shoehorned into fact-checking projects that were not their specialty and that put them into competition with their peers in a crowded field. In Brazil, on the other hand, civil society entered into diverse partnerships that included issue area groups like Greenpeace and approaches ranging from community outreach to advertising reform. This allowed civil society to reach broader audiences and play to the strengths of individual coalition members.
  2. Redouble efforts to localize government-funded aid projects. One research professional told us that technology and democracy efforts are notoriously “ten years behind” in awarding more projects directly to local implementing partners rather than to large international development organizations based in the Global North. A February 2024 study similarly found that the share of funding the U.S. Agency for International Development (USAID) awarded to local partners fell “far below” its goal of 25 percent, even before the agency was dismantled by President Donald Trump’s administration in early 2025.6
  3. Support Global Majority knowledge creation and guard against extractivism. As mentioned above, the imbalance between research on the Global North and research on the Global Majority makes it difficult to create evidence-based programs that reflect local contexts and realities. Our work showed that activists and researchers recognize that they have much to gain by working together: researchers gain practical insights from activists, who in turn benefit from research findings when designing programs. However, this reciprocal relationship is hampered by the lack of opportunities and trust—practitioners fear extractive research arrangements, and there are too few initiatives to bring the two sides together. Funders can promote more productive, trusting relationships by supporting opportunities for repeat exposure, such as projects that integrate researchers into project implementation and academic fellowships for practitioners.

Acknowledging the Realities Under the Trump Administration

Trump’s administration has not been sympathetic to the need for more localized aid and increased autonomy for Global Majority activists. On the first day of his second term, Trump issued an executive order freezing U.S. foreign development assistance and reviewing it for “consistency” with his foreign policy, leading to chaos across the international development sector as career professionals struggled to determine what work could continue, what work could be salvaged with a pause, and what would happen to implementing partners who rely on program funds for their salaries.7 The administration subsequently dismantled USAID entirely, eliminating huge swaths of the U.S. workstreams dedicated to combating misinformation and disinformation. A close reading of the Project 2025 chapter concerning USAID suggests that the administration might instead pivot to include a more securitized focus on countering “malign influence” from adversaries—a priority that has led the United States to run its own influence operations in countries where USAID funded counter-disinformation work.8 Under the Trump administration, U.S. foundations will need to consider playing a bigger role in this space. They should start by committing to respect the viewpoints of local scholars and activists who call the U.S. focus on information integrity against foreign influence “a war that doesn’t deal with our problems.”9

In short, analysts should push back on the disempowering frames that depict the Global Majority as a digital dystopia of unfathomable and extreme technological harms that could be solved by importing tools and concepts from the Global North. Rather, partners in the Global North should engage Global Majority civil society as innovative civic entrepreneurs who are designing meaningful solutions to problems as they exist on the ground.

While the future of U.S. government aid in this area is dim and uncertain at best, other donors should commit to localizing more programs. Global Majority civil society leaders also have a chance to seize new opportunities for self-determination given the leadership vacuum in tech accountability left by the United States.

Notes

When AI Meets Cybersecurity: Framing Brazil’s Information Security and AI Challenges

Artificial intelligence (AI) has transformed the cybersecurity landscape over the past decade, leading to an increase in the frequency, impact, and sophistication of cyber attacks. While organizations can leverage AI to enhance their cyber defenses, detect cyber threats, and improve decisions about how to react, cyber criminals can also exploit the technology to launch targeted attacks at an unprecedented speed and scale, bypassing traditional detection measures.

Indeed, the increasing use of AI systems in a wide range of processes in various critical sectors—such as health, justice,1 and autonomous vehicle management—creates numerous new, and sometimes unpredictable, risks and can open new avenues for attack methods and techniques.2 Such risks are amplified when AI is deployed for automated decisionmaking, leading legislators around the world, including in Brazil, to consider appropriate risk regulations aimed at AI systems.3

This essay argues that considerable work is needed to support the implementation of existing and proposed cybersecurity and AI frameworks. In particular, technical standards are needed to specify and give meaning to the highly vague formulations that AI regulatory frameworks typically use to define cybersecurity risk management provisions. Notably, the essay focuses on the Brazilian context to explore how the country is dealing with the emerging threats and opportunities presented by the intersection of AI and cybersecurity, a set of issues that Brazil—and any other country—needs to consider seriously to build its AI sovereignty.4

AI and Cybersecurity: A Complicated Relationship

The relationship between AI and cybersecurity is dynamic, affecting defensive, offensive, and adversarial capabilities.5 While there is already a wide body of research on the technical aspects of AI and cybersecurity, remarkably little research examines the interaction of AI and cybersecurity from a regulatory and governance angle. To start, it is important to distinguish between defensive AI and offensive AI. Defensive AI usually leverages machine learning and other AI techniques to enhance the cybersecurity and resilience of computer systems, networks, and databases, and to protect individuals by shielding them against cyber threats.6 From this perspective, AI systems can increase the effectiveness of security controls aimed at protecting specific assets, for instance through automated malware analysis, active firewalls, and automated cyber threat intelligence operations.7
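
To make the idea of defensive AI more concrete, the short Python sketch below shows one common pattern alluded to above: an unsupervised anomaly detector trained on a baseline of ordinary network flows and used to flag traffic that deviates from it. The feature set, data, and thresholds are invented for illustration and do not correspond to any particular product, dataset, or standard.

```python
# Minimal sketch of a defensive AI pattern: unsupervised anomaly detection
# over network-flow features. All data and feature choices are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: [bytes_sent, bytes_received, duration_s, distinct_ports]
baseline_flows = rng.normal(loc=[5_000, 20_000, 2.0, 3],
                            scale=[1_000, 4_000, 0.5, 1],
                            size=(500, 4))

# Train on "normal" traffic only; the model learns what typical flows look like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_flows)

# Score new flows: predict() returns 1 for inliers and -1 for anomalies.
new_flows = np.array([
    [5_200, 21_000, 2.1, 3],   # resembles baseline traffic
    [90_000, 150, 0.2, 60],    # heavy upload across many ports, worth a closer look
])
print(detector.predict(new_flows))
```

In a real deployment, a model like this is only one layer among many and its alerts still require human triage; the point is simply that the "intelligence" consists of learning a statistical baseline rather than matching known signatures.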

In contrast, offensive AI, also known as AI-powered cyber attacks, involves the use of AI to launch malicious activities, enhancing attackers’ ability to detect and exploit vulnerabilities, develop new cyber attack types and strategies, or automate the exploitation of existing vulnerabilities.

A Paradigm Shift

The integration of AI capabilities constitutes a watershed moment in the development of cyber threats, significantly augmenting the efficacy, scope, scale, and precision of malicious cyber operations. This evolution marks a paradigm shift in the cybersecurity landscape, fundamentally altering the nature of both offensive and defensive strategies.

First, the democratization and increased sophistication of AI tools enable cyber criminals to automate and refine their attacks, making them more effective, dynamic, and difficult to detect. Machine learning algorithms, for instance, can analyze vast amounts of data to identify vulnerabilities in systems and networks, enabling attackers to exploit these weaknesses with greater precision. Automated phishing campaigns can be tailored to individual targets based on data harvested from the target’s social media accounts and other sources. This personalization increases the likelihood of the target falling for the phishing scam, as the messages appear more convincing and relevant. Critically, AI-enhanced malicious attacks now represent the top emerging risk, according to the latest version of the periodic Gartner study dedicated to risk monitoring, because “the relative ease of use and quality of AI-assisted tools, such as voice and image generation, increase the ability to carry out malicious attacks with wide-ranging consequences.”8

Second, AI is likely to expand the scope of cyber threats by allowing attackers to increase the scale of their operations with minimal human intervention. For example, attackers can use AI-powered botnets to implement massive distributed denial-of-service (DDoS) attacks, shutting down the targeted website, server, or network with a large volume of traffic. Ransomware attacks, in which an attacker infects a targeted device with malware and threatens to deny the victim access to their device or to release sensitive data unless the demanded ransom is paid (although payment does not guarantee data recovery, as there is no enforceable contract with cyber criminals and data decryption relies entirely on their “good faith”), are also becoming more widespread because of AI, leading to the emergence of a thriving global industry of ransomware-as-a-service (RaaS).9 In this context, AI is lowering barriers to entry for attackers and increasing the ease and availability of ransomware, resulting in high costs associated with recovery and extended downtime.10

Third, AI systems can substantially increase attackers’ ability to analyze complex datasets and recognize patterns, thus allowing them to execute highly targeted and precise attacks. For example, AI can be used to identify high-value targets within organizations and tailor attacks to their specific roles and responsibilities. AI can also allow cyber criminals to create realistic audio and video impersonations known as deepfakes, which can be used in social engineering attacks to manipulate individuals into divulging sensitive information or authorizing fraudulent transactions.11 In a memorable case of an elaborate deepfake scam, a finance worker at a multinational firm was duped into paying $25 million to fraudsters who had lured him into a fake emergency call.12

Fourth, the increasing sophistication of deepfakes can be used to orchestrate disinformation campaigns for both financial and political purposes. These technologies pose a novel cybersecurity threat to democratic processes by enabling malicious actors to undermine information integrity at an unprecedented scale. The current democratization of AI implies much greater and easier access to AI systems that, until just a few years ago, were only accessible to researchers and highly specialized companies or governmental actors.13 This process leads to an enormous expansion of the attack surface, both in terms of potential perpetrators and potential vulnerabilities and attack strategies that can be used.

Importantly, AI-driven cyber attacks have acquired a dynamic nature; they can adapt to changing defensive measures, making detection and mitigation more challenging. By using machine learning capabilities, attackers can alter malicious software in real time to avoid detection by traditional antivirus systems. For instance, AI-enhanced polymorphic or metamorphic malware can mutate its features or automatically “re-code” itself when it propagates to evade pattern matching detection systems that are traditionally deployed as security solutions. Furthermore, AI systems can be used to quickly identify and exploit zero-day vulnerabilities before patches can be developed and deployed.14
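
As a toy illustration (not drawn from the essay) of why such mutation defeats exact signature matching, the snippet below shows that changing even one character of a payload produces a completely different hash, so a blocklist of known-bad hashes misses the variant.

```python
# Toy illustration: exact-match signatures (here, SHA-256 hashes of payloads)
# stop matching as soon as a payload mutates, however slightly.
import hashlib

known_bad = {hashlib.sha256(b"malicious-payload-v1").hexdigest()}

original = b"malicious-payload-v1"
mutated = b"malicious-payload-v2"  # a one-character change, standing in for polymorphism

for sample in (original, mutated):
    digest = hashlib.sha256(sample).hexdigest()
    print(sample, "flagged by signature list:", digest in known_bad)

# Only the original is flagged, which is why defenders pair signature lists
# with the behavior- and anomaly-based detection described above.
```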

Crucially, defenders are also increasingly employing AI-based systems to detect cyber threats and vulnerabilities and respond rapidly, for instance by leveraging AI to identify and automatically patch software bugs. However, within a sort of cybersecurity arms race, attackers are also leveraging AI to outmaneuver these defenses. In a situation where both sides continuously refine their techniques, defensive AI systems must evolve rapidly to detect new attack patterns and anomalies, while policy and governance frameworks must be crafted to mitigate risks and facilitate communication, collaboration, and coordination among cybersecurity stakeholders.

Understanding the Brazilian Context

Despite notable advancements in recent years, the regulation of AI and cybersecurity in Brazil is highly fragmented, limited, and poorly implemented. By adopting multiple cybersecurity-related sectoral regulations, Brazil has improved in several international rankings that assess cybersecurity readiness.15 But regulatory oversight and cybersecurity implementation remain patchy because such processes are the responsibility of many different and uncoordinated entities, including sectoral regulators, private and public computer security incident response teams, and the military.16

Critically, Brazil has neither a general cybersecurity law nor a cybersecurity agency, which represents an unforgivable deficiency in 2025. The top institution responsible for cybersecurity governance and policy proposals is the Institutional Security Cabinet (GSI in its Portuguese acronym) of the Brazilian presidency. However, the GSI’s remit is limited to the federal administration, restricting the scope of its reach. Importantly, in December 2023, Brazil adopted a new National Cybersecurity Policy and established a new multistakeholder National Cybersecurity Committee,17 known as “CNCiber,” of which the author of this essay has been appointed a member.18 Among the tasks of CNCiber is drafting a proposal for a new national cybersecurity strategy and for a new body for cybersecurity governance and regulation.

Indeed, one of the reasons for Brazil’s fragmented cybersecurity regulatory landscape is the lack of a single institution responsible for coordinating the various dimensions of cybersecurity. At this moment, Brazil has neither an actionable cybersecurity strategy allowing the country to organically tackle the multiple—and mounting—cyber threats it faces, nor a cybersecurity agency able to assess the ways in which AI technologies are affecting such threats.

Furthermore, only limited AI regulation exists, primarily under the purview of the Brazilian National Data Protection Authority (ANPD). In this context, the Brazilian National Congress is currently considering dedicated legislation to regulate AI, which would include cybersecurity obligations related to AI systems. (At the time of publication, legislation was still pending and the rapporteur of a new Special Commission for AI, established by the Chamber of Deputies, had promised to alter the bill.)19

Information Security?

Information security is an essential dimension of both AI and cybersecurity. In Brazil, the ANPD is tasked with enforcing the Brazilian General Data Protection Law (LGPD) and ensuring that organizations comply with data protection obligations.20 Data security is a fundamental principle set by the LGPD, aimed at ensuring that personal information is protected against unauthorized access, loss, alteration, damage, or destruction. Importantly, the LGPD explicitly establishes a security-by-design obligation for data controllers and processors, who must implement the security measures that data subjects “can expect” in order to demonstrate that personal data processing activities are carried out in compliance with the law.

To comply with the LGPD, data processing agents—that is, the individuals or entities responsible for defining how personal data are processed in a given organization and implementing such decisions—are supposed to implement solid information security solutions, such as establishing an information security policy, raising awareness and capacity, and establishing technical measures to build data resilience. Without these, data processing should be considered irregular. In practice, however, data security compliance is poor at best. In the first four years after its inception, ANPD did not adopt the minimum data-security standards that it was empowered to enact in accordance with LGPD article 46.1, and its oversight is limited to receiving communications about data breaches without providing any solutions.

While the ANPD has a potentially enormous role to play in establishing data security regulations aimed at avoiding cybersecurity incidents, it has instead spent its energies on regulating the communication of such events to the public, providing guidance only on how an incident must be communicated rather than on how to avoid it. Indeed, Brazil ranks second globally for cyber attacks, which have exploded in number and sophistication because of the adoption of AI systems, together with frequent data leakages and a “thriving” black market for personal data.21

A more proactive approach has been adopted by the Ministry of Management and Innovation, through its Ordinance SGD/MGI No. 852, which established the Privacy and Information Security Program (PPSI).22 PPSI is designed to enhance cybersecurity in the Brazilian public administration by providing guidance on data governance and encouraging projects and adaptation processes aimed at increasing cybersecurity maturity, resilience, effectiveness, collaboration, and intelligence. However, the Brazilian Court of Auditors has recently assessed that the implementation of PPSI is at an alarmingly low level, noting a gross lack of compliance.23

While the LGPD and PPSI are essential information security pillars, they are not sufficient on their own. It is essential that a new cybersecurity strategy and a cybersecurity agency, to be proposed by the National Cybersecurity Committee, provide guidance on how to specify information security criteria applicable to all entities, with particular regard to providers of essential services, critical infrastructures, and all entities managing categories of sensitive information that are not personal data.24 Furthermore, a future Brazilian cybersecurity agency should establish cooperation agreements, and ideally an effective communication and coordination mechanism, with the ANPD and the other sectoral regulators to ensure a harmonized cybersecurity approach.

What Is an “Appropriate” Way of Regulating AI?

It is important to emphasize that both cybersecurity and AI are quintessentially multidimensional. Indeed, the effective regulation of AI risks and digital technology cybersecurity relies on the understanding that both AI and digital technologies are systems based on the interconnection of data, software, and hardware. Risks and vulnerabilities are inherent to both the elements that compose the systems and the ways such elements interact. The success of both cybersecurity and AI governance depends on having a good understanding of how the different components of digital and AI technologies interplay, how they are utilized, and what vulnerabilities arise in their use and deployment.25

Sound management of information and infrastructure, good stakeholder coordination, and solid capacity-building are therefore essential for both AI and cybersecurity regulation. However, in Brazil, each dimension or component of both AI and cybersecurity is currently regulated by multiple entities with limited or no coordination. While Brazil is in the process of developing a new AI framework, there are several concerns about the way in which the framework proposes to regulate cybersecurity aspects of AI and foster coordination among sectoral regulators.

For one, all versions of Brazil’s proposed AI framework—including the last one available at the time of this writing—have included a considerable number of vaguely worded cybersecurity provisions, such as obligations to “perform tests to evaluate appropriate levels of security” of AI systems (see article 18.c).26 “Appropriate” and “adequate,” along with “reasonable,” are every lawyer’s favorite adjectives because they can mean virtually anything. While such language is essential to preserve normative flexibility, with no further guidance it can easily turn into legal uncertainty, which is the opposite of what new regulations should bring.

Clarifying and specifying these flexible provisions will require considerable technical knowledge. It is not a coincidence that the EU AI Act delegates this task to technical standardization bodies.27 However, this solution has raised concerns from human rights advocates, who claim it constitutes a delegation of regulatory power to private and poorly accountable standardization bodies with little knowledge of the fundamental rights risks posed by AI systems.28

To address these challenges, the Brazilian AI bill proposes to establish an AI governance and regulation system, where all sectoral regulators would come together under the leadership of the ANPD “to regulate and classify high risk AI systems” considering, among other things, “the high potential for systemic harms, such as to cybersecurity, and violence against vulnerable groups” (see article 15.VII, which associates these two rather different risks for unspecified reasons).

The idea of a coordination system is promising, but the bill fails to articulate how it would function in practice and, most worryingly, who would deal with the cybersecurity dimensions of AI. Additionally, it seems risky to entrust the leadership of the system to an overstretched body that barely manages to fulfill its current mission. To think that the ANPD, under its current structure, can effectively lead a new system of such relevance and magnitude and effectively guarantee AI cybersecurity seems overly optimistic.

Conclusion

The relationship between AI and cybersecurity is driving significant and transformative developments. While AI has empowered malicious actors to conduct more impactful, far-reaching, and precise attacks, it has also underscored the importance of proactive and adaptive cybersecurity strategies. Indeed, the integration of AI into offensive and defensive cyber capabilities demands a fundamental shift in cybersecurity strategies.

In this context, fostering collaboration between government entities, private sector organizations, and research institutions is essential for Brazil—and all other states—to address the challenges posed by AI in the cybersecurity domain. The adoption of a multistakeholder approach is critical to understanding the cyber threat landscape and developing effective regulations, standards, governance, and capacity-building mechanisms. Indeed, these elements are key to implementing robust cybersecurity measures and promoting innovation in defensive AI technologies to cope with mounting AI-driven cyber attacks.

Unfortunately, despite some advancements, the current Brazilian approach does not seem capable of effectively confronting the mounting number and complexity of cyber threats. It is vital that considerable resources be allocated to support effective multistakeholder cooperation, which needs to be enshrined in the future strategic and institutional framework adopted by Brazil. This will not only increase the quality of policymaking with evidence-based solutions but, more importantly, enable inter-stakeholder coordination to implement cybersecurity measures in an agile and effective fashion.

From this perspective, the establishment of a robustly resourced cybersecurity agency must be seen as an imperative for Brazil, enabling the country to comprehensively assess how both existing and emerging technologies can either bolster or compromise cybersecurity. Considering the increasing reliance of critical infrastructure, essential services, and societal functions on AI systems, neither Brazil nor any other country can afford to operate without treating the cybersecurity of AI systems as a top priority.

Notes

Toward a Transatlantic Information Defense Framework

The Foreign Information Manipulation and Interference (FIMI) framework is starting to become the dominant method in the European Union (EU), the United States, and Canada to analyze dynamics in the information space—replacing loaded and tired terms like disinformation, propaganda, and fake news.1 The FIMI framework was developed and systematized by the European External Action Service (EEAS) in 2022 to serve as an integrated toolbox to pool EU resources for tracking, monitoring, and mitigating foreign influence operations and channel these resources into a coherent, EU-wide defensive mechanism.2

FIMI refers to coordinated efforts by foreign state or non-state actors to influence political, social, or economic outcomes in a target country by deliberately manipulating or distorting information or communication processes.3 Unlike disinformation, which focuses solely on the spread of false or misleading content, FIMI encompasses a broader range of activities, including the strategic amplification of true but contextually misleading information, suppression of critical narratives, and manipulation of social platforms to exploit existing divisions. It also differs from cyber attacks, as it primarily targets perception, trust, and decisionmaking processes rather than the integrity or functionality of digital systems. The FIMI framework is not just a new way of approaching old problems; it systematizes an elaborate and iterative early detection, data collection, and countermeasures architecture that incorporates a unified lexicon (tactics, techniques, and procedures), an integrated foreign influence monitoring and data collection interface, and a coherent repertoire of actions scalable at the EU level and translatable across member state languages.4
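
As a purely illustrative sketch of what a shared lexicon and data collection interface make possible, the snippet below models a single manipulation incident tagged with technique identifiers; the field names and codes are invented for the example and do not reproduce the actual EEAS or DISARM data models.

```python
# Hypothetical, simplified record for an information-manipulation incident.
# Field names and technique codes are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ManipulationIncident:
    incident_id: str
    attributed_actor: str                                 # e.g., a state-linked outlet
    target_audience: str                                  # country or community targeted
    languages: list[str]
    techniques: list[str] = field(default_factory=list)   # placeholder technique IDs
    platforms: list[str] = field(default_factory=list)

incidents = [
    ManipulationIncident(
        incident_id="2024-wb-001",
        attributed_actor="state-linked outlet (hypothetical)",
        target_audience="Western Balkans",
        languages=["sr", "en"],
        techniques=["T-CONTENT-01", "T-FLOOD-02"],         # invented IDs for illustration
        platforms=["Telegram", "Facebook"],
    ),
]

# A common schema is what lets different agencies pool and compare observations:
# filtering incidents by technique becomes a simple, language-independent query.
flooding_cases = [i for i in incidents if "T-FLOOD-02" in i.techniques]
print(len(flooding_cases))
```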

While the FIMI framework is by no means the first attempt to address foreign information manipulation, its reach goes beyond the confines of Brussels. It is now one of the main joint frameworks used by the EEAS and the NATO Hybrid Center of Excellence (COE), with the latter actively relying on FIMI’s interface to conduct its own monitoring of foreign influence.5 In 2024, the United States began drawing from the EEAS’s FIMI framework as a model of international cooperation to counter foreign influence, including developing a pilot project between the U.S. State Department and the EEAS focusing on the Western Balkans as a flashpoint of Russian influence operations.6 A month later, the State Department launched the Framework to Counter Foreign State Information Manipulation—a diplomatic mechanism to coordinate joint efforts with allies.7

In 2024, there was also greater convergence between the EEAS and Canada's Rapid Response Mechanism (RRM), which operates under the G7 framework and the 2022 Strategic Partnership Agreement.8 The RRM has begun to adapt some of the tools from the EEAS framework, the most important of which is the DISARM Framework, an information warfare escalation ladder that tracks organized manipulation before it reaches viral proportions.9 Similar coordination mechanisms are being developed with Australia and Japan that focus on China and use the EEAS's FIMI framework for a joint defense in Southeast Asia.10

The current momentum of the FIMI framework across EU partner countries suggests that a broader allied information defense initiative could be in the works. Indeed, the reason why the EEAS’s FIMI framework has become so popular so quickly is that it includes a robust attempt to establish a common information defense lexicon, as well as shared monitoring interfaces that are easily adaptable by allied countries.

However, 2024 also laid bare a number of obstacles to further developing the FIMI framework. The four most difficult to resolve are discussed below.

My FIMI Is More Important Than Your FIMI

The threat landscape for FIMI varies significantly across the United States, Europe, and East Asia, reflecting each region's geopolitical priorities and constraints. As countries focus on their own imminent and pressing dangers, it becomes harder to coordinate priorities across allies and, from a diplomatic standpoint, to agree on an effective prioritization of resources.

In Europe, the Russian FIMI threat is particularly urgent given geographical proximity and historical tensions. Russia's campaigns focus on destabilizing the EU's cohesion, challenging NATO, and influencing public opinion on energy dependency and security policy. In contrast, China's influence in Europe has primarily been economic and diplomatic, though there is growing concern about its covert influence activities. At the latest EU DisinfoLab conference, held in Riga in October 2024, only one of the dozen panels had a speaker focusing on China, with the rest focusing exclusively on Russia, an illustration of the divergence in threat perceptions among partners.11

For Japan and South Korea, FIMI threats are predominantly centered on regional tensions with North Korea and China. North Korea's tactics include cyber and influence operations targeting South Korea, while China's operations often seek to sway public opinion on security issues, maritime rights, and economic relations. These varied threat levels mean each region brings different priorities to a unified FIMI framework, potentially complicating consensus about whose FIMI threats should be addressed first.

DIMI Is as Important as FIMI

A significant obstacle to a unified FIMI defense framework is the presence in some countries of domestic stakeholders and interest groups that are directly connected to foreign influence actors. These actors make up a substantial portion of the domestic information manipulation and interference, or DIMI, ecosystem.

In the United States, some organizations and public figures promote narratives that align with the interests of foreign state actors, sometimes because of financial or strategic ties. For example, conservative media outlets and influencers associated with the Tennessee-based media company Tenet Media reportedly received funding linked to Russian state-backed media outlet RT and amplified pro-Kremlin viewpoints.12 Similarly, groups such as the National Rifle Association (NRA) have been scrutinized for past alleged associations with Russian officials who reportedly sought to cultivate influence within conservative circles in the United States.13

In Europe, several political parties, especially those on the far-right, have reportedly maintained ties with Russian entities. For instance, the French National Rally, led by Marine Le Pen, reportedly received a loan from a Russian bank; critics argued that the loan contributed to the party's pro-Russia stance, especially on issues like sanctions and EU-Russia relations.14 In Italy, the far-right League party, led by Matteo Salvini, has faced allegations of Russian connections, including claims that Salvini met with Russian officials to discuss potential funding.15

In Australia and New Zealand, economic ties with China have led to concerns about Beijing’s influence over local politics and businesses. Former Australian senator Sam Dastyari resigned amid controversies surrounding his links to Chinese donors and public statements that aligned with Beijing’s positions.16 In New Zealand, the dairy and tourism sectors heavily depend on Chinese markets, leading to apparent reticence among some business leaders and political figures to publicly challenge China over disinformation and its assertive foreign policies.17 These cases illustrate how direct and indirect ties between domestic actors and foreign states complicate efforts to form a unified framework to counter FIMI, as business or political stakeholders with interests that align with foreign governments may resist or undermine anti-FIMI measures.

Domestic entanglements with foreign actors, especially when those domestic actors gain political influence or governmental positions, complicate the creation of a joint FIMI framework. They can spark internal resistance, dilute commitments to anti-FIMI initiatives, and raise trust issues among framework members, who may be concerned about domestic actors in allied nations leaking sensitive information to foreign influence campaigns. The varying contours of DIMI across countries can thus impair allied cohesion against FIMI and lead to miscoordination of the efforts and policies aimed at addressing foreign interference.

The API Problem and Data Unavailability

Many platforms—such as Facebook, X (formerly Twitter), and TikTok—have tightened Application Programming Interface (API) access in recent years, often citing privacy regulations, data protection concerns, or proprietary interests. These restrictions limit researchers’ ability to retrieve crucial data on misinformation trends, bot activity, and network interactions in real time. Additionally, the high costs associated with API access on some platforms put it out of reach for many academic or public interest researchers.

Data availability is further restricted by platform policies that limit access to certain kinds of user-level or engagement data, particularly for researchers outside the United States. These limitations can make country-specific FIMI research exceptionally challenging. Without comprehensive datasets, researchers are often forced to rely on incomplete or inconsistent data, reducing the accuracy and impact of their findings. These limitations also make it difficult for researchers to collaborate across countries on joint FIMI projects, as data disparities can create inconsistencies in analytical methods and findings. The absence of standardized, affordable, and accessible data pipelines directly impairs the ability to detect and counteract foreign interference across diverse regions, hindering a globally unified approach to FIMI defense.

Platform Architecture

Platform architecture significantly influences the spread and success of different FIMI tactics, creating challenges for coherent, cross-platform research and response initiatives. Each social media platform has a unique architecture—encompassing content algorithms, user interaction features, and moderation policies—that shapes how information is amplified or suppressed. For example, TikTok’s recommendation-heavy feed and short video format make it an ideal venue for highly engaging, visually oriented disinformation, while X, with its open, real-time feed, is often used for rapid dissemination of breaking narratives or coordinated hashtag campaigns. Facebook’s groups and communities foster echo chambers where disinformation can incubate within specific interest clusters, creating more isolated yet resilient pockets of influence.

This diversity in platform architectures makes it challenging for a multi-country FIMI research initiative to adopt a uniform data collection and countermeasure approach. Researchers now have to tailor their data collection techniques to each platform’s unique features, making cross-platform comparisons difficult and creating methodological inconsistencies. As mentioned, platform-specific data limitations—such as closed APIs or restricted user-level data—can further fragment research efforts, leading to gaps in understanding how disinformation campaigns migrate across platforms and regions.

How to Build a Truly Transatlantic FIMI Framework

To move forward in building a cohesive transatlantic framework for countering FIMI, there are several ways to streamline operational collaboration and address existing structural obstacles.

First, given the divergent threat landscapes across the United States, Europe, and Asia, a centralized threat prioritization protocol should be implemented to identify and allocate resources to shared FIMI concerns. For example, an EU- and U.S.-led FIMI task force could systematically assess FIMI campaigns based on severity, immediacy, and cross-border impact. To enhance focus and responsiveness, the task force could leverage AI-driven analytics to classify and triage threats, identifying high-risk operations (for instance, Russian interference in EU elections or Chinese influence in the Asia-Pacific region) and deploying response teams accordingly. Under U.S. President Donald Trump, this coordination will likely be even more difficult, given his ongoing policies cutting funding and data access for U.S.-based researchers and institutes working on FIMI.18
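
To make the idea of a centralized threat prioritization protocol concrete, the sketch below shows one simple way such a task force might score and rank reported campaigns by severity, immediacy, and cross-border impact. It is an illustration only: the field names, weights, and example campaigns are hypothetical and do not reflect any existing EEAS or State Department methodology.

```python
# Illustrative triage sketch: rank hypothetical FIMI campaigns by a weighted
# score over severity, immediacy, and cross-border impact. All values below
# are placeholders, not an established assessment framework.
from dataclasses import dataclass


@dataclass
class Campaign:
    name: str
    severity: float      # 0-1, estimated harm if the campaign succeeds
    immediacy: float     # 0-1, how soon that harm would materialize
    cross_border: float  # 0-1, share of allied countries affected (normalized)


WEIGHTS = {"severity": 0.40, "immediacy": 0.35, "cross_border": 0.25}


def triage_score(c: Campaign) -> float:
    """Weighted score used to rank campaigns for joint response."""
    return (WEIGHTS["severity"] * c.severity
            + WEIGHTS["immediacy"] * c.immediacy
            + WEIGHTS["cross_border"] * c.cross_border)


campaigns = [
    Campaign("Election-timed influence operation", severity=0.9, immediacy=0.8, cross_border=0.6),
    Campaign("Long-running economic narrative push", severity=0.5, immediacy=0.3, cross_border=0.4),
]

# Highest-priority campaigns first.
for c in sorted(campaigns, key=triage_score, reverse=True):
    print(f"{c.name}: {triage_score(c):.2f}")
```

In practice, the weights themselves would be a diplomatic negotiation among partners, which is precisely where the divergent threat perceptions described above make agreement difficult.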

Second, effectively confronting domestic entanglements with foreign influence actors demands enhanced transparency alongside regulatory heft. U.S. President Joe Biden made considerable progress in this area. For example, near the end of his term, his team proposed updating the Foreign Agents Registration Act (FARA) to better track and disclose financial or ideological ties between domestic entities and foreign states.19 Similarly, his administration robustly supported the U.S. State Department’s Global Engagement Center, which was designed to serve as the hub of a global information resilience effort and funded research initiatives aimed at building synergies with Europe and beyond on countering information manipulation.20 But Trump and his allies have taken a different tack, criticizing disinformation programs for contracting out the work of “censoring real medical voices with real expertise that put real Americans’ lives in danger.”21 Trump’s reelection has imperiled these programs, resulting in their defunding and closure.22

Third, to address the API problem, the EU and other like-minded partner nations could establish a cooperative, standardized API access framework, with agreed-upon levels of data accessibility tailored to FIMI research needs. This could involve a data-sharing consortium involving platforms like Facebook, X, and TikTok, allowing qualified researchers and intelligence agencies access to FIMI-relevant datasets across borders. The consortium could also negotiate reduced API access fees for approved research projects, democratizing access for academic and public-interest researchers. Princeton University’s Accelerator initiative, which aims to create a joint repository of data on the information environment to foster international research on digital media, is a major step in the right direction and a model to draw from for multi-country research projects focusing on FIMI.23

Finally, to support cohesive transatlantic action, a formal allied information defense pact should be established, centered on a unified information manipulation detection and attribution lexicon and operational standards. This pact would require member countries to standardize key terms, methodologies, and response protocols to ensure alignment in tracking and countering FIMI threats. A common FIMI lexicon would ensure that all participants share a clear understanding of foreign adversary techniques, tactics, and procedures—making it easier to coordinate and compare data across diverse contexts. During the Trump administration, the bulk of this effort will likely fall to Europe, which will have to find ways to cooperate on research, funding, and data collection without the full participation of the United States.

Notes

Techno-Legal Internet Controls in Indonesia and Their Impact on Free Expression

Countries around the world are increasingly enacting or amending laws and regulations to control the internet. These regulations often require information intermediaries—such as internet service providers (ISPs) and social media platforms—to block or restrict access to certain types of content. Governments typically enforce these mandates through coercive mechanisms, including threats to revoke companies’ licenses, arrests, or prosecutions. As ISPs and platforms operate within state jurisdictions, they must implement these controls at the behest of national governments.

Indonesia provides compelling evidence of how regulatory frameworks shape state control over online content. Similar to the trend seen in other countries, Indonesia has introduced laws and regulations that require ISPs and platforms to enforce content restrictions using broad and ambiguous criteria such as “misinformation,” “fake news,” and “hate speech.”1 This development has raised significant concerns about the impact of such measures on free expression.2 In 2023, Freedom House reported that Indonesia was one of “forty-one governments [that] blocked websites with content that should be protected under free expression standards within international human rights law,” highlighting the global relevance of this approach.3

Given the key roles of ISPs and social media platforms in internet infrastructure, understanding the full scope of state-directed internet control requires more than just analyzing legal texts. It also demands technical investigations into how these intermediaries implement laws at the infrastructural and technical levels. Such an analysis would also uncover the potential long-term consequences of internet controls on users’ abilities to access information and engage in free expression.

This essay addresses that gap by examining Indonesia’s use of domain name system (DNS) redirection as a method of internet censorship. By analyzing how ISPs enforce the country’s internet control mandates, the essay sheds light on the broader implications of government-imposed controls, including their potential long-term effects on access to information and online freedoms.

How Indonesia Controls Internet Content

Indonesia is among many countries that control the internet through legal and technical mechanisms. For example, Russia has passed laws that facilitate state-directed internet control while imposing technical obligations on information intermediaries.4 The country’s internet regulator, Roskomnadzor, enforces these laws and has issued a detailed set of technical recommendations for ISPs to filter or block online content.5 Noncompliance results in sanctions, such as fines.6

Like Russia, Indonesia has established laws and technical guidance over the years to control information online.7 The implementation of controls, such as content blocking, in Indonesia is decentralized. That is, although the government sets guidelines about what content should be blocked—for example, through the official block list called “Trust+Positif,” or “Trust Positive”—technical implementation has traditionally been left to ISPs’ discretion.8 In other words, the Indonesian government does not currently operate a nationwide technical filtering system like China’s so-called “Great Firewall.”9

Since at least 2008, Indonesian ISPs have implemented government-directed blocking against so-called “negative” content, a term used to describe material deemed defamatory or objectionable (or violating social or moral norms).10 Laws such as the Electronic Information and Transaction (EIT) law, which contains provisions on defamation, and the Law on Pornography are commonly cited to justify internet controls. Both laws have been criticized for being vague and overly broad and selectively enforced against human rights activists, journalists, and government critics.11

With over 1,000 ISPs operating in Indonesia as of 2024, many of them privately owned, researchers have documented a wide variety of filtering devices, software, and content control practices.12 In 2024, the Internet Monitoring Action Project (iMAP) reported over 210,000 instances of confirmed website blocking in Indonesia.13 Then, as now, content targeted for blocking on the government’s Trust Positive block list included websites addressing political and religious issues as well as those related to sexuality and gender, such as LGBTQ websites.14

Many ISPs in Indonesia implement internet filtering by tampering with the domain name system (DNS), a method also employed in other Southeast Asian countries.15 DNS is key to the internet’s functioning because it translates domain names (such as carnegieendowment.org) to internet protocol (IP) addresses (such as 199.15.213.232), allowing internet-connected devices to find and communicate with one another.16 Public DNS servers, such as Google Public DNS, which, as of 2024, was the largest free public DNS service worldwide, perform this translation for users across the global internet.17 DNS tampering is “an umbrella term used to describe various forms of DNS interference” that affect information flows online.18 For example, Indonesian ISPs have used DNS hijacking to perform website blocking since the early 2000s.19 When this occurs, a request for a particular domain name returns an intentionally incorrect response or IP address; instead of the page that was requested, users receive a block page or a page stating that the domain name does not exist. Internet filtering using DNS hijacking is straightforward for ISPs to implement and is therefore widely used by ISPs in Indonesia and elsewhere. In addition, testing conducted by iMAP researchers in 2023 uncovered that some Indonesian ISPs used TCP/IP and HTTP blocking methods.20
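
As a rough illustration of how researchers detect this kind of DNS hijacking, the sketch below compares the answers returned by the ISP-assigned resolver and a public resolver for the same domain. It is a simplified example, not the methodology used by iMAP or the Citizen Lab: the test domain is a placeholder, it assumes the dnspython library, and legitimate differences (for example, geographically distributed content delivery networks) can also produce mismatched answers.

```python
# Simplified sketch of a DNS hijacking check: compare A-record answers from the
# ISP's default resolver and a public resolver. A disjoint answer set is only a
# hint of interference; CDNs can legitimately return different IPs by location.
import dns.exception
import dns.resolver  # pip install dnspython

DOMAIN = "example.org"       # placeholder test domain
PUBLIC_RESOLVER = "8.8.8.8"  # Google Public DNS


def lookup(domain, nameserver=None):
    """Return the set of A-record IPs for a domain, optionally via a specific resolver."""
    resolver = dns.resolver.Resolver()
    if nameserver:
        resolver.nameservers = [nameserver]
    try:
        return {rr.to_text() for rr in resolver.resolve(domain, "A")}
    except dns.exception.DNSException:
        return set()


isp_answer = lookup(DOMAIN)                      # uses the system/ISP-assigned resolver
public_answer = lookup(DOMAIN, PUBLIC_RESOLVER)  # query sent to 8.8.8.8

if isp_answer and public_answer and isp_answer.isdisjoint(public_answer):
    print(f"Possible hijacking: ISP returned {isp_answer}, public resolver returned {public_answer}")
else:
    print("Answers overlap or a lookup failed; no hijacking flagged by this simple check.")
```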

The deployment of various filtering systems and techniques by Indonesian ISPs in response to the country’s broad and vague laws has contributed to inconsistencies in content blocking. For example, the Ministry of Communication and Digital Affairs, Indonesia’s internet regulatory authority, has expressed concerns to ISPs since the early 2010s that many pornographic websites remain accessible despite the requirement to block them.21 These concerns led the Indonesian government to announce in 2015 that ISPs must adopt specific technical requirements to filter online content. Former minister Rudiantara also declared that the government was in “the final stage” of creating its own DNS server (called the “National DNS” or “DNS Nasional”), which network operators would have to “synchronize with” to perform filtering.22 In other words, once the National DNS was in place, Indonesian ISPs would cease using global public DNS servers like Google Public DNS.

The 2014 establishment of the National DNS, known as Trust+Positif, means that ISPs in Indonesia have to redirect all DNS traffic from their customers to that DNS, which contains a database of banned websites.23 As a consequence, attempts by internet users to access websites listed in this database are blocked. The government argued that the mandatory use of the National DNS by Indonesian ISPs was necessary to prevent access to pornography.24 However, the Trust Positive database included websites focused on human rights issues, LGBTQ content, and political criticism.25 Applying content filtering through the National DNS system was tantamount to restricting freedom of expression and silencing dissent.26

The Citizen Lab Uncovers a New Technique: DNS Redirection

As will be shown in a forthcoming report, Citizen Lab researchers conducted a study in 2024 to uncover how Indonesian ISPs are fulfilling the government’s requirement to synchronize with the National DNS. Using measurement testing, they found that two networks belonging to the Telkom and Fastnet ISPs had begun the synchronization process using a technique known as DNS redirection.27 DNS redirection is distinctive because, unlike other filtering methods, it prevents users from relying on a public DNS resolver, such as Google or Cloudflare, to access restricted content. Consequently, local users seeking blocked content have far fewer circumvention options.
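
A minimal sketch of how this kind of redirection can be surfaced is shown below, under two assumptions that are ours rather than the Citizen Lab’s: that the ISP transparently intercepts plain-text DNS packets addressed to public resolvers, and that it does not tamper with encrypted DNS-over-HTTPS traffic. If an unencrypted query addressed directly to Google Public DNS returns a different answer than an encrypted query to the same service, the plain-text query was likely answered by another resolver along the path. The test domain is a placeholder, and the Google JSON resolver endpoint (dns.google/resolve) is used only for illustration.

```python
# Sketch: compare a plain UDP DNS query addressed to 8.8.8.8 with an encrypted
# DNS-over-HTTPS query to the same service. Divergent answers suggest the UDP
# query was intercepted and answered elsewhere (e.g., a mandated national DNS).
import dns.exception
import dns.resolver  # pip install dnspython
import requests      # pip install requests

DOMAIN = "example.org"  # placeholder test domain


def udp_answer(domain):
    """Unencrypted query sent directly to Google Public DNS (interceptable on-path)."""
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]
    try:
        return {rr.to_text() for rr in resolver.resolve(domain, "A")}
    except dns.exception.DNSException:
        return set()


def doh_answer(domain):
    """Encrypted DNS-over-HTTPS query to Google's JSON resolver API."""
    resp = requests.get(
        "https://dns.google/resolve",
        params={"name": domain, "type": "A"},
        timeout=10,
    )
    records = resp.json().get("Answer", [])
    return {r["data"] for r in records if r.get("type") == 1}  # type 1 = A record


plain, encrypted = udp_answer(DOMAIN), doh_answer(DOMAIN)
if plain and encrypted and plain.isdisjoint(encrypted):
    print("UDP and DoH answers diverge: plain-text queries to 8.8.8.8 may be redirected.")
else:
    print("No divergence observed for this domain.")
```

In real measurement work, researchers would repeat such tests from multiple vantage points and control for content delivery networks before attributing a divergence to redirection.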

Although DNS redirection is a known practice in network and traffic management, its use for filtering purposes is a new discovery. As of November 2024, no studies had been published on the use of DNS redirection for internet censorship. Furthermore, the Open Observatory of Network Interference project, which provides tools that volunteers use to measure and document internet filtering worldwide, did not include testing for DNS redirection on its platform as of 2024, meaning that the technique’s prevalence remained unknown.

Conclusion

Indonesia’s implementation of internet controls is illuminating for several reasons. First, it showcases how the Indonesian government, like the Russian government, uses legal and technical methods to harmonize controls across many information intermediaries operating in the country. This approach to internet controls presents challenges because, unlike legal frameworks that are more discernible to the public, technical methods are less visible and require specific knowledge or expertise to understand. More funding and support are needed to research these strategies and bolster collaborative efforts between digital rights groups and internet control analysts.

Second, as this case study demonstrates, techniques like DNS redirection can be difficult for average users to circumvent. Digital rights activists and scholars must pay particular attention to how controls implemented through internet infrastructure or via technical means implicate free expression and access to information. Moreover, as governments experiment with different technical methods to control the internet, more research is needed to detect novel methods that inhibit online information flows and develop circumvention practices against them.

Finally, despite the guidance issued by the Indonesian government regarding its preferred use of DNS redirection, Citizen Lab research found that, as of 2024, most Indonesian ISPs implemented blocking through whichever method they saw fit. A potential reason is that DNS redirection is more costly and challenging for ISPs to implement than other forms of DNS tampering. Information intermediaries are often responsible for internet control implementation, and technical mandates to block, surveil, or reroute internet traffic may be communicated by the government only to ISPs and technical communities. Therefore, advocacy against state-directed controls should involve partnerships with ISPs and other intermediaries. As state efforts to control the internet will likely continue, examining emerging techno-legal control tactics is crucial to understanding their impact on civil liberties and developing mitigation strategies for protecting users’ rights.

Notes

A Case for the Disconnected: Focusing on the Unconnected Alone May Not Help Bridge the Digital Divide

The world is becoming more connected. As of April 2025, 5.64 billion people were connected to the internet.1 This reflects a steady increase from 2.77 billion in 2014. However, growing global connectivity rates obscure a troubling pattern: even as people gain access to internet infrastructure, their ability to use it is increasingly limited by governments. State deployment of internet shutdowns is on the rise.2 These shutdowns have significant consequences for citizens everywhere. This essay explores the impact of internet shutdowns and emphasizes the importance of accounting for disconnected people.

Shutdowns Do Not Help Anyone

The broader societal costs of internet shutdowns include economic losses; disruptions to education, healthcare, and communication; and potential human rights violations. These harms outweigh any theoretical benefits that governments cite to justify shutdowns.3 Shutdowns are not merely disruptions; they are deliberate tools of control. They often serve as stark illustrations of how authoritarian regimes wield digital repression to stifle dissent, suppress information, and curtail freedoms.

For instance, Myanmar experienced significant internet restrictions following the military coup in February 2021.4 The monthslong shutdowns targeted mobile internet services and specific social media platforms, affecting approximately 54 million citizens.5 The prolonged disconnection had severe implications, including hindering access to critical information, disrupting business operations, and isolating citizens from the rest of the world.6 In 2023, the estimated cost of Myanmar’s shutdowns totaled over $745 million.7

India has seen similar shutdowns, though with a more targeted geographic focus on conflict-prone areas like Jammu and Kashmir. In 2023, the country recorded the most internet shutdowns globally, with eighty-four incidents affecting millions of people. These shutdowns, though often justified by security concerns, disrupted daily life, produced no demonstrated security benefits, and caused economic losses of more than $31.5 billion that year.8

The news remained grim in 2024. According to a report by the digital rights group Access Now, 2024 was the worst year on record for shutdowns.9 The report counted “296 shutdowns in 54 countries,” which “continues a sharp uptick in the number of total shutdowns after what was already a devastating, record-setting year in 2023.” The leading driver of shutdowns was conflict, with “103 conflict-related shutdowns in 11 countries.” In these cases, militaries “deliberately turned to internet shutdowns” both in times of active fighting and as a tactic to control populations. More than 209 shutdowns, or 71 percent of the global total, were concentrated in four countries—Myanmar, India, Pakistan, and Russia—affecting millions of citizens.

As of December 2024, Comoros, Gabon, Mauritania, Mozambique, Mauritius, and Pakistan had restricted access to the internet because of elections. Comoros started the year with an internet disruption when violent protests followed President Azali Assoumani’s reelection in January.10 For twenty-two days in July, Mauritania blocked mobile internet access following presidential elections and protests calling for a rejection of the results.11 Mauritius shut down the internet multiple times—on October 25, November 3, and November 4—following protests over a disputed election.12

Accounting for the Disconnected

Given the rise in disruptions, disconnections, and full shutdowns, it is important to be precise about three categories of people: connected, unconnected, and disconnected individuals. Connected populations enjoy regular access to the internet. Unconnected citizens have never had access because of barriers such as the lack of infrastructure, affordability, or digital literacy. The disconnected are those who once had access but are temporarily or permanently cut off from the internet. This group often faces more severe repercussions during shutdowns because their lives and livelihoods may have come to rely heavily on internet connectivity.

Being disconnected from the internet may be more detrimental than never having been connected, as the psychological impact of having something taken away is often more profound than being denied access in the first place.13 This concept can be understood through the lens of behavioral economics, particularly the theory of loss aversion, which suggests that people experience losses more intensely than gains. When individuals or communities are disconnected from the internet, they lose access to communication channels, educational resources, familial and/or social connections, and economic opportunities, leading to frustration, anxiety, and a sense of isolation.14
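
For readers who want the standard formalization, Tversky and Kahneman’s prospect theory expresses this asymmetry with a value function that weights losses more heavily than gains; the parameter estimates below are their commonly cited figures and are included purely as an illustration of the general point, not as a model of internet disconnection specifically:

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25,
$$

where the loss-aversion coefficient $\lambda > 1$ implies that a loss of a given magnitude is felt roughly twice as strongly as an equivalent gain.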

Amid the internet shutdowns in Myanmar, students could not continue their education online, businesses relying on digital platforms suffered losses, and citizens were cut off from crucial information and from communicating with loved ones.15 The abrupt disconnection led to a state of uncertainty and helplessness, highlighting how much harder the loss of access was for those who had been connected than for those who never were.

Economic, Developmental, and Human Rights Consequences

The economic implications of internet shutdowns are profound. Experts estimate that in 2024 alone, internet shutdowns cost the global economy over $7.69 billion in forgone revenue.16 The Internet Society’s methodology for measuring the economic impact of internet shutdowns considers the impact on gross domestic product (GDP) per capita, employment, inflation, likely foreign direct investment, the age dependency ratio, and the fraction of the population residing in urban areas, among other factors.17 In Kashmir, for example, the 2019 internet shutdown led to estimated economic losses of $2.4 billion over 213 days.18
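
As a back-of-the-envelope illustration of scale, based only on the Kashmir figures cited above and assuming losses accrued evenly across the shutdown (a simplifying assumption made here, not part of the Internet Society’s model):

$$
\frac{\$2.4\ \text{billion}}{213\ \text{days}} \approx \$11.3\ \text{million per day}.
$$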

Shutdowns also impede development, because internet access is a critical tool for innovation, education, and healthcare. Disconnection can halt the progress of digital initiatives and set back developmental goals. During the COVID-19 pandemic, internet access became essential for remote work and online education. Shutdowns in various parts of the world during this period exacerbated the challenges faced by students and professionals, who already faced limitations in how they could access learning or perform their work. This period further highlighted the developmental setbacks caused by disconnections.

Finally, internet shutdowns raise significant human rights concerns. The right to access information is enshrined in international human rights law, and arbitrary shutdowns violate this right. The United Nations has repeatedly emphasized that restricting internet access undermines many associated rights.19 It argues that shutdowns can suppress freedom of expression, hinder free assembly, and limit access to emergency services. Indeed, shutdowns have been used during times of political unrest to stifle dissent and control political expression, infringing on citizens’ rights to information and free speech.

Conclusion

Internet shutdowns have far-reaching consequences—disrupting lives, economies, and societies. The unique harms suffered by disconnected individuals, who lose access to the services they once had, highlight the importance of preserving connectivity. As the world becomes more interconnected, ensuring consistent and equitable access to the internet should be a priority for all stakeholders. While the new Pact for the Future—approved by the United Nations during the September 2024 Summit of the Future—focuses on ensuring that the remaining 2.6 billion unconnected individuals obtain internet access, it is critical that stakeholders also pay attention to disconnected citizens.20 If the consequences of shutdowns and the livelihoods of disconnected individuals are not recognized, well-intentioned efforts may just entail pouring water into a leaking vessel while assuming the world is on track.

Notes

“America First” Meets “AI First”: Insights from DOGE

With stunning momentum, the Donald Trump administration has initiated a deep-reaching effort to remake the U.S. government. It has dismantled long-standing government institutions, ordered mass layoffs of civil service workers, and instituted steep funding cuts across multiple sectors.

The instrument behind this institutional upheaval is the Department of Government Efficiency (DOGE). Conceived of by tech billionaire Elon Musk, DOGE is an advisory entity created by executive order at the outset of Trump’s tenure. The boundaries of its influence are nebulous, and its mandate is ill-defined beyond the general notion of achieving greater efficiency in government operations.1 In its quest to achieve this aim, DOGE is undertaking a more radical experiment—using artificial intelligence (AI) tools to supercharge the remaking of the U.S. government. Unrestrained by any clear limits on its powers, DOGE has been inserting itself across government institutions, ordering massive, invasive changes, and strong-arming any opposition to its demands.

Much of DOGE’s activity is shrouded in opacity—the product of purposeful efforts to withhold information and stonewall legislative and public inquiries.2 Nonetheless, DOGE already provides a glimpse into how AI technologies can distort governance and offers a chilling lesson for citizens in other countries about the destructive impact of powerful technologies deployed in the service of an anti-institutionalist and illiberal political agenda.

DOGE’s and MAGA’s Shared Ideology

It would be easy to dismiss DOGE as simply an instrument of the Trump administration’s broader conservative agenda. But even as DOGE serves the Make America Great Again (MAGA) movement’s purposes, Musk and his team have brought their own set of motivations to Trump’s remaking of the federal government.

DOGE is rooted in a techno-libertarian mindset that fundamentally believes that societies can operate better if freed from bureaucratic encumbrances.3 The idea is not to replace one form of government power with another. Rather, the goal is to remove government restrictions as much as possible by replacing bureaucracy with machines, using algorithms and computer analysis to make rapid decisions, eliminating unnecessary regulatory barriers that hinder innovation, and promoting economic and individual liberty while scaling down human involvement to the absolute minimum.

MAGA takes a different approach. Its aim is not to free society from the government. Rather, it is to maximize executive power in service of conservative values.4 Elite institutions should be dismantled, immigrants deported, political opponents punished, and the economy rebooted in a nationalistic and protectionist direction. (This latter aspect is antithetical to techno-libertarians and explains why in the midst of Trump’s global tariff war, Musk disparaged Peter Navarro, Trump’s top trade adviser, as “dumber than a sack of bricks” and called for “zero tariffs” between the United States and Europe.)5

The composition of DOGE’s staff reflects these distinct camps.6 One grouping consists of first-term Trump officials and conservative lawyers deeply rooted in the MAGA agenda. They include DOGE spokesperson Katie Miller; she and her husband, Stephen Miller, are reportedly viewed inside Trump’s inner circle “as glorified babysitters for Musk, tasked with ensuring he stays within bounds.”7 Silicon Valley figures comprise a second faction, including tech leaders, engineers, and financiers with close ties to X (formerly Twitter) and SpaceX. They have little history with the MAGA camp; instead, their involvement reflects DOGE’s techno-libertarian underpinnings and the centrality of Musk’s leadership.

Despite these distinctions, MAGA and DOGE overlap on many of their aims. Moreover, their deregulatory agenda is not new. Long before Trump, U.S. conservatives had formulated a right-wing agenda that hinged on slashing government agencies and curbing regulations. Trump has been a willing enabler of these ideas. At the beginning of his first term—when he promised to “drain the swamp” and kicked off a multi-month hiring freeze on federal employees—his hostility to the bureaucracy knew few bounds.8 Later on, he dismantled institutional guardrails, demeaned the federal workforce, and used his position to enrich himself, while undermining institutional checks on his power.9 He is following the same playbook the second time around—handing out prominent positions to political allies while ensuring that his family members reap financial rewards from the presidency.10

Trump has also initiated an even more sweeping deregulation agenda. The DOGE apparatus and Silicon Valley’s technology have emerged as ideal instruments for implementing this vision. As Eryk Salvaggio describes in Tech Policy Press, “shifting the conversation to the technical is a way of locking policymakers and the public out of decisions and shifting that power to the code they write.”11 By crafting a narrative that links AI technologies with greater governmental efficiency, DOGE has cleared the path for the MAGA team to run roughshod over concerns about security, privacy, and democratic accountability in favor of speed and disruption, and ultimately regulatory dismantlement.

Reports have emerged about DOGE employees feeding data on employees, civilians, and funding into AI systems for analysis to make decisions about government staffing cuts and funding.12 Musk-affiliated political appointees are pushing to develop AI “coding agents” to automate processes such as agency finances.13 Government agencies are reportedly using AI tools to “catch and revoke” the visas of foreign nationals who appear to support Hamas, a dramatic expansion in the machine-enabled policing of conduct and speech.14

These efforts reflect an emergent reality: the symbiosis between Musk’s “AI-first strategy” and Trump’s MAGA agenda.15 While DOGE’s tech-based dismantlement strategy appears unprecedented, this is not the first time that Musk has attempted to radically remake an organization via the deployment of powerful technologies. His experience transforming X illustrates the stakes involved.

Lessons from Twitter

In 2022, Musk sent a text message to then Twitter CEO Parag Agrawal. It read: “What did you get done this week?”16 The message came as Musk maneuvered to join the company’s board and amid a clash with Agrawal over Musk’s criticisms of Twitter’s operations. Just days later, Musk purchased Twitter, assumed a leadership role, and set the ball rolling for the platform’s complete overhaul.

Three years later, on February 22, 2025, federal workers received an email from the U.S. Office of Personnel Management (OPM), titled simply: “What did you do last week?” The email demanded that federal workers send OPM five bullet points summarizing their accomplishments by the following Monday, or risk being fired. Musk initially warned on X that failure to respond would “be taken as a resignation.”17

This rhetorical echo was not the only parallel between Musk’s reorganization of X and the current DOGE context. After Musk completed his purchase of the company, he set out to cut its workforce. In short order, he laid off nearly 80 percent of X’s 7,500 employees.18 He warned the remaining staff that their employment was contingent on their “hardcore” participation in the company.19 These instructions were conveyed in an email titled “A Fork in the Road,” the same subject line used in an OPM email three years later to encourage federal workers to resign from the government.20 X reeled in the aftermath of these changes. Fired individuals sued, some remaining workers quit, and “the platform suffered numerous major outages and technical glitches.”21 It became a shell of its former self: its ad revenue fell over 55 percent between 2022 and 2023, it had lost 23 percent of its U.S. users by February 2024, and by October 2024, its valuation had plummeted by almost 80 percent from the price Musk paid for it.22 (Its value has risen in 2025 due to Musk’s pivot to AI, but it remains to be seen whether the gains will hold.)

Musk’s management of X reflected his belief that human oversight could be stripped away from automated tools with little loss in productivity and large gains in efficiency. It was a gamble he was happy to take even if there were setbacks along the way. In late 2022, Ella Irwin, Twitter’s vice president of trust and safety at the time, told the public that the company would prioritize automated content moderation.23 She emphasized that Musk believed the company had hindered itself by relying on people and that it would reduce manual reviewing processes in favor of machine-based ones. In the ensuing years, X leaned heavily on AI systems for content moderation, but the outcomes were poor. As programs, rules, and staff dedicated to preventing violent speech and misinformation were purged, the company saw marked declines in enforcement actions against hateful speech.24 Concerns grew about the error-prone nature of X’s automated reviewers and their potential to produce biased results. Instead of changing course, Musk doubled down on AI tools. He incorporated his xAI chatbot “Grok” into the X platform, adding a direct link to allow users to conduct queries.25 Even as Grok’s vulgar, political, or violent outputs proliferated, Musk stayed committed to the AI pivot, treating X as “a private testing ground for his AI ambition.”26

There was also another dynamic at play. Take, for example, Musk’s firing of company staff responsible for overseeing global content moderation and his dismantling of the Trust and Safety Council, an independent advisory group that monitored hate speech and harassment on the platform.27 Theodora Skeadas, who co-managed the council, told us that Musk’s actions demonstrated a “lack of respect for human staffing.”28 She outlined how the changes to X undermined workers’ “capacity to do work and entirely ended programs,” with particularly harmful consequences for “marginalized political groups” and “civic integrity” around elections. And she described how Musk’s belief that “fewer people make for more efficient systems and processes,” as well as his demands for total loyalty, cultivated a “culture of intimidation and fear” within the company. DOGE, she reflected, is “absolutely a parallel” to X in its approach to staffing. For Musk, relentlessly pursuing cost-efficiency was a far greater priority than ensuring his products operated in an ethical or trustworthy manner.

Finally, Musk’s leadership at X embodied his commitment to Silicon Valley’s “move fast and break things” mentality. The phrase—stemming from a 2012 Mark Zuckerberg letter—champions the idea that the speed necessary for successful innovation inherently comes at the cost of breaking things along the way.29 This concept, often linked with the process of “creative destruction,” in which obsolete predecessors are dismantled in order to build from the ground up, underpinned Musk’s management of his other companies.30 When SpaceX experienced one failed launch after another in the firm’s early days, Musk pushed hard to continue despite the safety risks and costs. When glitches were uncovered in Tesla’s Autopilot system—resulting in at least thirteen fatal crashes—Musk was dismissive, saying he had a “moral obligation to deploy it even though you’re going to get sued and blamed by a lot of people.”31 Likewise, as he reshaped X, the technical failures, operational disruptions, and backlash resulting from his widespread terminations and impractical expectations—such as demanding the closure of an entire data center in mere months—appeared to confirm his inclination to pursue reckless change regardless of the consequences.32

How Is DOGE’s Agenda Playing Out?

Based on Musk’s stewardship of X, what can be expected from DOGE? First, Musk’s team has leaned hard into Silicon Valley’s creative destruction mantra in its bid to remake the federal government. Examples of this are manifest. Just as Musk purged X of most of its employees, he has been driving personnel and funding cuts throughout the federal bureaucracy. In the first months after Trump’s inauguration, DOGE led efforts to institute “zero based budgeting” throughout the government, proposing to take all spending to zero and then rebuild from the ground up.33 Under DOGE’s guidance, Trump froze trillions of dollars in grants and loans, dismantled key departments and agencies, and fired thousands of workers, from probationary employees to inspectors general and senior military attorneys.34

These efforts have relied heavily on technological tools. At the Department of the Treasury, for example, workers are reportedly using AI filters to block grant proposals that include terminology related to diversity, equity, and inclusion (DEI).35 The U.S. Army is deploying the “CamoGPT” AI tool to review materials for DEI-related language as it seeks to purge this content.36 But DOGE has used AI to make far more complex and high-stakes decisions as well. At the Department of Education, the DOGE team has reportedly fed sensitive data into AI systems to make choices about which programs to slash.37 (DOGE staff reportedly uploaded Education Department reports into its AI system and asked the algorithm to flag “inefficiencies” that were then incorporated into proposals for reducing staffing and funding.)38 Entrusting AI with such subjective judgments is unproven and risky. Not only is AI software liable to produce unpredictable errors and biased results, but these problems are compounded by DOGE’s haste to generate results and its willingness to flout guardrails and established procedures.

Similar to X, DOGE’s upheaval is also creating significant turbulence with few meaningful results. One former Pentagon official describing DOGE’s wider involvement in the Defense Department said, “They’re not really using AI, they’re not really driving efficiency. What they’re doing is smashing everything.”39 As a result, regular tasks require more time, eroding productivity. In the meantime, DOGE is saddling civil servants with inconsequential administrative requirements. “These new directives are not only wasting government manpower and taxpayer dollars. They’re also resulting in worse services for Americans,” writes Catherine Rampell for the Washington Post.40 A good case in point is the Social Security Administration (SSA), where Trump’s firing of over 12 percent of the agency’s staff has sent it into a free fall.41 Its phone lines have experienced multi-hour wait times, frequent website crashes have prevented Americans from accessing their accounts, and spending freezes have deprived the remaining workers of basic office supplies. Similar reports of beleaguered and confused operations have emerged across the government, including in the Internal Revenue Service (IRS) and the Bureau of Land Management.

As DOGE gets deeper into its dismantlement of the U.S. government, the second phase of its strategy is coming into view. Once again, Musk appears to be borrowing from his X playbook by laying the groundwork for the mass automation of scores of governmental functions previously carried out by civil servants. In a recent interview with Senator Ted Cruz, he zeroed in on the “source code” as the essential foundation of the state.42 “Well, the government is run by computers. So you’ve got essentially several hundred computers that effectively run the government,” Musk told him. “Because all you’re doing is asking a human who will then ask another human or ask another human, and finally, usually, ask some contractor who will ask another contractor to do a query on the computer.” To be sure, AI technology already plays a role in federal processes. But these tools have largely been confined to basic functions, such as using chatbots to expedite agencies’ data analysis or help local governments navigate regulations.43 Musk’s vision of automation is far starker: cut human-to-human interactions to the bone and replace what he believes are redundant civil servants with AI-powered computers.

One government official told the Washington Post it may be that the “end goal is replacing the human workforce with machines” altogether.44 Or as New Yorker writer Kyle Chayka argues, while “government run by people is cautious and slow by design,” this DOGE “machine-automated version will be fast and ruthless, reducing the need for either human labor or human decision-making.”45

Take, for instance, the General Services Administration (GSA), where Thomas Shedd, a former Tesla engineer, was installed to run the Technology Transformation Services division. He is already implementing plans to use coding agents to automate the GSA’s analysis and finance functions. But Shedd aspires to more. GSA reportedly aims to expand its AI chatbot software, “GSAi,” to automate functions across other federal agencies.46 As one GSA employee suggests, the program could be “used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data.”47 In this vision, there is little room for human input—government functions are planned, crafted, and implemented from the ground up by machine intelligence.

It remains to be seen whether DOGE will accomplish its maximalist goals, but at a minimum, it will erode human judgment by entrenching risky and illiberal uses of technological tools. In the area of surveillance, for example, Secretary of State Marco Rubio has launched a “Catch and Revoke” effort that draws upon AI tools to evaluate the social media accounts of student visa recipients.48 Resulting assessments have already led to erroneous deportations and punitive measures against students. The administration has also expanded its digital monitoring program, a partnership with GEO Group, a private prison operator and digital surveillance company, which currently tracks 180,000 migrants and has been instrumental in hundreds of arrests.49 Trump’s team has also proven willing to turn its AI surveillance inward to monitor its own employees. According to Reuters, Environmental Protection Agency supervisors received information that DOGE would use AI to surveil government staff, “looking for language in communications considered hostile to Trump or Musk.”50

DOGE’s methods will likely give rise to privacy abuses and data violations as well. At OPM, reports have emerged about DOGE workers gaining “the ability to delete, modify or export the personal information of millions of federal workers and federal job applicants.”51 At the Treasury Department and the SSA, DOGE has gained access to millions of citizens’ highly sensitive data, leading a federal judge to block DOGE’s access to SSA systems, citing privacy law concerns.52 And, in the IRS, DOGE has reportedly brought in operatives to develop a “mega API” to consolidate the agency’s data into a single place.53 (Presently, IRS data is compartmentalized into dozens of specialized systems, and workers are only granted access on a need-to-know basis.) One IRS worker warned that this integration would create an “open door controlled by Musk for all Americans’ most sensitive information with none of the rules that normally secure that data.”54

Conclusion

The Trump administration’s use of DOGE as a battering ram to carry out its goal of rapidly remaking the federal government is a cautionary tale for other countries. While recent reports suggest that Elon Musk is taking a step back from his DOGE responsibilities, there is little question that the initiative will continue. DOGE’s short track record spotlights the tremendous risks involved. AI tools can easily be instrumentalized to destroy institutions, wipe out accountability, and enable corruption. Other democracies ought to take heed of the United States’ failure to insulate itself against private business interests and unregulated technological ascendancy.

For countries where there is already a predisposition to abuse the instruments of government power for political or personal gain, the DOGE project presents a master class in how powerful technological tools can be deployed, in a matter of weeks, to undermine an accountable bureaucracy and replace it with something far less functional and far less resistant to abuse. As leaders mirror the illiberal rhetoric and far-right ideological agenda coming out of the White House, DOGE’s model is likely to be replicated elsewhere.

The United States has long held itself out as a model of democratic norms. It is an advanced democracy and has a long history of adherence to the rule of law. But DOGE’s techno-maximalist agenda is testing the limits of America’s democracy.

Notes

The United States Should Re-embrace “Digital Solidarity”

Speaking to an audience of the world’s leading cybersecurity professionals in May 2024 at a global information security conference in San Francisco, then U.S. secretary of state Antony Blinken announced that America’s new “North Star” for digital and cyber foreign policy would be the principle of “digital solidarity.”1 Taking cues from a Lawfare essay by Pablo Chavez, the United States International Cyberspace and Digital Policy Strategy that was released at the RSA Conference framed digital solidarity as a “willingness to work together on shared goals, help partners build capacity, and to provide mutual support” while recognizing the importance of using technology in a rights-respecting manner.2

Eight months and an election later, in February 2025, Vice President JD Vance struck an entirely different chord with his remarks at the Paris AI Summit.3 While Vance’s speech largely garnered attention because of its bare-fisted castigation of the European Union’s regulatory approach, it also laid down the basic contours of U.S. cyber and digital foreign policy under the Donald Trump administration. In line with the administration’s broader retreat from multilateral and multistakeholder cooperation writ large, Vance clearly signaled a shift away from digital solidarity. Right out of the gate, he noted, “The United States of America is the leader in AI, and our administration plans to keep it that way,” an individualistic comment that hinted at the administration’s prioritization of competition over cooperation on questions of global AI governance. Like previous U.S. administrations, he highlighted the dangers of ideological bias in AI systems and their potential misuse by authoritarian countries (such as China), but rather than provide incentives for countries to partner with America, he issued a stark warning: “partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure.”

The speech envisaged a world driven by U.S. influence on account of its technological prowess and brute material power. The gloves are finally off. Engaging with the United States will happen only on America’s terms and, as Ukrainian President Volodymyr Zelensky found out in the Oval Office, dissent will come with a price.4

At the end of the summit, the United States grabbed headlines once more when it refused to sign the final declaration because of its references to regulation, a clear body blow to international cooperation and a further shift away from the framework of digital solidarity.5

Why Digital Solidarity Works

When the Joe Biden administration first introduced the concept of digital solidarity, it marked a critical departure from prior approaches to cyber issues.6 Fifteen years ago—as exemplified in Secretary of State Hillary Clinton’s 2010 remarks on internet freedom—the United States took for granted that the pendulum of global internet governance would swing toward openness and liberal values.7 Unsurprisingly, this vision never quite materialized. Instead of embracing openness, governments subsequently constrained internet access within their territorial boundaries through measures that restricted cross-border flows of data.8 Nation-states weaponized the internet for electoral interference and informational manipulation purposes.9 Domestic censorship measures also arose.10 At the same time, geopolitical and ideological challengers like China increased their influence in the digital sphere, both through the development of global digital infrastructure and in shaping norm-making forums.11

The Biden administration’s 2024 refocusing of internet governance around the concept of digital solidarity offered a valuable conceptual frame to explain how U.S. thinking could evolve to respond positively and productively to the modern digital landscape. It compelled policymakers to go beyond the “democracies versus autocracies” pitch and accept that America’s vision of cyberspace governance would not be adopted by all countries.12 It was a useful way for the United States to build a larger coalition of countries against China by signaling that the United States was not coming to the table with a rigid and ideological vision of the internet but rather was willing to work on select issues, such as cybersecurity standards, secure supply chains, and capacity building, with different countries.

Just because the Trump administration is tacking in a new direction does not mean it cannot incorporate elements of the digital solidarity agenda that overlap with its own priorities. The administration should consider supporting two areas of digital policymaking: offering better and cheaper alternatives to China’s products that also protect digital rights in their design; and using international institutions to shape the rules and guardrails for various technologies.

Reframing Digital Solidarity for the Trump Administration

First, as Vance articulated in Paris, a key goal for the United States is to counter China’s influence among developing countries. As such, it would be sensible for the Trump administration to pursue initiatives that resonate internationally while also advancing America’s core interests. Empirical research shows that the developing world’s approach toward partnerships with advanced economies is pragmatically driven by domestic interests, security stakes, and developmental needs rather than by ideological or geopolitical alignment.13 For example, India was quite comfortable acquiring information and communication technology (ICT) products from Chinese tech giant Huawei before a 2020 border clash between Chinese and Indian soldiers on their disputed frontier. The conflict caused India to reassess its strategy toward Chinese tech products and to restrict Chinese applications and equipment from its core digital infrastructure.14 Similarly, in Southeast Asia, Huawei leverages its capacity-building efforts and the cost effectiveness of its products to retain a significant presence in the region despite territorial disputes over the South China Sea.15 And while there are concerns over Chinese surveillance, policymakers and the general public in countries like Indonesia feel strongly that the Five Eyes are no better on this front.16

Amid great power competition, the overriding interest of emerging powers is to acquire necessary infrastructure, human resources, and capital from countries across the ideological spectrum based on quality, cost effectiveness, and geopolitical risk.17 The implication is that to compete with China, at the bare minimum, the United States must provide better and cheaper alternatives that do not undermine digital rights.

America’s efforts to promote the Open Radio Access Network (O-RAN) are a good illustration.18 O-RAN is a non-proprietary approach to telecommunications networking that uses open, interoperable interfaces between network components, offering an alternative to Huawei’s closed, vertically integrated systems. U.S. diplomacy has focused on partnering with and providing financial resources to universities, government departments, and telecom companies in developing countries such as India, Indonesia, and the Philippines to adopt O-RAN.19 Openness is a value that developing countries have long prioritized in building and deploying technologies. However, the jury is still out on whether O-RAN can fulfill its original vision. Some experts argue that O-RAN has underperformed and failed to make a dent in Chinese vendors’ 5G market share.20 Others maintain that O-RAN is technically sound and could become commercially viable once 6G is rolled out.21 In short, O-RAN is an intriguing option that has made real efforts to account for and engage with the interests of the developing world. Rather than pursue coercion or one-off transactions, the Trump administration could adopt and expand upon this model, identifying rights-respecting technological solutions that offer an attractive value proposition to third countries and investing in them to blunt China’s efforts.

Second, before the Trump administration fully disengages from international organizations and multilateral frameworks, it should carefully weigh the consequences of doing so. Within a rapidly evolving and contested international order, working through international institutions to set common rules of the road for the governance of cyberspace reinforces America’s interests. Trump’s retreat from global governance institutions and withdrawal of funding from organizations working on digital rights and democracy issues only enables adversaries to advance an alternative, state-centric vision for the internet.22 The United States would be better served by continuing to find common ground with other countries and establishing technology guardrails to address global challenges, while endorsing and sustaining its own vision of the internet.

Under Biden, U.S. officials led efforts to forge consensus on global digital governance anchored in principles of fairness, accountability, transparency, safety and security, data privacy, and human oversight.23 For instance, in 2024, the UN General Assembly adopted by consensus a U.S.-brokered resolution on forging “safe, secure and trustworthy” AI.24 The resolution addressed not only common safeguards for AI but also closing digital divides and developing data governance, themes that appeal to developing countries.

While it is too early to make an informed assessment of the Trump administration’s technology foreign policy doctrine, early signs strongly suggest that it does not believe in jointly setting norms and standards through multilateral processes, prioritizing deal-based mercantilism instead.25 In the technology sphere and beyond, this approach would be harmful to America’s reputation and interests in the long run.

A Word of Hope 

Even if the Trump administration abandons the principles of digital solidarity, other countries must continue to respect and celebrate networks and coalitions of civil society actors who support, engage with, and demonstrate solidarity with the work of their peers worldwide. The #KeepItOn coalition, coordinated by the nongovernmental organization Access Now, for example, works with civil society groups, media, and lawyers around the world to challenge internet shutdowns through litigation and public awareness campaigns.26 Civil society organizations around the world, including Human Rights Watch and Amnesty International, have collaborated to resist the deployment of facial recognition technologies for surveillance in public spaces.27 Carnegie’s Digital Democracy Network also provides a platform for individuals to engage with scholars and activists from other parts of the world and apply lessons learned to their own research and advocacy.

Digital solidarity through such transnational coalitions fosters mutual understanding, support, and information exchange in the service of shared goals. Even if governments neglect this vision, actors in civil society and academia should continue to build these bridges.

Notes

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.