
China’s AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era

The competing imperatives of control and growth have shaped Chinese AI policy since top leadership began paying close attention to AI in 2017, evolving cyclically with China’s self-perception of its relative technological capabilities and economic position.

Published on July 17, 2025

Introduction

The release of DeepSeek-R1 in early 2025 transformed the global artificial intelligence (AI) landscape overnight, with the model demonstrating capabilities that placed Chinese models squarely at the global frontier. Seemingly surprised by their own developers’ success, China’s leaders have responded with newfound confidence. They have invited leading AI pioneers to high-level Chinese Communist Party (CCP) meetings,1 encouraged local governments to accelerate AI deployment across critical infrastructure,2 and promised to create and improve China’s AI laws and policies.3 This shift represents a substantial departure from China’s self-perception in the period immediately following ChatGPT’s November 2022 release. At that time, China prioritized economic stimulus and regulatory flexibility to help Chinese companies catch up to the cutting-edge systems produced by U.S. companies, such as OpenAI and Anthropic.

China’s recent policy evolution reflects a fundamental pattern in the CCP’s strategic thinking: when China perceives itself as technologically vulnerable, it leverages technology as an engine for economic growth; when it feels strong, it reasserts control through heavy-handed ideological measures. These competing imperatives of control and growth have shaped Chinese AI policy since top leadership began paying close attention to AI in 2017, evolving cyclically with China’s self-perception of its relative technological capabilities and economic position.

DeepSeek’s rise has placed this cycle at an inflection point: while China’s leaders have regained confidence in the country’s AI capabilities, China’s lackluster economy threatens the financial foundation for continued success. It marks the first time since 2017 that the main factors driving Chinese policymaking—its technological confidence and broader economic growth—have moved in opposite directions. This unfamiliar situation will test the leadership in new ways and make it harder than ever for observers—inside and outside of China—to anticipate Beijing’s moves.

To help anticipate where Chinese AI policy may go in the future, this paper traces China’s cyclical trends across four distinct eras: the ambitious “Go-Go Era” (2017 to early 2020) of minimal regulation and massive investment; the restrictive “Crackdown Era” (2020 to late 2022), when the CCP reasserted control over tech companies; the pragmatic “Catch-Up Era” (2022 to early 2025) that loosened restrictions to boost economic growth; and the current “Crossroads Era,” triggered by DeepSeek’s breakthrough, which has both accelerated AI adoption and positioned China for global leadership in AI governance, but has raised questions over the economic viability of China’s AI strategy.

Understanding these policy cycles is essential as China’s AI capabilities now genuinely compete at the global frontier. How China regulates its cutting-edge AI development will directly shape international technology standards, governance frameworks, and economic competition worldwide. It is imperative for governments, companies, and civil society actors to understand the forces driving Beijing’s approach and where it may lead next. This paper therefore focuses on the key Chinese ministries driving governance decisions, the evolution of major policies in response to domestic and international developments, and the growing influence of global AI conversations on China’s regulatory approach. It emphasizes the most significant developments that capture China’s shifting priorities and emerging governance structures.

The Go-Go Era (2017 to Early 2020)

The release of China’s “New Generation Artificial Intelligence Development Plan” (NAIDP) in 20174—setting forth China’s ambition to become a global leader in AI by 2030—marked the beginning of a period characterized by ambitious goals, substantial resource investment, and minimal regulation. The plan emerged from China’s broader economic strategy during this period, which emphasized supply-side structural reforms, what it called new drivers of growth, and the implementation of the Made in China 2025 industrial policy. With gross domestic product (GDP) growth targets still hovering around 6.5 percent and the trade war beginning in 2018,5 China’s leadership viewed AI as another frontier where state-directed development could help close the technological gap with the United States.

The Go-Go Era

Dates: 2017–Early 2020

Signature Policies: National AI Development Plan

Key Agency: Ministry of Science and Technology

Key Economic Indicators: Chinese AI equity investment reaches 48% of the global total in 2017; steady annual GDP growth around 6.5% throughout the period

Key Events: State Council’s mass innovation and entrepreneurship promotion initiative; local governments’ AI development policies

Why It Matters: The Go-Go Era was characterized by the government’s focus on rapid AI development through state planning, heavy investment, and minimal regulation. It was marked by both central and local government initiatives to speed up AI innovation and position China as a global AI leader, driving the mobilization of resources across ministries, local governments, and tech firms.

China’s approach to AI was informed by its broader success with state-led development models in emerging technologies, such as 5G, where companies like Huawei and ZTE had advanced significantly. While 5G achievements reinforced the viability of industrial policy and state investment, China’s push into AI also stemmed from a strategic recognition of AI’s cross-sectoral potential.6 At the time, Chinese leaders primarily understood AI through specific applications—such as computer vision for surveillance, speech recognition, and industrial automation—rather than as a transformative platform technology. The potential of generative AI models was not yet apparent to policymakers, making AI seem more like traditional technology sectors China had successfully competed in.

The NAIDP served as a catalyzing force, mobilizing resources across government ministries, research institutions, local governments, and private companies. The Ministry of Science and Technology (MOST) took the lead in coordinating implementation, establishing advisory committees,7 and channeling resources toward both foundational research and practical applications. This represented a traditional science and technology development approach, with content control and information management agencies, such as the Cyberspace Administration of China (CAC), playing only a marginal role during this period. The financial backing came not just from state funding but also substantial private sector investment and venture capital (VC) from both Chinese and international investors. Chinese equity funding for AI startups surged from 11 percent of global investment in 2016 to 48 percent in 2017, surpassing the United States’ 38 percent share.8

Crucially, the 2017 NAIDP built upon an already substantial government VC infrastructure, first established in 2005. Between 2013 and 2018, an average of 238 new government VC funds were created annually before the AI focus intensified.9 The NAIDP’s innovation was not creating new financial mechanisms but redirecting the existing government VC system toward AI-specific targets. Government VC funds were geographically dispersed compared to private VCs concentrated in coastal regions,10 enabling the state to channel AI investment into inland areas where private capital was absent. This geographic reach became crucial for the comprehensive AI development the NAIDP envisioned.

The regulatory and VC environment reflected China’s perception that it was in a technological catch-up phase, prioritizing rapid innovation and deployment over early regulatory control. While the technology was genuinely new and regulatory frameworks had not yet developed, the permissive approach articulated by the NAIDP was driven by the belief that China needed to prioritize rapid development to maintain its national competitiveness.

The State Council actively promoted “mass innovation and entrepreneurship,”11 and local governments became key actors in implementing the AI agenda.12 Provinces and cities launched their own AI strategies, built research parks, and offered generous subsidies to attract companies and top talent—competing with one another to become national AI hubs.13 China’s leading tech companies—Alibaba, Baidu, and Tencent—made significant investments in AI capabilities, both aligning with government priorities and pursuing their own strategic interests. Because Chinese companies were operating in a domestic market actively shielded from major Western competitors by regulatory and censorship policies, they were able to scale AI applications rapidly while also seeking to extend their technological footprint globally.14

This growth-focused approach enabled rapid development across multiple AI applications. The government simultaneously began widespread deployment of AI-powered surveillance systems, demonstrating early confidence in computer vision capabilities. Chinese companies achieved notable successes in facial recognition, voice recognition, and other narrow applications that would later become central to both economic development and social control.

Toward the end of the Go-Go Era, however, preliminary concerns about the party’s ability to control AI technology began to emerge. The CCP started expressing worries about how recommendation algorithms could influence the party’s control over Chinese citizens’ information environment, while deepfake technology threatened to undermine information integrity through fabricated yet authentic-looking content.15 These concerns remained largely theoretical during this period, with only preliminary regulatory responses. However, they presaged a shift toward a more assertive governance approach as China’s technological capabilities matured and political priorities emphasized greater control.

By early 2020, the foundations for a new regulatory paradigm had emerged. The benevolent, pro-commercial tech environment was about to change as China laid the groundwork for its later shift toward algorithmic governance and comprehensive data oversight—developments that would define the next phase of AI policy.

The Crackdown Era (2020 to Late 2022)

The AI policy ecosystem changed dramatically in early 2020, ushering in a new era focused on hardline crackdowns and ideology-laden, control-focused actions. This shift reflected broader political priorities under Chinese President Xi Jinping’s consolidating leadership, including the Common Prosperity campaign targeting tech wealth inequality,16 the Dual Circulation strategy emphasizing domestic self-reliance,17 and heightened focus on political control surrounding the party’s one hundredth anniversary in 2021.18 Crucially, China’s strong economic performance—with GDP growth rebounding to 8.4 percent in 202119—made regulatory crackdowns seem economically feasible, while high tech sector valuations suggested the economy could absorb significant disruption to achieve political objectives.

This regulatory assertiveness was enabled by China’s growing confidence in its domestic AI capabilities. Many in the party believed that China had largely caught up with—and perhaps even surpassed—the United States in key areas of AI. With Chinese companies dominating global facial recognition markets and achieving clear leadership in computer vision, the move toward strict regulation reflected a belief that control would not hamper technological advancement. The period coincided with China’s rigid Zero COVID policy and reflected a broader pattern of assertive state intervention across multiple sectors. The government showed little hesitation in decimating industries worth tens of billions of dollars, such as the for-profit tutoring sector,20 demonstrating its willingness to prioritize ideological control over what it perceived as short-term economic costs.

The Crackdown Era

Dates: 2020–Late 2022

Signature Policies: Regulation on Recommendation Algorithm Information Services; Regulation on Deep Synthesis Internet Information Services

Key Agency: Cyberspace Administration of China

Key Economic Indicators: China’s GDP growth dips to 2.3% during the first COVID-19 lockdown in 2020 before rebounding to 8.4% in 2021

Key Events: Multiple rounds of pandemic lockdowns; tech crackdown on companies like Alibaba and Didi

Why It Matters: The onset of COVID-19 lockdowns and crackdowns on major tech companies characterized the CCP’s shift toward the ideological control that defined the Crackdown Era. The Cyberspace Administration of China came to dominate the AI regulatory scene, as the Chinese leadership’s growing confidence in the country’s AI capabilities and the introduction of landmark regulations on recommendation algorithms and deep synthesis technologies ensured the centrality of content control in AI governance.

The tech sector became a primary target of this regulatory assertiveness in what became known as the tech crackdown. The CCP systematically reasserted its power over China’s leading technology companies through anti-trust investigations, stricter regulations, record-breaking fines, and, in some cases, the acquisition of board seats by state entities.21 High-profile cases included the last-minute suspension of Ant Group’s IPO,22 a $2.8 billion fine levied against Alibaba for monopolistic practices,23 and cybersecurity investigations into ride-hailing giant Didi Chuxing immediately after its U.S. listing.24

This assertive regulatory approach was enabled by genuine confidence in China’s domestic AI capabilities. During the Crackdown Era, Chinese companies SenseTime, Megvii, and CloudWalk dominated global facial recognition markets,25 while companies such as Hikvision achieved clear leadership in computer vision applications for surveillance.26 China had become the world’s top producer of AI research publications,27 Alibaba and Baidu had developed their own AI chips,28 and autonomous vehicle projects increasingly permeated daily life.29 This track record created confidence within the party that strict regulation would not significantly hamper technological advancement and might even enhance it by directing innovation toward strategically valuable applications.

During this era, the CAC emerged as the dominant force in tech regulation, expanding far beyond its original internet content management role to encompass cybersecurity, data governance, and algorithm regulation. In China’s fiercely competitive inter-bureaucratic environment, the CAC successfully positioned itself at the forefront of tech oversight while MOST remained important but clearly secondary. The CAC’s rise was facilitated by the practical reality that AI applications had achieved massive scale: hundreds of millions of users were experiencing algorithmic content curation daily through platforms including Douyin, Weibo, Taobao, and Jinri Toutiao, with algorithms determining what news, entertainment, and shopping content people consumed.30 Those algorithms could curate content to strengthen ideological control—but initially, customization enabled by recommendation algorithms threatened the party line.31

In response, the CAC developed two significant AI regulatory frameworks that reflected the party’s information control priorities. The “Regulation on Recommendation Algorithm Information Services” from March 2022 imposed extensive requirements including algorithmic transparency, user controls over personalization, and requirements that content promote “positive energy” and socialist values.32 The “Regulation on Deep Synthesis Internet Information Services” from January 2023 required labeling of AI-generated content, real-name registration for deepfake services, and approval processes for synthetic media applications.33 Both initiatives reflected the party’s concerns about AI’s potential to disrupt established information control mechanisms:34 recommendation algorithms could replace officially sanctioned top stories with personalized content, while deep synthesis, China’s term for deepfakes, threatened information integrity through fabricated yet authentic-looking content.

The two regulations had some practical impacts, such as pushing platforms to add features that allowed users to turn off algorithmic recommendations.35 But many of the provisions were too vague to be enforced, such as a requirement to “actively transmit positive energy.”36 More importantly, these measures laid the groundwork for future regulations. They created the mandatory algorithm registration system37 that would be adapted and refined for China’s generative AI regulation. And they gave CAC regulators hands-on experience dealing with AI companies and their products, learning what information was useful and what interventions were technically feasible.

The Crackdown Era established content control as the paramount priority in China’s approach to AI governance. By creating regulatory mechanisms focused primarily on controlling AI’s impact on information flows rather than the kind of safety or ethical considerations that were bubbling up in the West, the CCP laid the groundwork for a distinctive Chinese approach to AI governance—one that would be challenged by the sudden emergence of ChatGPT and other generative AI models. This confidence, and the general push for control, informed a growing discussion about a potentially comprehensive national AI law.38

The Catch-Up Era (Late 2022 to Early 2025)

The debut of OpenAI’s ChatGPT in November 2022 marked a pivotal moment in China’s approach to AI governance. The chatbot’s capabilities demonstrated that Chinese AI development, despite years of massive investment and favorable rhetoric, had fallen behind American innovations.

This technological shock coincided with China’s most severe economic challenges in decades. GDP growth plummeted to 3.1 percent in 202239—its second-lowest level in at least forty years. Youth unemployment rose to a devastating 21.3 percent in June 2023 before the government temporarily stopped publishing the data.40 The property sector crisis deepened with Evergrande’s default and Country Garden’s distress, as property investment fell 10 percent.41 Local governments faced debt crises so severe many could not pay salaries.42 By 2023, foreign investment turned negative,43 deflation risks emerged, and consumer confidence remained depressed. Compounding these challenges, the government faced public protests against its Zero COVID lockdown policies in November and December 2022. These so-called white paper protests, grounded in frustration over deteriorating economic conditions and the harsh social control restrictions during the pandemic, revealed cracks in social stability that made economic recovery even more urgent.44

These broader economic pressures were compounded by AI-specific obstacles. U.S. semiconductor export controls imposed in October 2022 threatened future AI development in China by restricting access to advanced graphics processing units (GPUs) crucial for training large models. The actual impact of these controls would eventually prove modest because of prior Chinese stockpiling of chips and NVIDIA’s decision to create new chips compliant with the controls. But in late 2022, no one knew this, and the controls appeared to pose a major—even existential—risk to Chinese AI development.

These twin pressures, technological competition and economic necessity, triggered a significant shift in China’s regulatory approach to AI. The country’s leadership was ready to push policy levers designed to close the capabilities gap, even if it meant giving up some control.45

The Catch-Up Era

Dates: Late 2022–Early 2025

Signature Policies: Interim Measures for the Management of Generative Artificial Intelligence Services; Draft Basic Security Requirements for Generative AI Services

Key Agencies: Cyberspace Administration of China, National Development and Reform Commission

Key Economic Indicators: GDP growth slows to 3.1% in 2022 before rebounding to 5.4% in 2023; youth unemployment hits 21.3% in June 2023

Key Events: Response to launch of ChatGPT; white paper protests; perceived effects of U.S. export controls

Why It Matters: At the end of 2022, China was stunned by ChatGPT’s release, constrained by its deepening economic slowdown, and faced with active pushback against its ongoing lockdown. With the need to jumpstart its economy, it recalibrated toward a relatively more pro-development and industry-responsive approach. Notably, China’s landmark Interim Measures for the Management of Generative Artificial Intelligence Services was softened following expert and corporate pushback. The party empowered the National Development and Reform Commission to lead broader efforts to align innovation policy with national development goals and international AI governance trends.

In the key case of China’s generative AI regulation, regulators initially followed their established control-focused playbook. In April 2023, the CAC released a draft generative AI regulation imposing extraordinarily strict requirements on AI developers, including the effectively impossible standard of ensuring “truth, accuracy, objectivity, and diversity” in all training data and outputs.46

However, the draft regulation generated unprecedented pushback from China’s AI policy community. Chinese scholars and researchers held roundtables and published articles suggesting specific changes to soften overly restrictive provisions.47 Major tech companies, particularly Baidu, which had released a limited, invite-only version of its Ernie Bot large language model (LLM) a month before the draft generative AI regulation was released, quietly lobbied for similar modifications.48 When the final version was published in July 2023—the Interim Measures for the Management of Generative Artificial Intelligence Services49—it reflected many of these experts’ suggestions. Technical standards bodies associated with the CAC, particularly TC260, then created detailed technical standards documents—the most important of which was the Draft Basic Security Requirements for Generative AI Services.50 While still highly burdensome, they generally provided developers with clearer compliance pathways. This evolution was captured in the title of an analysis by leading Chinese AI scholar Zhang Linghan,51 using a turn of phrase often articulated in CCP circles: “Failing to develop is the greatest threat to security.”52 Amid this broader pro-development push, China delayed its plans to establish a comprehensive national AI law, instead pursuing a path focused on incremental changes to its existing AI regulatory architecture.

During China’s relative regulatory loosening, the National Development and Reform Commission (NDRC) ascended as a central AI policy coordinator, challenging the CAC’s dominance. The NDRC was listed as the second signatory on the final generative AI regulation, following the CAC and ahead of five other ministries and agencies.53 In China’s system, bureaucratic influence is measured partly by signature order on major regulations and which ministry coordinates implementation—these rankings determine budget allocations and career advancement.

Traditionally focused on economic planning, the NDRC began playing a more prominent role in balancing security concerns with development imperatives—in AI and beyond. Its growing role in AI policy manifested through attempts to institutionalize mechanisms designed to promote cross-departmental policy harmonization—what the party calls macro-policy orientation consistency evaluations.54 These evaluations required that new regulations undergo assessments for their potential economic impacts, particularly on business sentiment. While the CAC continued to own critically important policies for the definitions and execution of content control, the NDRC’s rise reflected the party’s recognition that pure control-focused policies were economically unsustainable and that economic recovery was more critical than ideological purity.

During this period, the Chinese state also began engaging more actively in international AI governance discussions, including those focused on frontier AI safety. In October 2023, China released its Global AI Governance Initiative,55 presenting principles for AI development and governance. A few weeks later, Chinese representatives participated in the UK-hosted AI Safety Summit at Bletchley Park in November 2023.56 For China, building a governance regime on its own terms was still paramount. But international conversations were increasingly shaping China’s governance structures. Whereas “AI safety” in China before the Catch-Up Era had largely focused on content control, a growing number of senior academics and experts closely connected to government had begun to articulate the need to address catastrophic risks,57 such as loss of human control.

Although some of China’s international engagement suggested it was warming up to more safety-focused measures, progress was slow and largely limited to its scientific expert community. In May 2024, the Chinese Ministry of Foreign Affairs engaged in high-level official talks on AI with the United States in Geneva.58 While the U.S. delegation brought some of its most technical AI minds to talk about AI safety, the more foreign affairs–focused Chinese delegation had other priorities, and the talks stalled. Famous Western scientists continued to call on China to establish its own AI Safety Institute to match those in the United Kingdom and United States.59 The decision document of China’s July 2024 Third Plenum—the once-in-five-year gathering of senior CCP officials that sets the country’s economic and social direction—included a notable reference to AI safety. Specifically, the party called for the country to “establish an AI safety supervision and regulation system.”60 While some interpreted the statement as evidence that China would do more to formalize its frontier AI safety work within government, months followed without clear signs of a government-connected AI safety-focused institution.61

By the end of 2024, China’s AI ecosystem had regained significant momentum. Companies such as Alibaba and Tencent were developing increasingly sophisticated open-source models. Over the course of the year, a small, unknown startup connected to the Chinese hedge fund High-Flyer would demonstrate substantial performance improvements that would put it on course to challenge the top models coming out of America.

The Crossroads Era (Early 2025 to Present)

The emergence of DeepSeek’s groundbreaking AI model in early 2025 marked the beginning of a new phase in China’s AI development and governance.62 DeepSeek-R1 demonstrated capabilities that placed it squarely at the global frontier, with performance comparable to the most advanced models developed by American companies. From the CCP’s perspective, this achievement signaled that China had largely closed the generative AI gap that opened with ChatGPT’s debut, restoring the leadership’s confidence in the country’s technological competitiveness. Some early signs hint at a familiar pattern: renewed strength breeds renewed control impulses. The key question is whether economic constraints will moderate this cycle.

In China’s previous eras of AI policymaking, the two main factors that influence Beijing—technological confidence and economic growth—moved in tandem, teeing up relatively clear and predictable regulatory choices. During periods when the economy boomed and AI companies thrived, the obvious choice was for China to prioritize internal control measures. Conversely, when the economy floundered and AI companies lagged, China knew it needed to loosen the regulatory reins. Now, for the first time in recent years, the two factors have become delinked: Chinese AI companies look strong while the overall macroeconomic picture remains weak. This combination has created a new test for Chinese decisionmaking.

The Crossroads Era

Dates: Early 2025–Present

Signature Policies: Measures for Labeling of AI-Generated Synthetic Content

Key Agencies: TBD

Key Economic Indicators: VC investments in Chinese AI startups decline by almost 50% year-on-year in Q1 2025

Key Events: Release of DeepSeek-R1; launch of China AI Safety and Development Association (CnAISDA)

Why It Matters: DeepSeek’s breakthrough in early January 2025 renewed the CCP’s confidence in China’s AI ecosystem, catalyzing the Crossroads Era. Preliminary evidence suggests the government could move toward stronger control measures, with the party asserting state control over AI outputs, user data, and safety frameworks. But with the aspiration to integrate AI into the real economy and leverage AI for economic growth, and with ongoing economic fragility, the party faces a novel challenge in its balancing act between control and growth.

Even as China reckons with increasingly capable generative AI systems, it has returned to a core focus on domestic economic diffusion into what it describes as the “real economy.” It seeks to achieve this through its AI+ initiative63—a spin-off of its previous Internet+ campaign. Announced at the 2024 Central Economic Work Conference, the initiative aims to digitally transform traditional sectors through AI integration into applications and critical infrastructure.64 So far, AI+—which enjoys meaningful support from key Chinese industry actors—is not a specific policy that lays out a plan of action and delineates responsibilities to different ministries. Instead, it is more like a policy meme, a rallying cry that signals the central leadership’s ambitions and makes it easier for actors across the wider Chinese system to mobilize resources in pursuit of the overarching goal. Doctors have used DeepSeek to help develop treatment plans, hotel groups are deploying it to enhance their customer service offerings, and officials are using it to aid the search for missing individuals.65

Local governments are keen to put their own spin on the central government’s AI+ directives, just as they did during the Internet+ campaign. Liaoning Province, for example, has said it plans to achieve 100 billion yuan in AI industry scale by 2027 through a Shenyang-Dalian “dual core” as a direct response to AI+;66 the province aims to leverage the city of Shenyang’s heavy industrial base and Dalian’s tech capabilities to create AI-enabled manufacturing hubs. These provincial-level initiatives are reflective of local governments translating Beijing’s strategic signals into concrete targets and resource commitments tailored to their industrial strengths.

While DeepSeek’s advances have contributed to renewed technological optimism in parts of China’s policy and research communities, encapsulated by developments like AI+, this momentum has emerged within a challenging economic landscape. Rather than sparking a wave of private investment, venture capital funding for Chinese AI startups declined by nearly 50 percent year-over-year in the first quarter of 2025,67 reflecting investor wariness amid sluggish growth, regulatory scrutiny, and geopolitical uncertainty. In response, state-affiliated actors have taken the lead: major state-owned enterprises and local governments are accelerating projects to adopt and deploy DeepSeek’s models68 within their own work despite billions of dollars in debt burdens.69 The surge of new projects is driven in part by political incentives, as provincial authorities seek to attract investment and demonstrate administrative competence. This explosion of AI activity and adoption by state-owned enterprises and local governments mirrors the aftermath of China’s 2017 NAIDP, when the same actors rushed to procure and promote AI applications to demonstrate their ability to advance the national leadership’s priorities.70

The sustainability question—and whether China will be able to replicate its successful domestic technological diffusion model of the 2010s in AI—becomes particularly acute when examining the economics of frontier AI development. Training and operating advanced models require massive computational resources and energy consumption that strain both financial and physical infrastructure.71 Unlike the Internet+ era, when mobile payments and e-commerce generated clear revenue streams,72 many AI applications struggle to demonstrate immediate profitability, raising questions about whether current investment levels can be maintained without concrete economic returns. In the short term, China’s economic vulnerabilities could be exacerbated by the country’s ongoing trade war with the United States. In the first half of 2025, the Chinese public strongly supported resisting what it sees as economic coercion.73 If this resistance and the trade war continue, they could inflict material economic harm on an already struggling domestic Chinese economy. China’s AI ecosystem may also be feeling the acute impact of limited access to sufficient compute in the wake of ongoing U.S. export controls—including greater restrictions on H20 chips.74 DeepSeek-R2, for example, may be delayed due to a lack of access to enough high-end chips.75 This lack of hardware could stifle China’s ambitions to diffuse DeepSeek models throughout Chinese society—and throttle China’s economic growth engine.

Nonetheless, the emergence of high-quality open-source models such as DeepSeek and Qwen could significantly alter this economic calculus. It would be economically untenable for each Chinese province to train frontier models from scratch. But the presence of cost-effective, open-source foundation models could allow localities to build domain-specific applications at a fraction of the cost, especially if Chinese companies continue to make substantial cost-efficiency gains in their frontier AI models. Viewed holistically, the uncertain economics of frontier AI development could ultimately either complicate or enable China’s attempts to diffuse AI nationally.

Given China’s renewed technological confidence, it remains to be seen whether China will return to the control-focused impulses that recur cyclically in Chinese policymaking, despite these underlying economic challenges. So far, this shift has been most visible in the government’s intensified oversight of DeepSeek, where Zhejiang provincial authorities now reportedly screen investors before they meet with company leadership and have instructed headhunters to cease talent recruitment efforts targeting the firm.76 The restrictions extend to individual employees, with some DeepSeek staff reportedly surrendering their passports because they have access to information that could be classified as state secrets.77 These company-specific measures reflect broader sectoral controls, including directives for China’s leading AI researchers to avoid travel to the United States to prevent inadvertent sharing of strategically sensitive information. They also reflect companies’ increasingly cautious efforts to stay aligned with the preferences of the central government, which now views the technology through a stronger national security lens.

Beyond DeepSeek, China’s AI industry has been more widely exposed to growing control measures so far this year. China’s regulators formally adopted new rules in mid-March of 2025, known as the “Measures for Labeling of AI-Generated Synthetic Content.”78 The rules will require service providers to “label” AI-generated content starting in September.79 The required labels take two forms: explicit labels, such as visible “AI-generated” disclaimers, and implicit labels, meaning metadata tags embedded in the file. In addition, in a move spearheaded by the CAC, China plans to launch digital IDs for identity verification.80 These IDs will reduce companies’ access to user information, instead channeling large swaths of user data into more centralized government hands.81 These initiatives point to a broader strategy of centralizing control over AI outputs and data flows, positioning the state as the primary gatekeeper for both innovation and information in China’s post-DeepSeek AI ecosystem.
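To make the explicit/implicit distinction concrete, the sketch below embeds an implicit label as a metadata (tEXt) chunk in a PNG file using only Python’s standard library. This is a minimal illustration of the general technique of file-embedded labeling, not an implementation of the Measures themselves; the `AIGC` keyword and its value are hypothetical assumptions, not the specific metadata schema Chinese regulators mandate.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"


def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type,
    # data, then a CRC32 computed over type + data.
    crc = zlib.crc32(ctype + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", crc)


def make_minimal_png() -> bytes:
    # Smallest useful image: 1x1 grayscale, bit depth 8.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    raw = b"\x00\x00"  # one scanline: filter byte + one pixel
    return (PNG_SIG
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))


def add_implicit_label(png: bytes, keyword: str, text: str) -> bytes:
    # Insert a tEXt metadata chunk right after IHDR
    # (IHDR = 13 bytes of data + 12 bytes of chunk overhead).
    ihdr_end = len(PNG_SIG) + 12 + 13
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return png[:ihdr_end] + _chunk(b"tEXt", payload) + png[ihdr_end:]


def read_labels(png: bytes) -> dict:
    # Walk the chunk stream and collect all tEXt key/value pairs.
    labels, pos = {}, len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            labels[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length
    return labels
```

An explicit label, by contrast, would be rendered visibly in the image or interface itself. Note that an embedded tag like this survives file copying but not re-encoding or screenshotting, which is one practical reason labeling regimes typically require both forms.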

But it remains to be seen how aggressively China will pursue a strategy of control given its overall economic uncertainty. In addition, the current balance between development and security could shift if geopolitical tensions increase or if domestic stability concerns rise to the fore.

From the perspective of Chinese AI developers, the country’s renewed technological confidence presents a double-edged sword. On one hand, greater attention can attract substantial state-backed financial support and offer tech company leaders a platform to directly influence the most important policymakers shaping the tech policy environment. On the other hand, when Chinese leaders have felt more technologically secure, they have been more willing to impose stricter controls on technology companies. For example, while many Chinese AI companies previously had relatively wide latitude to set their own policies in areas such as frontier AI safety, they may now be more hesitant to unilaterally deviate from implicit norms emerging from the party apparatus. In addition, greater party focus on companies can bring unwanted attention that hurts Chinese companies’ ability to attract international investment and earn market share abroad. Many of China’s AI companies have struggled to raise capital without access to U.S. investment82—which has become even more limited under U.S. Treasury Department rules regulating outbound investment to China.83

China appears increasingly confident that it has sufficient capabilities to control the content its LLMs put out. Specialized CAC teams have conducted compliance audits with a strong focus on ensuring high rates of appropriate responses to queries regarding politically sensitive information.84 This confidence is reflected, for example, in the accelerating pace of the CAC’s algorithm registry: 238 generative AI services were filed with the CAC in 2024, up from 64 in 2023.85

The slight regulatory easing that has occurred over the past two years could also reflect a reevaluation of the threat posed by LLMs within the broader context of online speech controls. An LLM provides unique responses to each individual user’s query, making it a one-to-one medium, as opposed to a one-to-many broadcast medium like social media. If an LLM slips and accidentally provides individual users with disapproved insights about Taiwan or Tiananmen, those slip-ups do not fundamentally threaten the party’s ideological control in the way that viral social media posts do.

In the Crossroads Era, frontier AI safety appears to be a rising concern, with Xi warning in a Politburo study session that AI could bring unprecedented risks and challenges (“带来前所未遇风险挑战”).86 Around the time of DeepSeek-R1’s release, sixteen companies announced domestic AI safety commitments that bear substantial similarities to commitments made by companies from around the world at the May 2024 AI summit in Seoul.87 A few weeks later, the Chinese AI Safety and Development Association (CnAISDA) was launched on the sidelines of the Paris summit.88 CnAISDA describes itself as China’s counterpart to AI Safety Institutes and credibly claims strong support from the Chinese government.89 And, in this era, China has more concretely pursued potential AI safety mitigation measures, such as technical monitoring, risk warning, and emergency response systems.90 These ideas have been expounded on in the context of building out the AI+ initiative: some Chinese scholars have pointed out that the “addition” of new capabilities will need to be complemented by the “subtraction” of unsafe practices and ungoverned experimentation.91 This dialectical addition-subtraction framework, frequently used in policy discussions of benefits (addition) and risks (subtraction), could indicate that discussion of frontier AI risks is spreading more broadly throughout the Chinese AI policy ecosystem.

The Crossroads Era thus presents a unique inflection point in China’s cyclical AI governance pattern. Unlike previous cycles where economic strength clearly enabled control or economic weakness demanded growth-focused policies, this era features technological confidence emerging against a backdrop of economic fragility. China’s leadership may face a critical choice: whether to prioritize the control impulses that historically accompany technological strength—potentially constraining the very innovation that enabled DeepSeek’s breakthrough—or maintain the pragmatic flexibility that characterized the Catch-Up Era. That decision may well determine not only China’s AI trajectory but the broader global competition for AI leadership.

Next Steps for China’s AI Policy

In building its AI policy regime, China will once again attempt to strike what it perceives to be a balance between control and growth. The country has shrunk its capabilities gap with the United States and demonstrated greater willingness to exert control over its top AI researchers. But the economic engine needed to power increasingly capable models and diffuse the technology domestically faces substantial threats from low investor confidence and restricted access to high-end hardware produced by the United States and its allies and partners.

The next few months will offer an early indication of China’s approach to navigating these control and growth imperatives. China has already signaled it intends to accelerate the development and improvement of “relevant laws, regulations, policy systems, application specifications, and ethical guidelines.”92 China’s economy will directly shape the speed and scope of these changes: if China’s economy improves, it may feel better positioned to impose more control measures. In the interim, how China navigates these competing imperatives will help determine its ability to shape global AI markets and governance, with enormous national power at stake.

Acknowledgments

We thank Pavlo Zvenyhorodskyi, Research Analyst in the Technology and International Affairs Program, for research assistance which greatly enhanced this paper. The writers acknowledge the use of LLM tools for preliminary desk research and clarificatory editing.

Notes

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.