
What a Chinese Regulation Proposal Reveals About AI and Democratic Values

Despite its authoritarian origins, the draft offers lessons for building a truly democratic framework.

Published on May 16, 2023

Last month, China’s internet regulator released its proposal for regulating generative AI systems such as chatbots. The proposal is stocked with provisions straight off the wish list of advocates for aligning AI with democratic values, from prohibitions on discrimination to legal liability for developers. Yet the regulation is clearly intended to strengthen China’s authoritarian system of government. By examining the paradoxes inherent in the proposal, developers, analysts, and governments can draw out lessons for building a truly democratic framework for AI.

Protecting Individuals While Consolidating State Control

Certain parts of the draft regulation would make real progress in shielding millions of people from the potential harms of AI, if enforced in a uniform and meaningful way. Privacy requirements would prohibit profiling based on user activity and the retention of data that could lead to practices such as reidentification. Transparency requirements would make it easier to identify potential or actual rights violations stemming from AI systems. Accountability and redress measures would allow individuals to notify companies when content is generated that infringes on their likeness or privacy. All of these requirements bring to mind principles that are often promoted as supporting democratic values.

At the same time, the draft demonstrates an obvious intention to strengthen government control over China’s technology and information ecosystems. Under the rules, generated content would be required to “reflect the Socialist Core Values.” Content that contributes to “subversion of state power” would be banned. The draft’s vague language would give regulators substantial leverage to impose their will on tech companies. Requirements are focused on the private-sector actors developing and deploying generative AI systems; absent is any description of government AI use.

The draft regulation stakes out strong positions on two of the most contested questions in the AI governance debate. It strongly favors protecting society (and the government) from risks posed by AI systems and tech companies, rather than applying a laissez-faire governance approach that gives the private sector substantial latitude in developing new AI products. And, on the question of whether developers or the government should lead in defining rules for AI, the regulation comes down squarely on the side of shifting power to the government.

Democratic Values Transcend High-Level Principles

On paper, there’s little real international disagreement on the high-level AI governance principles described in China’s draft regulation. In fact, many of the regulation’s proposed measures map neatly onto principles, such as transparency, respect for individual rights, fairness, and privacy, described in the consensus international AI policy instruments that are often sold as affirming democratic values.

The devil is in the details, of course. Coming in at a mere 2,300 characters, the draft regulation has little room to give precise guidance on what nebulous phrases like “false information” or “harm to . . . physical and mental health” actually mean. By engaging mostly at the level of broad principles, it leaves substantial room for discretion in interpretation and enforcement.

China’s draft regulation illustrates the extent to which high-level policy guidance and principles obscure the implementation decisions and trade-offs that determine whether AI is developed and used in ways that affirm democratic values. The U.S. Blueprint for an AI Bill of Rights, for instance, contains similar high-level principles describing transparency, remediation, and fairness. But U.S. government agencies implementing those principles, driven by different governance structures and oversight mechanisms, will likely produce different outcomes than China’s enforcement arms. Likewise, an initial discussion draft of the Council of Europe’s proposed Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law contains few high-level principles that are themselves controversial. But the convention’s proposed follow-up mechanism might allow signatories to call each other out when specific implementation steps don’t faithfully reflect the values they were intended to codify, allowing the convention to do a more thorough job of strengthening democratic values.

China’s Approach to AI Geostrategic Competition

Many in the United States argue that AI is so integral to economic and military power that efforts to constrain or reshape its development would be a major strategic blunder. China’s recently proposed regulations, however, suggest that it does not share this view.

Two of the most notable features of the draft are the strength and lack of nuance in many of its provisions. An article on bias prohibits providers from “engag[ing] in content generation that is discriminatory based on a user’s race, nationality, sex, etc.” Meeting this demand is technically infeasible even if developers understood how “discriminatory” was meant to be interpreted. Another article requires that generated content be “true and accurate.” Missing are precise definitions of “discriminatory,” “true and accurate,” and other nebulous, know-it-when-you-see-it concepts with obvious political and cultural overtones.

In defining strict prohibitions on certain types of content, the proposed requirements strike at the very heart of the modern machine learning paradigm that powers systems like ChatGPT. Under this paradigm, models learn to replicate statistical patterns in datasets so massive that engineers have little hope of ensuring that sexist, racist, or otherwise objectionable patterns are not learned along with the essential structure of language. Technical advances that would allow engineers to guarantee that generative AI systems will never produce undesirable outputs could be many years away, if they ever materialize.
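To see why such guarantees are elusive, consider a minimal sketch in Python (a toy illustration invented for this piece, not code from any real system): even the simplest statistical language model faithfully reproduces whatever associations, objectionable or otherwise, appear in its training text.

```python
# Toy illustration (not a real AI system): a model that predicts the
# next word from co-occurrence statistics absorbs whatever patterns,
# objectionable or not, its training text happens to contain.
from collections import Counter, defaultdict

# A tiny, deliberately skewed "training corpus."
corpus = ("the nurse said she was tired . "
          "the nurse said she was late . "
          "the engineer said he was busy .").split()

# Count which word follows each two-word context (a trigram model).
next_word = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(a, b)][c] += 1

def predict(a, b):
    """Return the most frequent continuation seen after (a, b)."""
    return next_word[(a, b)].most_common(1)[0][0]

# The model has faithfully learned the corpus's gendered association:
print(predict("nurse", "said"))     # -> "she"
print(predict("engineer", "said"))  # -> "he"
```

The toy model never “decides” to discriminate; the bias is a statistical property of its data. Modern systems are vastly more sophisticated, but they rest on the same principle of pattern replication, which is why a blanket prohibition on discriminatory outputs amounts to an open research problem rather than a compliance checkbox.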

This forces China into a difficult trade-off between speeding the deployment of generative AI systems and maintaining a tight grip on its technology and information ecosystems. Interpreting the proposal’s strict requirements literally would all but prohibit the deployment of a technology that, if boosters are correct, could generate significant economic value. But if it allows developers to follow regulators’ strictures only loosely, the government will let content slip through its controls that could harm individuals or weaken its hold on the information environment.

Some portions of the draft suggest a more flexible approach to enforcement than most of its provisions imply. For example, one article gives developers three months to remediate rule-breaking content. But China’s release of these rules, even in draft form, demonstrates its skepticism of the theory that broader geostrategic imperatives demand the unbridled development of generative AI systems.

Implementing Democratic Values in AI

What lessons should states wanting to genuinely promote democratic values draw from this proposal?

First, the draft regulation illustrates that many of the consensus AI principles that are often promoted as affirming democratic values can just as easily be used to protect authoritarian states. Ensuring that AI systems serve broader democratic goals requires not only technical proposals but also strong institutions and oversight mechanisms.

Lawmakers should create transparency measures that allow civil society bodies and the public to understand how regulators use enforcement discretion. Regulators and institutions of accountability need additional resources to double down on their mission of protecting consumers, individuals, and communities: building public trust in both institutions and AI itself will require demonstrating that applications live up to high rhetorical principles such as “fair,” “accountable,” and “human-centered.” Follow-up mechanisms in international agreements could provide a forum for democratic countries to critique one another’s practices, akin to treaty bodies for human rights instruments. Governments and companies should create broad and inclusive processes to collect input from diverse publics about what they want from new technology.

Second, many of the most consequential uses of AI systems, from the provision of critical services to policing and surveillance, are located within governments. Regulations that apply only to the private sector, like China’s, completely ignore some of the strongest potential negative impacts on liberal and egalitarian ideals of democracy. Recent U.S. efforts to increase transparency and oversight of government AI use (though currently lacking in implementation) and measures applicable to high-risk government AI uses (such as in the EU’s proposed AI Act) are welcome developments in this regard. But debate over how rules for AI should apply to law enforcement, for example in the U.S. Blueprint for an AI Bill of Rights and the proposed EU AI Act, demonstrates that these issues are rarely clear-cut.

Finally, the draft regulation illustrates potential flaws in the narrative of geostrategic competition surrounding AI. Embracing AI systems that erode individual privacy, perpetuate discrimination, and widen inequality might only undermine public trust in industry and government rather than further the United States’ strategic technological advantage. Promoting AI systems that propagate hate speech and misinformation would erode the ability of democratic societies to engage in reasoned and respectful deliberation.

Shaping the development of AI so that it does not weaken these fundamental pillars of longer-term strategic competitiveness will require domestic action. For example, steps to ensure that existing laws, such as those governing liability and discrimination, are not rendered less effective by the opacity and technical complexity of AI could reduce the externalization of AI’s harms. Funding for sector-specific regulators to keep pace with the impacts of algorithms in their domains of competence could increase the government’s capacity to develop flexible and effective regulation when it is needed. A combination of government R&D funding and regulatory measures could incentivize the types of AI research that would broadly serve society.

Success in this wider conception of AI strategic competition requires careful attention to implementation details, oversight of the government’s use of algorithms, and intelligent domestic regulation. Democratic countries should ensure that they do not engage in a short-term race to advance the technical development of brittle and unregulated AI systems that would produce only a Pyrrhic victory.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.