Steve Feldstein
How Artificial Intelligence Is Reshaping Repression
Around the world, AI systems are showing their potential for abetting repressive regimes and upending the relationship between citizen and state, thereby accelerating a global resurgence of authoritarianism.
Source: Journal of Democracy
In early 2018, one of Malaysia’s key security forces made a startling announcement. The Auxiliary Force, a branch of the Royal Malaysia Police Cooperative, had entered into a partnership with the Chinese company Yitu Technology to equip the Force’s officers with facial-recognition capabilities. Under the partnership, security officials would be able to rapidly compare images caught by live body cameras with images from a central database. The head of the Auxiliary Force explained that this use of artificial intelligence (AI) was a “significant step forward” in efforts to improve public security. He also noted that his agency planned eventually to enhance the body-camera system so as to enable “real-time facial recognition and instant alerts to the presence of persons of interest from criminal watch lists.”1
Neighboring Singapore soon followed suit, declaring its plans to launch a pilot camera-installation project with the end goal of embedding facial-recognition technology on every public lamppost. The project is ostensibly aimed at facilitating “crowd analytics” and assisting with antiterror operations. Privacy advocates such as the Electronic Frontier Foundation have warned that this technology will enable governments to target political opponents and suppress free expression, but their protests have been to no avail.2
Meanwhile, in April 2018, AI startup CloudWalk Technology, based in the Chinese city of Guangzhou, reportedly signed a deal with Zimbabwe’s government to provide facial-recognition technology for use by state-security services and to build a national image database. CloudWalk is also known for supplying facial-recognition and identity-verification technology to police forces in China’s Xinjiang region, one of the most heavily repressed regions in the world. Its new African partnership falls under the umbrella of the multicontinental Chinese infrastructure and investment scheme known as the Belt and Road Initiative (BRI).3 CloudWalk’s offerings threaten to exacerbate political repression in Zimbabwe, where authorities recently carried out a violent postelection crackdown.
These are not isolated examples. Around the world, AI systems are showing their potential for abetting repressive regimes and upending the relationship between citizen and state, thereby accelerating a global resurgence of authoritarianism. The People’s Republic of China (PRC) is driving the proliferation of AI technology to authoritarian and illiberal regimes, an approach that has become a key component of Chinese geopolitical strategy.
The concept of AI has proven resistant to exact definition. One widespread assertion is that the goal of AI is to “make machines intelligent,” a concept often explained with reference to human intelligence.4 Others, such as Jerry Kaplan, question the usefulness of such analogies. Kaplan maintains that whether machines are “self-aware as people are” is irrelevant. Instead, the essence of AI can be boiled down to a computer’s “ability to make appropriate generalizations in a timely fashion based on limited data.”5
This article does not seek to resolve these disputes. Rather, it focuses on the practical effects of new technologies that are coming into circulation thanks to three major developments: 1) the increased availability of big data from public and private sources; 2) enhanced machine learning and algorithmic approaches; and 3) correspondingly advanced computer processing. (Machine learning, which can be applied to tasks that range from winning Go matches to identifying pathogens, is an iterative statistical process in which an AI system is introduced to a set of data and “tries to derive a rule or procedure that explains the data or can predict future data.”6) The import of this technology for the world’s authoritarians and their democratic opponents alike is growing ever clearer. In recent years, autocracies have achieved new levels of control and manipulation by applying advanced computing systems to the vast quantities of unstructured data now available online and from live video feeds and other sources of monitoring and surveillance. From facial-recognition technologies that cross-check real-time images against massive databases to algorithms that crawl social media for signs of opposition activity, these innovations are a game-changer for authoritarian efforts to shape discourse and crush opposition voices.
AI is not the only category of new technology increasingly being harnessed by autocrats for political gain. Other communications and information technologies, frequently used in tandem with AI, are having equally alarming effects. These include advanced biometrics, state-based cyber hacking, and information-distortion techniques. This article highlights the repressive impact of AI technology for two reasons. First, AI provides a higher-order capability that integrates and enhances the functions of other technologies in startling new ways. Second, mainstream understanding of the policy impact of AI technology remains limited; policy makers have yet to seriously grapple with AI’s repressive implications....
This article was originally published in the Journal of Democracy.
About the Author
Senior Fellow, Democracy, Conflict, and Governance Program
Steve Feldstein is a senior fellow at the Carnegie Endowment for International Peace in the Democracy, Conflict, and Governance Program. His research focuses on technology, national security, the global context for democracy, and U.S. foreign policy.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.