In early 2018, one of Malaysia’s key security forces made a startling announcement. The Auxiliary Force, a branch of the Royal Malaysia Police Cooperative, had entered into a partnership with the Chinese company Yitu Technology to equip the Force’s officers with facial-recognition capabilities. Security officials would be able to rapidly compare images captured by live body cameras with images from a central database. The head of the Auxiliary Force explained that this use of artificial intelligence (AI) was a “significant step forward” in efforts to improve public security. He also noted that his agency planned eventually to enhance the body-camera system so as to enable “real-time facial recognition and instant alerts to the presence of persons of interest from criminal watch lists.”1

Neighboring Singapore soon followed suit, declaring its plans to launch a pilot camera-installation project with the end goal of embedding facial-recognition technology on every public lamppost. The project is ostensibly aimed at facilitating “crowd analytics” and assisting with antiterror operations. Privacy advocates such as the Electronic Frontier Foundation have warned that this technology will enable governments to target political opponents and suppress free expression, but their protests have been to no avail.2

Steven Feldstein
Steven Feldstein is a nonresident fellow in Carnegie’s Democracy, Conflict, and Governance Program, where he focuses on issues of democracy, human rights, governance, rule of law, political reform, security, emerging economies, and Sub-Saharan Africa.

Meanwhile, in April 2018, the AI startup CloudWalk Technology, based in the Chinese city of Guangzhou, reportedly signed a deal with Zimbabwe’s government to provide facial-recognition technology for use by state-security services and to build a national image database. CloudWalk is also known for supplying facial-recognition and identity-verification technology to police forces in China’s Xinjiang region, one of the most heavily repressed regions in the world. Its new African partnership falls under the umbrella of the multicontinental Chinese infrastructure and investment scheme known as the Belt and Road Initiative (BRI).3 CloudWalk’s offerings threaten to exacerbate political repression in Zimbabwe, where authorities recently carried out a violent postelection crackdown.

These are not isolated examples. Around the world, AI systems are showing their potential for abetting repressive regimes and upending the relationship between citizen and state, thereby accelerating a global resurgence of authoritarianism. The People’s Republic of China (PRC) is driving the proliferation of AI technology to authoritarian and illiberal regimes, an approach that has become a key component of Chinese geopolitical strategy.

The concept of AI has proven resistant to exact definition. One widespread assertion is that the goal of AI is to “make machines intelligent,” a concept often explained with reference to human intelligence.4 Others, such as Jerry Kaplan, question the usefulness of such analogies. Kaplan maintains that whether machines are “self-aware as people are” is irrelevant. Instead, the essence of AI can be boiled down to a computer’s “ability to make appropriate generalizations in a timely fashion based on limited data.”5

This article does not seek to resolve these disputes. Rather, it focuses on the practical effects of new technologies that are coming into circulation thanks to three major developments: 1) the increased availability of big data from public and private sources; 2) enhanced machine learning and algorithmic approaches; and 3) correspondingly advanced computer processing. (Machine learning, which can be applied to tasks that range from winning Go matches to identifying pathogens, is an iterative statistical process in which an AI system is introduced to a set of data and “tries to derive a rule or procedure that explains the data or can predict future data.”6) The import of this technology for the world’s authoritarians and their democratic opponents alike is growing ever clearer. In recent years, autocracies have achieved new levels of control and manipulation by applying advanced computing systems to the vast quantities of unstructured data now available online and from live video feeds and other sources of monitoring and surveillance. From facial-recognition technologies that cross-check real-time images against massive databases to algorithms that crawl social media for signs of opposition activity, these innovations are a game-changer for authoritarian efforts to shape discourse and crush opposition voices.

AI is not the only category of new technology increasingly being harnessed by autocrats for political gain. Other communications and information technologies, frequently used in tandem with AI, are having equally alarming effects. These include advanced biometrics, state-based cyber hacking, and information-distortion techniques. This article highlights the repressive impact of AI technology for two reasons. First, AI provides a higher-order capability that integrates and enhances the functions of other technologies in startling new ways. Second, mainstream understanding of the policy impact of AI technology remains limited; policy makers have yet to seriously grapple with AI’s repressive implications....


This article was originally published in the Journal of Democracy.