When I first had coffee with Facebook’s head of public policy in India, she was using a shared workspace. Before joining Facebook, she had led the policy practice of one of India’s most powerful law firms and was on friendly terms with judges, ministers, and other influential people. It was surprising to find her in a small office in a shared workspace, with just one administrative assistant for support. Within a couple of years, she had moved to a suite with a beautiful view in one of Delhi’s finest luxury hotels and acquired an expanding team of seasoned lawyers and policy professionals. Within five years, the public policy team had grown so large that it had its own plush office in the heart of Delhi. This expansion of influence within Delhi mirrored the expansion of Meta’s (previously Facebook) influence in India. If it was difficult to be in an Indian city without relying heavily on Big Tech’s products, it was impossible to work on technology policy in Delhi without being swept into currents of change created by Big Tech.1
People across the world are grappling with a few global technology companies’ domination of their public spheres and increasingly of other spheres of social, economic, and political engagement. In her essay on the “algorithmic colonization of Africa,” Abeba Birhane pointed out that most of Africa’s digital infrastructure is owned and controlled by major Western technology companies, and she further questioned how relevant artificial intelligence (AI) tools from the West are in other contexts.2 This is a question that I also asked in my 2019 chapter on AI and the Global South, using the architecture of buildings and climate as a metaphor for how design should be based on local contexts.3 Birhane and I have described problems emerging from global AI’s one-size-fits-all models, and these concerns continue to be relevant.4 Other authors have also argued that acontextual approaches to AI are unhelpful.5 I am using the term AI here to refer to automated decisionmaking, automated tools,6 and automated systems,7 including the products, processes, and systems that result in what is known widely as generative AI.8
Before discussing the potential impact of Big Tech’s AI on the Majority World (based on the impact of Big Tech’s social media platforms in countries like Myanmar and the Philippines), it is worth asking why and how Big Tech companies make the choices they make, which lead to this impact.9 In this essay, I focus on AI’s global political economy. I propose thinking about the way in which AI is permeating the world as corporate imperialism.10 This framing highlights the inequalities that feed AI and that its political economy exacerbates. It explains the harms emerging from the largest global tech platforms and the biggest global AI companies.11 I am not arguing that AI’s harms are inevitable, but I am suggesting that they have a major, often overlooked, cause. The framing of problems and goals is an important first step of regulation. If AI’s harms emerge from its global political economy, advocates and regulators may find that coordinated global action, not just local action, is necessary.12
Although specific harms can vary depending on the context, focusing on the influential private actors that cause these harms, and on the forces that drive them, can help regulators, activists, and their allies better understand these risks. It might even help the people working within or with Big Tech companies understand the nature of the industry whose practices they are trying to change. As I have argued in prior work, companies are not homogenous—advocates and regulators can act in ways that empower the teams within companies whose mandate is to make their products and practices less harmful.13 The corporate imperialism lens foregrounds the role and impetus of the industry. It also highlights the vulnerable populations this industry harms in the short term and explains why populations who think they are safe will be harmed in the long term.
Some of the most significant U.S.-based tech companies conceive of, design, and control AI-powered products. They do so using the labor of people, and through the extraction of materials, from several countries around the world. AI’s supply chains and markets are global. Kate Crawford and Vladan Joler described the life cycle of an Amazon Echo device, rendering visible the informal laborers who mine lithium for the device in countries like Bolivia, Argentina, and Chile.14 They also highlighted the role of indentured labor that sources the minerals for AI’s physical infrastructure, including children who risk their lives mining cobalt in the Democratic Republic of the Congo.15 This infrastructure is built using strictly controlled and sometimes dangerous manufacturing and assembly processes in Chinese factories, and informal physical laborers ship these materials and eventually clean the (often toxic) physical waste that these systems generate.16 In addition to physical laborers, cognitive workers, who are often located in developing countries and working in challenging environments, are recruited to label AI training datasets.17
Policy conversations about AI and algorithmic decisionmaking tend not to extend to the physical infrastructure, materials, and labor that make these systems possible.18 In addition to the populations affected by algorithmic decisionmaking, the people who build and help maintain the decisionmaking systems are also affected by AI’s business models. The term machine learning erases much of the human labor necessary to teach machines.19 By this, I do not refer to the highly paid engineers who design these systems, but rather to the outsourced cognitive labor that is a feature of AI’s supply chains as a result of informational capitalism.20
The outsourcing of cognitive labor arguably started with call centers and business process outsourcing.21 Social media companies then used outsourcing for cost-effective content moderation. Outsourcing is now used to create labeled datasets to train AI models, including large language models (LLMs). For instance, OpenAI used Kenyan workers, hired through an outsourcing company called Sama, to label toxic content for ChatGPT.22 These workers were paid less than $2 an hour to view and label abusive and disturbing content for hours.23 They received limited time to make decisions, and in contradiction to the company’s assertions, the workers said that the mental health resources made available to them were inadequate.24
Other ways in which global AI companies engage with the developing world include collection of data, experimentation, and the sale of algorithmic products and services.25 According to Business Today, Bing’s AI chatbot Sydney was initially piloted in India, and it appears that feedback from Indians about the LLM’s tendency toward disturbing conversations was ignored until Kevin Roose famously wrote about it in the New York Times.26 This shows that the major AI companies, like the major social media companies, are already more attentive to the voices of those who hold power.27 Even if this is not surprising, it should be of concern, especially since, fairly recently, Facebook’s structural inability to hear marginalized voices resulted in its contributing to the Myanmar genocide.28 It would be unfortunate if AI companies, and those who are interested in regulating them, failed to learn from this history.
In addition to making their models available to businesses and consumers, the companies enter domains that were previously considered the preserve of the public sector. Linnet Taylor and Dennis Broeders were among the earliest to express concerns about the use of privately held social quantification systems for the delivery of public services, and their concerns apply to algorithmic systems.29 Facial recognition systems are used at airports, for crowd management, and for predictive policing. In the Argentinian province of Salta, the Ministry of Early Childhood entered into a partnership with Microsoft to predict which girls and young women were likely to have teenage pregnancies, according to reporting from the Not My AI partnership between Coding Rights and activist Paz Peña.30 This partnership attracted wide criticism, and many parts of the system remain difficult to understand, including why the project focused on women, why and how causal links were drawn between the data collected and the likelihood of pregnancy, and what decisions were to be made about the girls and young women as a consequence of this determination.31
The creation of databases should also raise concerns about epistemic injustice from datasets that encode certain ways of seeing society and human relationships.32 For example, consider the epistemic injustice resulting from databases that treat gender as a binary.33 Similarly, datasets designed by those who have little or no contextual knowledge may misdescribe communities.34 For example, a dataset designed for nuclear families, consisting of a married man and woman and their children, may fail to adequately capture relationships in societies where other familial ties, such as relationships between women and their families of birth, are more significant. AI systems trained on this data can perpetuate the problem.35 Additional harms, including cultural imperialism, might result from these AI systems.36 In the context of AI and big data, corporate informational imperialism may mean that companies work with states to determine how people and societies are seen.37 These companies’ influence will likely increase as states outsource public services to these corporations, with practices like predictive policing, AI for development, and smart cities on the rise.
Different lenses are used to understand the roles and choices of technology companies. Debates about geopolitics and technology center on the roles of countries and blocs like the United States, the European Union (EU), and China, as well as sometimes India and Brazil.38 Technocentric debates focus on the choices of individual companies. However, these companies make choices within a certain political economy, and the forces that drive the companies to seek new markets, cut costs, and compete with each other to increase their profit margins are key to understanding why these companies choose certain courses of action.39 Acknowledging that a competitive company will try to reduce its labor costs leads to more actionable questions about OpenAI’s reliance on Kenyan data labelers, for instance. If observers see that such a company is driven to compete ferociously for new markets and to develop new products and services, they might understand why companies create and sell capabilities like facial recognition systems.
Nick Couldry and Ulises Mejias’s framework of “data colonialism” is a helpful description of the exploitative relationship between companies and people in the context of the data economy.40 In short, their book, The Costs of Connection, suggested that capitalist extraction now appropriates “human life through its conversion into data.”41 Data colonialism can take place in several ways, including through the state-controlled model that China is building. Couldry and Mejias highlighted the fact that data empires are concentrated in the United States and in China and that these countries use trade negotiations to ensure that developing countries continue to permit the “social quantification” business.42 While this work is insightful and has offered helpful ways to think about the transnational nature of AI, I argue that closer attention to the companies and the forces that drive them (as opposed to focusing on states) is essential. This is a necessary step for my argument in future work that lawmakers and advocates should be more attentive to how Big Tech shapes the global legal order to protect its business interests.43
The global technology companies with the greatest influence, with their informational capitalist business models, tend to engage, through corporate imperialism, in data colonialism as well as the other kinds of harm that I described above. This framing focuses on the power and influence that these companies wield. Corporate imperialism is arguably a feature of capitalism. As Hannah Arendt wrote, “imperialism was born when the ruling class in capitalist production came up against national limitations to its economic expansion.”44 However, unlike politicians who operate within territories and election cycles, capitalists operate in “continuous space and time,” which arguably allows them to accumulate power and be agile across borders.45 Scholars have highlighted the connection between capitalism and imperialism for decades.46 In the context of the global information economy, and in combination with informational capitalism and data colonialism, corporate imperialism has explanatory value and added consequences.
These companies lobby governments to create laws that are favorable to them and cultivate relationships with other parties, including elites and civil society, who might help shape laws in company-friendly ways.47 They influence the discourse from which regulators draw their priorities.48 Advocates recognize companies as influential actors, as do international organizations. Microsoft has an office at the UN, for example.49 This is why my opening story about Facebook’s head of public policy, and the growth of her influence and team alongside the company’s growth in influence, is significant.
Global informational capitalism and corporate imperialism drive companies to take advantage of the uneven geographical conditions of capital accumulation, which explains why AI’s business models inevitably harm vulnerable people who cannot access state protection.50 If one state legislates to protect data annotators, companies can move to a different state that permits their exploitative practices. If one state bans an AI product, companies can sell it in a more permissive market. Companies are driven to compete to maximize profits, and they seek access to new markets as they reach the limits of their initial markets. They exploit resources to increase their profit margins.
Given the distribution of capital, it is unsurprising that a significant proportion of the demand for digital labor comes from the so-called Global North, while the supply of labor tends to come from what was once called the Global South.51 More than one scholar focusing on global capitalism has noted that the exploited workforces tend to be concentrated in the Global South.52 After painstaking empirical work on the people who annotate and verify data for three major platforms operating in Latin America, Julian Posada concluded that American technology companies, often through intermediaries, use poorly paid workers in countries in crisis as a source of cheap labor.53 The platformization of labor breaks down traditional organizing structures: workers are unable to bargain with the companies since they see themselves as being in competition, not in solidarity, with each other.54 There are no institutional or structural solutions to protect them. Kalindi Vora argued that there are “structural inequalities that result from histories of colonial, racial, and gender dispossession that map directly onto new technological platforms.”55
The lucrative business of data annotation, through which companies hire Kenyan workers for less than $2 an hour, is an example of how the exploitation of cognitive labor is a part of the creation of global AI systems.56 While the exploitation of physical labor—whether that of the miners and assembly line workers or those who clean the waste that these systems generate—is also a part of AI’s value chain, informational capitalism has inherited this set of practices from industrial capitalism. The exploitation of cognitive labor is characteristic of informational capitalism but derives from, and is co-extensive with, industrial capitalism’s systems for labor exploitation. Industrial capitalism also exploited immaterial labor such as affective and biological labor.57 Informational capitalism has created new practices to do so. These new practices can be more easily understood in terms of their connection to capitalism.58
Uneven legal protections in states around the world leave some populations vulnerable to exploitation and other harms resulting from the companies’ practices. The history of capitalism is full of such examples. Consider product liability laws in the United States, which emerged from mass torts or from regulators: products like banned pesticides that cannot be sold or used in the United States are exported to other markets.59 Apart from the harm that this causes to people in states without regulatory safeguards, the risk can boomerang back to the United States through products further down the value chain.60 When the United States imports produce grown using banned pesticides, its population is also affected. While internationally coordinated action is taken from time to time, as in the case of the insecticide DDT, capitalism’s legal order permits risky products to find markets that are unable to offer resistance.61 Informational capitalism is currently governed by a similarly permissive legal order—if the United States and EU regulate to protect their populations, harmful AI products will continue to be developed and inflicted on the people of other states. Powerful states’ legislators and regulators might wish to consider, right from the start, how to identify and restrict the algorithmic equivalent of DDT.
Even those who are not concerned about people at a distance should be concerned about the proliferation of harmful algorithmic systems and datasets that cause downstream problems.62 An AI system encoded with outdated and discriminatory information perpetuates harm. Take, for example, the databases that have designated certain freedom fighters and opposition leaders around the world as terrorists. Imagine how this could affect someone such as former South African President Nelson Mandela, who was once designated as a terrorist and later underwent a change of status.63 AI systems trained on old datasets may not encode this change of status and similar changes of the corresponding statuses of people connected with such figures, meaning that such systems may still flag all these people as potential terrorists on the basis of outdated information. Even seemingly safe and stable states go through phases of discrimination and persecution and will probably eventually be affected by harmful AI products that are created and available because they are marketable.64 Even if the products are never used in the United States or the EU, persecution, oppression, and violence elsewhere in the world will be visible at the borders of more privileged countries in the form of asylum-seekers, and an increase in global crime and terrorism.
To avoid future problems, the discourse about AI regulation needs to expand its currently myopic view of risk, and of who is at risk, to cover the labor harms as well as the data and algorithmic injustices that will certainly result from global AI systems.65 Such discourse needs to account for a world in which companies rely on data annotators in one country, experiment in another country, and pilot potentially harmful products in a third country and a fourth.
Circling back to the theme with which I opened, it is crucial that regulators do not leave companies to lead the discourse about AI’s legal order. By regulators, I mean all regulators who possess the power and resources to negotiate with these companies. Although the risks I highlight here may seem distant and feel like someone else’s problem, AI is global. American companies understand this, but American regulators do not seem to. Risk-oriented regulation should engage with the true nature of the AI products shaped by Big Tech while identifying AI’s harms. As technology globalizes, regulators need to expand their focus and engage with the technology in its entirety, not just with the outcomes that appear to be, but might not actually be, the most proximate and salient.
Chinmayi Arun is the executive director of the Information Society Project and a lecturer and research scholar at Yale Law School. Her research focuses on platform governance, social media, algorithmic decisionmaking, the data economy, and privacy, within the larger universe of questions raised by law’s relationship with the information society. She is interested in how these questions affect marginalized populations, especially in the Majority World. Before arriving at Yale, Arun was an assistant professor of law at two of the most highly regarded law schools in India. During that time, she also founded and led the Centre for Communication Governance at National Law University Delhi. Arun has served as a human rights officer (temporary appointment) at the United Nations, where she worked on questions of privacy, online speech, and artificial intelligence.
The author thanks Aubra Anthony, Elina Noor, George Perkovich, Jack Balkin, Kalindi Vora, Lakshmee Sharma, and everyone present at the Carnegie Endowment for International Peace’s workshop on the project on global majority perspectives on AI governance for helpful feedback through different stages of this essay. The author is responsible for any errors.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.