Transnational AI and Corporate Imperialism

People across the world are grappling with a few global technology companies’ domination of their public spheres and increasingly of other spheres of social, economic, and political engagement.

by Chinmayi Arun
Published on October 8, 2024

Introduction

When I first had coffee with Facebook’s head of public policy in India, she was using a shared workspace. Before joining Facebook, she had led the policy practice of one of India’s most powerful law firms and was on friendly terms with judges, ministers, and other influential people. It was surprising to find her in a small office in a shared workspace, with just one administrative assistant for support. Within a couple of years, she had moved to a suite with a beautiful view in one of Delhi’s finest luxury hotels and acquired an expanding team of seasoned lawyers and policy professionals. Within five years, the public policy team had grown so large that it had its own plush office in the heart of Delhi. This expansion of influence within Delhi mirrored the expanding influence of Meta (previously Facebook) in India. If it was difficult to be in an Indian city without relying heavily on Big Tech’s products, it was impossible to work on technology policy in Delhi without being swept into currents of change created by Big Tech.1

People across the world are grappling with a few global technology companies’ domination of their public spheres and increasingly of other spheres of social, economic, and political engagement. In her essay on the “algorithmic colonization of Africa,” Abeba Birhane pointed out that most of Africa’s digital infrastructure is owned and controlled by major Western technology companies, and she further questioned how relevant artificial intelligence (AI) tools from the West are in other contexts.2 This is a question that I also asked in my 2019 chapter on AI and the Global South, using the architecture of buildings and climate as a metaphor for how design should be based on local contexts.3 Birhane and I have described problems emerging from global AI’s one-size-fits-all models, and these concerns continue to be relevant.4 Other authors have also argued that acontextual approaches to AI are unhelpful.5 I am using the term AI here to refer to automated decisionmaking, automated tools,6 and automated systems,7 including the products, processes, and systems that result in what is known widely as generative AI.8

Before discussing the potential impact of Big Tech’s AI on the Majority World (based on the impact of Big Tech’s social media platforms in countries like Myanmar and the Philippines), it is worth asking why and how Big Tech companies make the choices that lead to this impact.9 In this essay, I focus on AI’s global political economy. I propose thinking about the way in which AI is permeating the world as corporate imperialism.10 This framing highlights the inequalities that feed AI and that its political economy exacerbates. It explains the harms emerging from the largest global tech platforms and the biggest global AI companies.11 I am not arguing that AI’s harms are inevitable, but I am suggesting that they have a major, often overlooked, cause. The framing of problems and goals is an important first step of regulation. If AI’s harms emerge from its global political economy, advocates and regulators may find that coordinated global action, not just local action, is necessary.12

Although specific harms can vary depending on the context, focusing on the influential private actors that cause these harms, and on the forces that drive them, can help regulators, activists, and their allies better understand the sources of these risks. It might even help the people working within or with Big Tech companies understand the nature of the industry whose practices they are trying to change. As I have argued in prior work, companies are not homogenous—advocates and regulators can act in ways that empower the teams within companies whose mandate is to make their products and practices less harmful.13 The corporate imperialism lens foregrounds the role and impetus of the industry. It also highlights the vulnerable populations this industry harms in the short term and explains why populations who think they are safe will be harmed in the long term.

Means and Ends

Some of the most significant U.S.-based tech companies conceive of, design, and control AI-powered products. They do so using the labor of people, and through the extraction of materials, from several countries around the world. AI’s supply chains and markets are global. Kate Crawford and Vladan Joler described the life cycle of an Amazon Echo device, rendering visible the informal laborers who mine lithium for the device in countries like Bolivia, Argentina, and Chile.14 They also highlighted the role of indentured labor that sources the minerals for AI’s physical infrastructure, including children who risk their lives mining cobalt in the Democratic Republic of the Congo.15 This infrastructure is built using strictly controlled and sometimes dangerous manufacturing and assembly processes in Chinese factories, and informal physical laborers are involved in shipping these materials and eventually in cleaning the (often toxic) physical waste that these systems generate.16 In addition to physical laborers, cognitive workers, who are often located in developing countries and working in challenging environments, are recruited to label AI training datasets.17

Policy conversations about AI and algorithmic decisionmaking tend not to extend to the physical infrastructure, materials, and labor that make these systems possible.18 In addition to the populations affected by algorithmic decisionmaking, the people who build and help maintain these decisionmaking systems are also affected by AI’s business models. The term machine learning erases much of the human labor necessary to teach machines.19 By this, I do not refer to the highly paid engineers who design these systems, but rather to the outsourced cognitive labor that is a feature of AI’s supply chains as a result of informational capitalism.20

The outsourcing of cognitive labor arguably started with call centers and business process outsourcing.21 Social media companies then used outsourcing for cost-effective content moderation. Outsourcing is now used to create labelled datasets to train AI models, including large language models (LLMs). For instance, OpenAI used Kenyan workers, hired through an outsourcing company called Sama, to label toxic content for ChatGPT.22 These workers were paid less than $2 an hour to view and label abusive and disturbing content for hours.23 They received limited time to make decisions, and in contradiction to the company’s assertions, the workers said that the mental health resources made available to them were inadequate.24

Other ways in which global AI companies engage with the developing world include collection of data, experimentation, and the sale of algorithmic products and services.25 According to Business Today, Bing’s AI chatbot Sydney was initially piloted in India, and it appears that feedback from Indians about the LLM’s tendencies toward disturbing conversation was ignored until Kevin Roose famously wrote about it in the New York Times.26 This shows that the major AI companies, like the major social media companies, are already more attentive to the voices of those who hold power.27 Even if this is not surprising, it should be of concern, especially since, fairly recently, Facebook’s structural inability to hear marginalized voices resulted in its contributing to the Myanmar genocide.28 It would be unfortunate if AI companies, and those who are interested in regulating them, failed to learn from this history.

In addition to making their models available to businesses and consumers, the companies enter domains that were previously considered the public sector. Linnet Taylor and Dennis Broeders were among the earliest to express concerns about the use of privately held social quantification systems for the delivery of public services, and their concerns apply to algorithmic systems.29 Facial recognition systems are used at airports, for crowd management, and for predictive policing. In the Argentinian province of Salta, the Ministry of Early Childhood entered into a partnership with Microsoft to predict which girls and young women were likely to have teenage pregnancies, according to reporting from the Not My AI project by Coding Rights and activist Paz Peña.30 This partnership attracted wide criticism, and many parts of the system remain difficult to understand, including why the project focused on women, why and how causal links were drawn between the data collected and the likelihood of pregnancy, and what decisions were to be made about the girls and young women as a consequence of this determination.31

The creation of databases should also raise concerns about epistemic injustice from datasets that encode certain ways of seeing society and human relationships.32 For example, consider the epistemic injustice resulting from databases that treat gender as a binary.33 Similarly, datasets designed by those who have little or no contextual knowledge may misdescribe communities.34 For example, a dataset designed for nuclear families, consisting of a married man and woman and their children, may fail to adequately capture relationships in societies where other familial ties, such as relationships between women and their families of birth, are more significant. AI systems trained on this data can perpetuate the problem.35 Additional harms, including cultural imperialism, might result from these AI systems.36 In the context of AI and big data, corporate informational imperialism may mean that companies work with states to determine how people and societies are seen.37 These companies’ influence will likely increase as states outsource public services to these corporations, with practices like predictive policing, AI for development, and smart cities on the rise.

What Is in a Name?

Different lenses are used to understand the roles and choices of technology companies. Debates about geopolitics and technology center on the roles of countries and blocs like the United States, the European Union (EU), and China, as well as sometimes India and Brazil.38 Technocentric debates focus on the choices of individual companies. However, these companies make choices within a certain political economy, and the forces that drive them to seek new markets, cut costs, and compete with each other to increase their profit margins are key to understanding why they choose certain courses of action.39 Acknowledging that a competitive company will try to reduce its labor costs leads to more actionable questions about OpenAI’s reliance on Kenyan data labelers, for instance. If observers see that such a company is driven to compete ferociously for new markets and to develop new products and services, they might understand why companies create and sell capabilities like facial recognition systems.

Nick Couldry and Ulises Mejias’s framework of “data colonialism” is a helpful description of the exploitative relationship between companies and people in the context of the data economy.40 In short, their book, The Costs of Connection, suggested that capitalist extraction now appropriates “human life through its conversion into data.”41 Data colonialism can take place in several ways, including through the state-controlled model that China is building. Couldry and Mejias highlighted the fact that data empires are concentrated in the United States and in China and that these countries use trade negotiations to ensure that developing countries continue to permit the “social quantification” business.42 While this work is insightful and has offered helpful ways to think about the transnational nature of AI, I argue that closer attention to the companies and the forces that drive them (as opposed to focusing on states) is essential. This is a necessary step for my argument in future work that lawmakers and advocates should be more attentive to how Big Tech shapes the global legal order to protect its business interests.43

The most influential global technology companies, with their informational capitalist business models, tend to engage in data colonialism, as well as the other kinds of harm that I described above, through corporate imperialism. This framing focuses on the power and influence that these companies wield. Corporate imperialism is arguably a feature of capitalism. As Hannah Arendt wrote, “imperialism was born when the ruling class in capitalist production came up against national limitations to its economic expansion.”44 However, unlike politicians who operate within territories and election cycles, capitalists operate in “continuous space and time,” which arguably allows them to accumulate power and be agile across borders.45 Scholars have highlighted the connection between capitalism and imperialism for decades.46 In the context of the global information economy, and in combination with informational capitalism and data colonialism, corporate imperialism has explanatory value and added consequences.

These companies lobby governments to create laws that are favorable to them and cultivate relationships with other parties, including elites and civil society, who might facilitate influencing laws in company-friendly ways.47 They influence the discourse through which regulators draw their priorities.48 Advocates recognize companies as influential actors, as do international organizations. Microsoft has an office at the UN, for example.49 This is why my opening story, about the growth of the influence and team of Facebook’s head of public policy alongside the company’s own growth in influence, is significant.

The Effects of Corporate Imperialism

Global informational capitalism and corporate imperialism drive companies to take advantage of the uneven geographical conditions of capital accumulation, which explains why AI’s business models inevitably harm vulnerable people who cannot access state protection.50 If one state legislates to protect data annotators, companies can move to a different state that permits their exploitative practices. If one state bans an AI product, companies can sell it in a more permissive market. Companies are driven to compete to maximize profits, and they seek access to new markets as they reach the limits of their initial markets. They exploit resources to increase their profit margins.

Given the distribution of capital, it is unsurprising that a significant proportion of the demand for digital labor comes from the so-called Global North, while the supply of labor tends to come from what was once called the Global South.51 More than one scholar focusing on global capitalism has noted that the exploited workforces tend to be concentrated in the Global South.52 After painstaking empirical work on the people who annotate and verify data for three major platforms operating in Latin America, Julian Posada concluded that American technology companies, often through intermediaries, use poorly paid workers in countries in crisis as a source of cheap labor.53 The platformization of labor breaks down traditional organizing structures: workers are unable to bargain with the companies since they see themselves as being in competition, not in solidarity, with each other.54 There are no institutional or structural solutions to protect them. Kalindi Vora argued that there are “structural inequalities that result from histories of colonial, racial, and gender dispossession that map directly onto new technological platforms.”55

The lucrative business of data annotation, through which companies hire Kenyan workers for less than $2 an hour, is an example of how the exploitation of cognitive labor is a part of the creation of global AI systems.56 While the exploitation of physical labor—whether that of the miners and assembly line workers or of those who clean the waste that these systems generate—is also a part of AI’s value chain, informational capitalism has inherited this set of practices from industrial capitalism. The exploitation of cognitive labor is characteristic of informational capitalism but derives from, and is co-extensive with, industrial capitalism’s systems for labor exploitation. Industrial capitalism also exploited immaterial labor, such as affective and biological labor.57 Informational capitalism has created new practices to do so. These new practices can be more easily understood in terms of their connection to capitalism.58

Uneven legal protections in states around the world leave some populations vulnerable to exploitation and other harms resulting from the companies’ practices. The history of capitalism is full of such examples. Consider product liability laws in the United States, which emerged from mass torts or from regulators: products like banned pesticides that cannot be sold or used in the United States are exported to other markets.59 Apart from the harm that this causes to people in states without regulatory safeguards, the risk can boomerang back to the United States through products further down the value chain.60 When the United States imports produce grown using banned pesticides, its population is also affected. While internationally coordinated action is taken from time to time, as in the case of the insecticide DDT, capitalism’s legal order permits risky products to find markets that are unable to offer resistance.61 Informational capitalism is currently governed by a similarly permissive legal order—if the United States and the EU regulate to protect their populations, harmful AI products will continue to be developed and inflicted on the people of other states. Powerful states’ legislators and regulators might wish to consider, right from the start, how to identify and restrict the algorithmic equivalent of DDT.

Even those who are not concerned about people at a distance should be concerned about the proliferation of harmful algorithmic systems and datasets that cause downstream problems.62 An AI system encoded with outdated and discriminatory information perpetuates harm. Take, for example, the databases that have designated certain freedom fighters and opposition leaders around the world as terrorists. Imagine how this could affect someone such as former South African President Nelson Mandela, who was once designated a terrorist and later underwent a change of status.63 AI systems trained on old datasets may not encode this change of status, or similar changes for the people connected with such figures, and may therefore continue to flag all these people as potential terrorists on the basis of outdated information. Even seemingly safe and stable states go through phases of discrimination and persecution and will probably eventually be affected by harmful AI products that are created and available because they are marketable.64 Even if the products are never used in the United States or the EU, persecution, oppression, and violence elsewhere in the world will be visible at the borders of more privileged countries in the form of asylum seekers and an increase in global crime and terrorism.

Conclusion

To avoid future problems, the discourse about AI regulation needs to expand its currently myopic view of risk, and of who is at risk, to cover the labor harms as well as the data and algorithmic injustices that will certainly result from global AI systems.65 Such discourse needs to account for a world in which companies rely on data annotators in one country, experiment in another country, and pilot potentially harmful products in a third country and a fourth.

Circling back to the theme with which I opened, it is crucial that regulators do not leave companies to lead the discourse about AI’s legal order. By regulators, I mean all regulators who possess the power and resources to negotiate with these companies. Although the risks I highlight here may seem distant and feel like someone else’s problem, AI is global. American companies understand this, but American regulators do not seem to. Risk-oriented regulation should engage with the true nature of the AI products shaped by Big Tech while identifying AI’s harms. As technology globalizes, regulators need to broaden their view and engage with technology in its entirety, not only with the outcomes that appear to be, but might not actually be, the most proximate and salient.

About the Author

Chinmayi Arun is the executive director of the Information Society Project, and a lecturer and research scholar at Yale Law School. Her research focuses on platform governance, social media, algorithmic decisionmaking, the data economy, and privacy, within the larger universe of questions raised by law’s relationship with the information society. She is interested in how these questions affect marginalized populations, especially in the Majority World. Before arriving at Yale, Arun was an assistant professor of law at two of the most highly regarded law schools in India. During that time, she also founded and led the Centre for Communication Governance at National Law University Delhi. Arun has served as a human rights officer (temporary appointment) at the United Nations, where she worked on questions of privacy, online speech, and artificial intelligence.

Acknowledgments

The author thanks Aubra Anthony, Elina Noor, George Perkovich, Jack Balkin, Kalindi Vora, Lakshmee Sharma, and everyone present at the Carnegie Endowment for International Peace’s workshop on the project on global majority perspectives on AI governance for helpful feedback through different stages of this essay. The author is responsible for any errors.

Notes

1. By Big Tech, I refer to American companies, which currently are worth much more than their counterparts from any other country, including China. I focus on these companies in this essay since they are the most influential globally. However, I have discussed the role of other technology companies in previous work. See Chinmayi Arun, “AI and the Global South: Designing for Other Worlds,” in Markus Dubber et al. (eds.), The Oxford Handbook of Ethics of AI (2020).

2. Abeba Birhane, “Algorithmic Colonization of Africa,” 17 SCRIPTed 389 (2020), 392, https://script-ed.org/article/algorithmic-colonization-of-africa/. Birhane went on to point out that practices that are successful in high-income countries, such as the use of mammograms to reduce mortality from breast cancer, do not work for Sub-Saharan Africa, where self- and clinical examination is the more effective method.

3. Arun, “AI and the Global South,” supra note 1.

4. This is true generally of the use of technology designed for one context in a different context. For example, catastrophic harm was caused by propaganda on Facebook in Myanmar, where there was no mainstream media to offer alternative narratives to the propaganda being shared on the platform. See Arun, “AI and the Global South,” supra note 1. See also, on how the use of mammograms works poorly in Sub-Saharan Africa, Birhane, “Algorithmic Colonization of Africa,” supra note 2.

5. See, for example, Amba Kak, “The Global South Is Everywhere, but Also Always Somewhere: National Policy Narratives and AI Justice,” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020), 307-312.

6. See Ifeoma Ajunwa, Automated Governance, 101 N.C. L. Rev. 355, 361. See also Margot E. Kaminski and Jennifer M. Urban, The Right to Contest AI, 121 Colum. L. Rev. 1957 (2021).

7. See Danielle K. Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014).

8. See Kate Crawford, Atlas of AI (2022), 7, describing AI as “both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.” See also Katherine Lee, A. Feder Cooper, & James Grimmelmann, “Talkin’ ’Bout AI Generation: Copyright and the Generative-AI Supply Chain,” arXiv preprint arXiv:2309.08133 (2023), available at https://arxiv.org/abs/2309.08133.

9. What was once described as the “Global South” is increasingly being referred to as the “Majority World” or the “global majority.” See Shahidul Alam, “Majority World: Challenging the West’s Rhetoric of Democracy,” Amerasia Journal 34(1) (2008): 88-98. See also Mathew Ingram, “Facebook Now Linked to Violence in the Philippines, Libya, Germany, Myanmar, and India,” Columbia Journalism Review (2018), https://www.cjr.org/the_media_today/facebook-linked-to-violence.php.

10. See J.R. Ward, “The Industrial Revolution and British Imperialism, 1750-1850,” Economic History Review XLVII(1) (1994): 44-65; David Harvey, The New Imperialism (2003); Giovanni Arrighi et al., Political Economy and Global Capitalism, 7-24 (2007); Immanuel Wallerstein, The Modern World-System I: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century (2011). See also Yochai Benkler, “The Role of Technology in Political Economy,” Parts 1-3 (2018), LPE Blog, https://lpeproject.org/authors/yochai-benkler/; Nick Couldry & Ulises A. Mejias, The Costs of Connection (2019); Thomas Piketty, Capital in the Twenty-First Century (2013), in which he suggests that “political economy… conveys the only thing that sets economics apart from the other social sciences: its political, normative, and moral purpose.”

11. By Big Tech, I refer to American companies, which currently are worth much more than their counterparts from any other country, including China. Microsoft looms large in OpenAI’s ownership and physical infrastructure. It is also worth thinking of Big Tech as an industry: if one major company chooses not to make a profitable but harmful product, the drive to profit will likely prompt another company to take its place.

12. See Chinmayi Arun, The Silicon Valley Effect (forthcoming, Stanford Journal of International Law, Volume 61, 2025).

13. See Chinmayi Arun, Facebook’s Faces, 135 Harv. L. Rev. F. 236 (2022).

14. Kate Crawford & Vladan Joler, Anatomy of an AI System (2018), XIV, https://anatomyof.ai/.

15. Terry Gross, “How ‘Modern-Day Slavery’ in the Congo Powers the Rechargeable Battery Economy,” NPR (February 1, 2023).

16. Crawford & Joler, Anatomy of an AI System, supra note 14, IX.

17. See Kalindi Vora, Life Support (2015), 52-54. Crawford & Joler also discuss elements of what I call cognitive labor here under “digital labor”; see supra note 14. For a discussion of cognitive labor in the context of unpaid household work, see Allison Daminger, “The Cognitive Dimension of Household Labor,” American Sociological Review 84(4) (2019): 609-633.

18. See Crawford, Atlas of AI, supra note 8.

19. See Neda Atanasoski & Kalindi Vora, Surrogate Humanity: Race, Robots, and the Politics of Technological Futures (2019), 88-107; and Mary L. Gray & Siddharth Suri, Ghost Work (2019).

20. Informational capitalism involves a shift to informationalism, such that the “action of knowledge upon knowledge itself” is the source of productivity: Manuel Castells, The Rise of the Network Society (2nd ed., 2010), 17. See also Amy Kapczynski, The Law of Informational Capitalism, 129 Yale L.J. 1460; Julie E. Cohen, Between Truth and Power (2019); Shoshana Zuboff, The Age of Surveillance Capitalism (2018).

21. See Vora, Life Support, supra note 17; see also Kalindi Vora & Maurizia Boscagli, Working under Precarity: Work Affect and Emotional Labor (2013), https://escholarship.org/uc/item/0m87j6n1.

22. Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” TIME (January 18, 2023), https://time.com/6247678/openai-chatgpt-kenya-workers/. See also “Inside Facebook’s African Sweatshop,” TIME (February 17, 2022), https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/; Dave Lee, “Why Big Tech Pays Poor Kenyans to Teach Self-Driving Cars,” BBC News (November 2, 2018), https://www.bbc.com/news/technology-46055595.

23. Perrigo, id.

24. Perrigo, id.

25. See Nick Couldry & Ulises A. Mejias, The Costs of Connection (2019); Ulises A. Mejias & Nick Couldry, Data Grab (2024); and Linnet Taylor & Dennis Broeders, “In the Name of Development: Power, Profit and the Datafication of the Global South,” Geoforum 64 (2015): 229-237.

27. Arun, Facebook’s Faces, supra note 13.

28. Id. See also Eli Meixler, “U.N. Fact Finders Say Facebook Played a ‘Determining’ Role in Violence Against the Rohingya,” TIME (March 13, 2018), https://time.com/5197039/un-facebook-myanmar-rohingya-violence/; and Alex Warofka, “An Independent Assessment of the Human Rights Impact of Facebook in Myanmar,” Meta (November 5, 2018), https://about.fb.com/news/2018/11/myanmar-hria/.

29. Taylor & Broeders, supra note 25.

30. See Microsoft News, “The Use of Artificial Intelligence Is Advancing in Argentina with Experiences in the Public, Private and NGO Sectors,” News Center Microsoft Latin America (April 2, 2018), https://news.microsoft.com/es-xl/avanza-el-uso-de-la-inteligencia-artificial-en-la-argentina-con-experiencias-en-el-sector-publico-privado-y-ongs/. See also Paz Peña & Joana Varon, “Teenager Pregnancy Addressed Through Data Colonialism in a System Patriarchal by Design,” Coding Rights (April 26, 2022), https://notmy.ai/news/case-study-plataforma-tecnologica-de-intervencion-social-argentina-and-brazil/; and Diego Jemio et al., “The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy,” Wired (February 16, 2022), https://www.wired.com/story/argentina-algorithms-pregnancy-prediction/.

31. Id.

32. Taylor & Broeders, supra note 25.

33. Konstantinos Argyriou, “Misgendering as Epistemic Injustice: A Queer STS Approach,” Las Torres de Lucca: Revista internacional de filosofía política 10(19) (2021): 71-82. See also Nishant Shah, “I Spy, with My Little AI: How Queer Bodies Are Made Dirty for Digital Technologies to Claim Cleanness,” in Queer Reflections on AI (Routledge, 2023), 57-72.

34. Gordon Hull, “Dirty Data Labelled Dirt Cheap: Epistemic Injustice in Machine Learning Systems,” Ethics and Information Technology 25(3) (2023): 38.

35. See Hideyuki Matsumi & Daniel Solove, “The Prediction Society: Algorithms and the Problems of Forecasting the Future” (2024), GWU Legal Studies Research Paper No. 2023-58, GWU Law School Public Law Research Paper No. 2023-58, https://ssrn.com/abstract=4453869 or http://dx.doi.org/10.2139/ssrn.4453869. See also Neil Richards, Why Privacy Matters (2021).

36. Paula Ricaurte, “Ethics for the Majority World: AI and the Question of Violence at Scale,” Media, Culture & Society 44(4) (2022): 726-745.

37. See Zuboff, supra note 20. See also Jack M. Balkin, The Constitution in the National Surveillance State, 93 Minn. L. Rev. 1; and James C. Scott, Seeing Like a State (1999).

38. Anu Bradford, Digital Empires (2023); Julie E. Cohen, “Who’s Rulin’ Who?,” Lawfare (April 16, 2024), https://www.lawfaremedia.org/article/who-s-rulin-who.

39. See Yochai Benkler, Structure and Legitimation in Capitalism: Law, Power, and Justice in Market Society (2023), SSRN, https://ssrn.com/abstract=4614192 or http://dx.doi.org/10.2139/ssrn.4614192; Cohen, Between Truth and Power, supra note 20; Kapczynski, The Law of Informational Capitalism, supra note 20.

40. See Couldry & Mejias, The Costs of Connection, supra note 25.

41. Couldry & Mejias, The Costs of Connection, supra note 25, at xix.

42. Id. at 105.

43. Arun, The Silicon Valley Effect, supra note 12.

44. Hannah Arendt, The Origins of Totalitarianism (Schocken Books, 2004), 170.

45. David Harvey, The New Imperialism (2003), 27.

46. Scholars including Hobson, Lenin, and Arrighi have discussed imperialism in detail. This is too short an essay to outline their arguments.

47. Zephyr Teachout and Lina Khan, Market Structure and Political Law: A Taxonomy of Power, 9 Duke J. Const. L. & Pub. Pol’y 37, 42 (2014).

48. See also Arun, The Silicon Valley Effect, supra note 12.

50. Harvey, The New Imperialism, supra note 45, 31.

51. Mark Graham, Isis Hjorth, and Vili Lehdonvirta, “Digital Labour and Development: Impacts of Global Digital Labour Platforms and the Gig Economy on Worker Livelihoods,” Transfer: European Review of Labour and Research 23(2) (2017): 135-162, https://doi.org/10.1177/1024258916687250.

52. See Gargi Bhattacharyya, Rethinking Racial Capitalism (2018), 122.

53. Julian Posada, The Coloniality of Data Work: Power and Inequality in Outsourced Data Production for Machine Learning (doctoral dissertation, University of Toronto, 2022), 3, 149.

54. Graham et al., supra note 51. See also Jun-E Tan and Rachel Gong, “The Plight of Platform Workers Under Algorithmic Management in Southeast Asia,” Carnegie Endowment for International Peace (2024).

55. Atanasoski & Vora, Surrogate Humanity, supra note 19, 85. The argument is from Vora, Life Support, supra note 17.

56. Perrigo, supra note 22; Lee, supra note 22.

57. See Vora, Life Support, supra note 17, 8-15. Vora points out at 31 that “[a]ctivities of service, care, and nurture engage the biological use of their bodies and lives as well as labor, and the requirements of such work intrude on the laboring subject in ways that radically compromise any sense of ‘autonomy’ or ‘separation of spheres’ presumed by both liberal and Marxist discussions of workers within Western societies.”

58. Id.

59. Lairold M. Street, U.S. Exports Banned for Domestic Use, but Exported to Third World Countries, 6 Md. J. Int’l L. 95 (1980).

60. Arun, The Silicon Valley Effect, supra note 12.

61. See “DDT - A Brief History and Status,” EPA, https://www.epa.gov/ingredients-used-pesticide-products/ddt-brief-history-and-status. See also Arun, The Silicon Valley Effect, supra note 12.

62. See Matsumi & Solove, supra note 35; Richards, supra note 35; Citron & Pasquale, supra note 7; Ifeoma Ajunwa, The Quantified Worker (2023); Frank Pasquale, The Black Box Society (2015).

64. Arun, The Silicon Valley Effect, supra note 12.

65. Margot E. Kaminski, Regulating the Risks of AI, 103 B.U. L. Rev. 1347, 1391-1392 (2023).

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.