Ordinary Ethics of Governing AI

Artificial intelligence has real impacts on the everyday lives of people all around the world. These stories invite a broader conversation on research and policy about AI in the global south.

by Ranjit Singh
Published on April 30, 2024

How are the concerns around artificial intelligence (AI) different in the global south than in the global north?1 This question is at the heart of the global challenge of governing AI and assessing its implications. Scholarship on AI ethics in the global north revolves around keywords such as bias, fairness, accountability, transparency, explainable AI, and human-centered design. What these keywords share is a focus on design: AI is seen as a tool, and as a tool, it can be embedded with designed features that enact a particular understanding of these keywords.

In contrast, the stories narrated in this article engage with AI as an everyday experience. They resonate with the conceptual vocabulary of scholars in and from the global south, who have articulated the uneven consequences of AI through keywords such as dignity, labor, extraction, colonialism, experimentation, sovereignty, and solidarity. Such keywords offer a different vocabulary to think through the challenges of digitalization and building the infrastructure for data management and AI in most countries. They engage with the everyday experience of living with computational systems that are gradually reorganizing every aspect of ordinary life.

AI manifests in such experiences through the exercise of agency attributed to these systems in ordinary decision making. Most people find themselves at the receiving end of such automated decisions, but this does not mean that they have no agency.2 The stories in this article illustrate how the exercise of this human agency is an example of ordinary ethics in practice—ethics that is grounded in everyday decisions and ordinary judgments to get work done, rather than broad universal values. These stories are an invitation to a broader conversation on emerging research and policy questions on the implications of AI in and from the global south,3 which is home to the majority of the world’s population and, hence, should be framed and understood as the majority world.4


Approaches to managing the implications of AI have centered on ethical guidelines or principles; these principles are often used as starting points for regulations.5 This discourse has also come to include concerns of safety and security around large-scale AI models.6 To present an alternative viewpoint on addressing the challenges of governing AI, I focus on diverse stories of becoming a data subject and of responding to such subjecthood, pairing them with two interrelated questions for examining AI in and from the majority world.7 First, how do people narrate their stories to AI? And second, how does AI tell stories about them? The exchange of stories between people and AI is at the core of the everyday struggles of interfacing with the inputs and outputs of computational systems. These struggles invite attention to ordinary practices of living with AI and making it “work.”8 The use of quotation marks around work is deliberate. AI does not work by itself; it is made to work, and people work with it. AI works because of large-scale human efforts at infrastructuring it into existing ordinary practices of doing work. The two questions, therefore, are resources for investigating how and when AI becomes ordinary.

On Becoming Ordinary

In the summer of 2015, as I began the first round of fieldwork for my dissertation research on Aadhaar, India’s biometrics-based national identification infrastructure,9 I met Yogita. She was one of my first field respondents and narrated her troubles in obtaining a marriage registration certificate without her Aadhaar number. She concluded her story by saying:

Now when you go to these [government] offices, people have found a new excuse for why they cannot do your work. Computers are the new babus!10 They will tell you things like: “Madam, we want to register your marriage, but this computer won’t let us!”11

Yogita’s story was about the challenges of navigating the Indian bureaucracy without an Aadhaar number. While the bureaucrats agreed that Yogita and her spouse were married, they claimed that the interface of the marriage registration system would not let them document their marriage without Aadhaar. The couple made numerous visits to the office; they even got a lawyer involved because Aadhaar is not a mandatory requirement for registering marriages in India (although there are ongoing debates over whether it should be). Ultimately, after months of negotiations, their problem was resolved with a workaround of keying in dots instead of digits for their Aadhaar numbers. Yogita made a simple, yet poignant, observation: computers are the new bureaucrats.

None of the work that Yogita had to do is surprising or extraordinary. These are ordinary encounters with computational systems designed to follow rules made by humans; such moments typify the everyday work of fitting into data categories. While experts often argue that computational systems make practices efficient and that these systems work for more people than not, such systems change the scale and speed at which things do not work and go wrong. Yogita’s decision not to enroll in Aadhaar has had implications for all of her interactions with the Indian bureaucracy. Furthermore, the burden of resolving such situations falls directly on the people for whom these systems do not work, even as the systems reduce the possibility and space for ordinary human-in-the-loop interventions such as help from a street-level bureaucrat.

Beyond everyday routines and habits, “ordinary” signifies everyday moments when people relate to others. Yogita’s observation stands in for a diverse range of experiences of people who struggle with these systems, and for their folk explanations of computational agency. In such ordinary encounters, it is not technical capabilities (such as novel algorithmic techniques used to process data) but rather the social perception of the role of computers in everyday life that creates the conditions for attributing agency and “intelligence” to them.12

AI is often framed as a problem of computationally engineering intelligence that scales up human capacity to reason, or as a problem of improving the performance and efficiency of a business by delegating discrete tasks to computers. These aspects of AI are distinct from the problem of navigating the ordinary troubles of dealing with the agency of computational systems. Focusing on social perceptions draws attention away from the technical features that constitute AI and toward how AI becomes ordinary. For example, AI practitioners can easily argue that a database is not an instance of AI while a recommendation system, an automated decision-making system, and a large language model are all instances of AI with varying capacities to reason at scale. For the majority of people in the world who are not involved in developing computational systems, this distinction is not salient to their lived experience of interacting with these systems. Their challenges are centered on making AI familiar by developing the capacity to navigate ordinary troubles with computational systems. The core challenge for AI ethics in the majority world is to contend with the making and management of the agency of computational systems in ordinary life.

Enacting Ordinary Ethics

Attention to everyday encounters with computational systems offers an alternative approach to AI governance, one that is grounded in ordinary ethics.13 Anthropologist Veena Das has described “ordinary ethics” as:

a shift in perspective . . . [toward] thinking of the ethical as a dimension of everyday life in which we are not aspiring to escape the ordinary but rather to descend into it as a way of becoming moral subjects. Such a descent into the ordinary . . . is done not by orienting oneself to transcendental, objectively agreed-upon values but rather through the cultivation of sensibilities within the everyday.14

Cultivating such sensibilities with respect to AI implies focusing on everyday moral judgments on how to represent oneself in and through computational systems and how to contend with their outputs or decisions. These everyday choices and judgments are the empirical site for observing ordinary ethics in action. Engaging with ordinary ethics raises issues that fall within, but are not limited to, three broad areas of inquiry.

  • Mandatory data: What kinds of data are increasingly becoming mandatory for navigating everyday life in a particular nation state?
  • Data-driven delegation: What kinds of decisions are increasingly delegated to computers, and how is such data-driven delegation accomplished in practice?
  • Backend work: How do ordinary decisions around data recursively exacerbate challenges of misalignment between everyday life and its representation in or through computational systems?

By backend work, I mean the work of aligning computational systems with life situations, which (1) depends on the capacity of people to figure out aspects of computational systems that are inaccessible to them, and (2) creates understandings, practices, and workarounds (such as keying in dots instead of digits) to get work done through these systems.15 Standing in line for a whole day waiting for the internet connectivity needed for biometric authentication to access services, or calling customer support when digital interfaces are confusing—two predicaments common across communities in the majority world—are examples of backend work. Backend work is the cost of living in a data-driven world, a cost that companies and governments providing services pass on to their customers and citizens.

These three areas of inquiry together offer resources to engage with the core challenge of enacting ordinary ethics in living with AI: What is good quality of everyday life in a world driven by data and AI?16 This question does not have a straightforward answer; it requires a descent into ordinary life by contending with computational systems where people tell their stories to AI, and AI tells stories about them. While such ordinary situations are ubiquitous, they become more problematic in the majority world because the infrastructure needed to make computational systems work is more unevenly distributed. Furthermore, these situations exacerbate existing social inequalities along well-recognized intersections of gender, race, class, caste, age, and ability.

Narrating Stories to AI

Enrolling in computational systems is like telling such systems a story about oneself. Each data category is a feature of this story, a translation of a complex life situation into a representative proxy. I will explore the challenges posed by this translation by narrating the story of the victims of a case of double registration in Kenya. These victims are Kenyan citizens who are also registered in the database of Somali refugees hosted in Kenya, maintained by the United Nations High Commissioner for Refugees (UNHCR). A person cannot be a refugee and a citizen at the same time; the mismatch between these life situations as data categories is central to the challenges faced by these victims:

The problem of double registration can be traced back to the civil war in Somalia from 1991 as well as droughts in subsequent decades, resulting in an influx of Somali refugees into Kenya. . . Many Kenyans from the host community routinely entered the refugee camps in pursuit of food aid, education, medical services, and other opportunities that were available to refugees but not to the host community. From the late 2000s, the [UNHCR] introduced a biometric identification system for refugees, [which was later shared with the Kenyan government]. . . . It is estimated that about 40,000 of the people registered in the UNHCR database are Kenyan citizens.

The issue of double registration affects mainly young people below the age of 40 years, [who] . . . had their fingerprints taken by the UNHCR when they were children. . . . [They] have had their application for the Kenyan national identity card rejected on account of their fingerprints being in the refugee database. The current law on the registration of persons does not have provisions on how to deal with such situations. . . . With the long history of using the national identity card for identification in virtually every transaction in life, it is difficult to function as a Kenyan adult without this document.17

Some of these doubly registered Kenyan citizens joined together in a court case against the government seeking to be removed from the UNHCR database and were subsequently issued national identity cards.18
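
To see how such an exclusion can become hard-coded, consider a minimal, hypothetical sketch of the kind of decision rule at work. This is an illustration of the logic described above, not the actual UNHCR or Kenyan government system; all identifiers and data are invented.

```python
# Illustrative sketch only: not the actual UNHCR or Kenyan government
# system. It shows how a bare membership check against a refugee
# database can hard-code the assumption that "refugee" and "citizen"
# are mutually exclusive categories. All identifiers are hypothetical.

refugee_fingerprint_db = {"fp-2841", "fp-9377"}  # prints enrolled as refugees

def process_id_application(fingerprint_id: str) -> str:
    """Decide a national identity card application with a single lookup.

    There is no branch for the real-world case of a citizen whose
    fingerprints were enrolled in the refugee database as a child,
    and no path to contest the outcome: the gap at the heart of
    the double-registration story."""
    if fingerprint_id in refugee_fingerprint_db:
        return "REJECTED: applicant already registered as a refugee"
    return "APPROVED: national identity card issued"

# A Kenyan citizen who was fingerprinted in a refugee camp as a child:
print(process_id_application("fp-2841"))  # rejected, with no recourse
```

The point of the sketch is that the rejection requires no judgment at all: a single membership test stands in for the whole question of who counts as a citizen.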

This story of double registration illustrates moments of translating the complexities of people’s everyday lives into data categories legible to computational systems. In short, people tell stories in “words” that AI can “understand.” This story speaks to all three areas of inquiry.

  • Mandatory data: Often the absence of data is offered as an explanation for the limits in people’s ability to leverage data-driven services. At the same time, the presence of data is not necessarily a resource for empowerment. The presence of data that registers a person as a refugee makes it harder for them to represent themselves as a citizen.
  • Data-driven delegation: Double registration raises important questions about how social protection is organized and how evidence enables certification or authentication of a refugee’s or a citizen’s status. For governments, managing citizen data is replete with the challenges of dealing with forgery and other forms of corruption. The state’s reliance on computational systems to deal with these challenges without appropriate mechanisms for redressing grievances is central to the struggles of these Kenyan citizens.
  • Backend work: The story of double registration showcases how backend work is not simply about registering data about oneself on a digital interface; rather, it involves a wide variety of efforts, ranging from securing the right bureaucratic documents that are prerequisites for registering identity to participating in a court case.

Telling stories to AI has produced increasingly polarized debates19 on how to balance recognition (knowing citizens is essential to providing services to them), surveillance (knowing citizens is a way of tracking them), and exclusion (efforts to know citizens often exclude the most vulnerable). On the one hand, recognition in computational systems is increasingly becoming a precondition for claiming citizenship and state services. Lack of recognition is often connected with the systemic exclusion of communities.20 Yet for many, recognition remains difficult to achieve. On the other hand, surveillance is an inevitable outcome of the legibility offered by computational systems. The more legible people are, the less private their lives are. How people tell their stories to AI systems is a deeply consequential choice.

The Stories Told by AI

It is not just that people tell their stories to computational systems; these systems also tell stories about people. At their core, these systems learn to predict and replicate stories of people’s past decisions and behaviors as recorded in their training data. While there are occasions when people record these stories-as-data themselves, data are all too often stories that others record without affording people any opportunity to shape what is recorded. Every data-driven prediction or judgment encapsulates a story about a person. The story of a person enrolled in a computational system for refugees is told as a story of a refugee, although they may be a citizen. The next story, from Australia, illustrates the challenges of contending with the stories told about beneficiaries by a system used to streamline the distribution of social security.

On Saturday, July 20, 2019, a woman named Sally received a notice from Centrelink, then the Australian government agency administering social security payments to eligible persons. The notice said she owed them $24,000 for overpayment of the sickness benefit.

Sally was not alone. Between 2016 and 2020 . . . thousands of social security recipients received automated debt notices as part of the new Online Compliance Intervention (OCI), commonly known as Robodebt. The OCI employed a simple algorithm that used weekly income to estimate yearly income. Because many social security recipients are in casual employment, their income is irregular. That meant many yearly income estimates were wrong.

For Sally, who suffers from depression and anxiety, the experience of being issued a Robodebt notice awoke feelings of vulnerability, distrust, and betrayal.21
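
To make the arithmetic behind these wrongful debts concrete, here is a minimal sketch, with hypothetical numbers, of how extrapolating a casual worker’s irregular weekly income to a yearly figure can manufacture an apparent overpayment. It illustrates the income-averaging logic described above, not the OCI’s actual code.

```python
# Illustrative sketch only, with invented numbers: not the OCI's code.
# It mimics the logic of estimating yearly income from weekly income,
# which breaks down when income is irregular.

WEEKS_PER_YEAR = 52

def extrapolate_yearly_income(sampled_weekly_incomes):
    """Average a few weekly pay amounts and scale up to a year,
    the step that fails for casual, irregular employment."""
    average_week = sum(sampled_weekly_incomes) / len(sampled_weekly_incomes)
    return average_week * WEEKS_PER_YEAR

# A casual worker: a few good weeks, many weeks with little or no work.
actual_weekly_incomes = [900, 850, 0, 0, 120, 0, 300] + [0] * 45
actual_yearly_income = sum(actual_weekly_incomes)  # 2,170

# An automated check that happens to sample the good weeks.
estimated_yearly_income = extrapolate_yearly_income([900, 850])  # 45,500.0

# A benefit means-tested against the estimate, rather than the actual
# income, now looks like an overpayment and becomes a debt notice.
print(f"actual yearly income:  {actual_yearly_income}")
print(f"extrapolated estimate: {estimated_yearly_income}")
```

The gap between the two figures is the debt notice: the system’s story about a person’s income, not the income itself, determines what they owe.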


Eventually, Robodebt victims such as Sally came together to file a class action lawsuit against the Australian government. This lawsuit successfully compelled the government to pay out 1.8 billion Australian dollars in refunds and wiped debts to 443,000 individuals.22 Sally’s story raises a different set of ethical challenges.

  • Mandatory data: The challenge of accommodating variable income illustrates how difficult it can be to map a changing value for a data category over time. On the one hand, self-declared data is easily met with mistrust; self-declaration is also more difficult for people struggling with mental health issues. On the other hand, variation in data is difficult to map consistently without automation, which inevitably lends itself to surveillance of workers.
  • Data-driven delegation: While the datafication and automation of the state are well underway, efficient error checking and recourse mechanisms for those who are incorrectly or unjustly classified by automated systems are lagging behind. It takes too long to correct an automated decision-making system. Criminal proceedings around the world operate on the premise that a person is innocent until proven guilty, yet automated fraud detection systems operate by flagging beneficiaries suspected of fraud. Their benefits are instantly rescinded until they are able to prove their innocence, effectively inverting procedural justice in the delivery of welfare services.23
  • Backend work: Reversing the withdrawal of a service requires enormous backend work. Beneficiaries must not only contend with complex paperwork and unclear processes to make a case for themselves, but they must also be able to show where and how the data-driven judgment went wrong in adjudicating their claim. Furthermore, they must do so under difficult circumstances after having their benefits rescinded.


The stories that AI tells about people are grounded in existing infrastructural processes of data collection, curation, and analysis. AI cannot simply be considered layers of modular software stacks and hardware connected to each other. It is rather an uneven imbrication of computational systems and ordinary practices that lie in messy and even unarticulated local overlap with each other. It is crucial to take this imbrication seriously as the stage is set for AI to tell its stories about people.24

AI Governance as a Site for Ordinary Ethics


The exchange of stories between people and AI represents everyday struggles to interface with both the inputs and outputs of computational systems. On the one hand, mundane moments of seeking alignment with inputs can be extractive, for example, having to share biometric data just to secure access to food aid. Ordinary struggles with self-determination in representing oneself through data can motivate claims to sovereignty. On the other hand, the experiences of contending with uneven outputs often instigate solidarity among victims of data-driven decisions. The challenges they face in securing redress for their grievances are central to concerns around dignity and human rights in the organization of data-driven services. Yet the discourse around the social consequences of AI systems continues to highlight concerns of bias, disparate impact, surveillance, privacy violations, opacity, and now, safety and security risks. A common thread that binds these concerns is the premise of approaching AI as a tool. The primary focus remains on designing AI better.

My provocation for change in approaches to governing AI can be summarized as a question: How would the collective approach to governance change when attention is directed toward living with AI? This question invites focus on the ordinary experiences of contending with the agency of computational systems in everyday life. This agency has unique implications for the majority world where such systems are often characterized as symbols of development, modernity, and progress,25 which makes questioning, critiquing, and resisting them harder.

At the same time, living with the agency of computational systems is a global phenomenon, whose burden is felt most acutely by those who remain at the margins of whom AI is optimized and designed for. Accounting for the ordinary ways in which once-manageable lives are rendered unmanageable by AI (as in the case of Sally or the doubly registered Kenyan citizens) is crucial to the processes of governing AI. This accounting does not simply require attending to the challenges of representing people’s pasts, as my stories illustrate; it is a way to orient toward the future. Engaging with how AI becomes ordinary is necessary to identify the harms of AI systems that may become more widespread in the coming years. I conclude this piece with observations on the three broad areas of inquiry.

  • Mandatory data: The more mandatory data categories a service requires, the more difficult that service becomes for people to access. Services need simple, accessible forms on websites that load quickly and render well on a laptop or a mobile phone, whether from a shared library computer or a personal device. The easiest way to ensure broader representation of people in computational systems is to minimize the requirements of mandatory data and maximize the accessibility of services.
  • Data-driven delegation: People’s data can effectively represent their lives when they are afforded a broader scope of participation, including, but not limited to, (1) learning to navigate the networked logic of computational systems, (2) the capacity to opt out without losing access to services, and (3) opportunities to speak back and seek redress when faced with harms. Balancing power among the different actors increasingly organized around and through data-driven judgments is necessary for governing AI. Negotiations over this balance of power through, for example, public participation, activism, advocacy, and court cases serve as a crucial proxy for the mutual shaping of the agency of computational systems and data subjects in the organization of ordinary data-driven life.
  • Backend work: Although one of the core purposes of deploying computational systems is to remove intermediaries, intermediaries continue to play a key role in making these systems work for or against people in the last mile of service delivery. While street-level bureaucrats and frontline customer service operators are key actors, there are also increasing forms of communal knowledge sharing among people about their experiences of resisting computational systems and getting them to work.26 The formation of these data communities exemplifies novel forms of backend work in enacting ordinary ethics.

Finally, many of the stories presented in this piece deal with emerging contests over computational systems in courts. Such court proceedings are crucial sites of formalizing ordinary ethics; these court cases are occasions when questions of evidence and authority with respect to data-driven organization of services are explicitly contested. Their emergence and increasing prevalence across the world highlight that efforts to govern AI must begin with the insight that data systems do not act alone; data-driven and human judgment are inevitably enmeshed in the process of charting the ethical landscape of everyday life with computational systems. After all, ethics cannot be separated from everyday lives.

Notes

1 The lower-case use of “global south” and “global north” in this article is meant to acknowledge the fluidity and complexity of global relationships.

2 Esteban Morales and Katherine Reilly, “The Unhomed Data Subject: Negotiating Datafication in Latin America,” Information, Communication, and Society, August 30, 2023, https://www.tandfonline.com/doi/abs/10.1080/1369118X.2023.2250436.

3 Paola Ricaurte Quijano, “Ethics for the Majority World: AI and the Question of Violence at Scale,” Media, Culture, and Society 44, no. 4 (May 1, 2022): 726–745, https://doi.org/10.1177/01634437221099612; Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán, “A Primer on AI in/from the Majority World: An Empirical Site and a Standpoint,” Data and Society Research Institute, September 14, 2022, https://datasociety.net/library/a-primer-on-ai-in-from-the-majority-world; and Chinmayi Arun, “AI and the Global South: Designing for Other Worlds,” in The Oxford Handbook of Ethics of AI, ed. Markus D. Dubber, Frank Pasquale, and Sunit Das (New York: Oxford University Press, 2020), 589–606.

4 Shahidul Alam, “Majority World: Challenging the West’s Rhetoric of Democracy,” Amerasia Journal 34, no. 1 (January 1, 2008): 88–98, https://doi.org/10.17953/amer.34.1.l3176027k4q614v5.

5 Thilo Hagendorff, “The Ethics of AI Ethics: An Evaluation of Guidelines,” Minds and Machines 30, no. 1 (March 1, 2020): 99–120, https://doi.org/10.1007/s11023-020-09517-8; and Anaïs Rességuier and Rowena Rodrigues, “AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics,” Big Data and Society 7, no. 2 (July 1, 2020): 1–5, https://doi.org/10.1177/2053951720942541.

6 Markus Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety,” arXiv, September 4, 2023, http://arxiv.org/abs/2307.03718.

7 Ranjit Singh and Rigoberto Lara Guzmán, “Prologue,” in Parables of AI in/from the Majority World: An Anthology, ed. Ranjit Singh, Rigoberto Lara Guzmán, and Patrick Davison (New York: Data and Society Research Institute, 2022), 1–15, https://datasociety.net/wp-content/uploads/2022/12/DSParablesAnthology_Dec2022Prologue_Singh_Guzma%CC%81n.pdf.

8 Ignacio Siles, Living with Algorithms: Agency and User Culture in Costa Rica (Cambridge, MA: MIT Press, 2023), https://doi.org/10.7551/mitpress/14966.001.0001; and Samir Passi and Phoebe Sengers, “Making Data Science Systems Work,” Big Data and Society 7, no. 2 (July 2020): 1–13, https://doi.org/10.1177/2053951720939605.

9 Ranjit Singh and Steven Jackson, “Seeing Like an Infrastructure: Low-Resolution Citizens and the Aadhaar Identification Project,” Proceedings of the ACM on Human-Computer Interaction 5, no. CSCW2 (October 18, 2021): 315:1–315:26, https://doi.org/10.1145/3476056.

10 Babu is a Hindi word for a street-level bureaucrat or a government servant.

11 Author’s personal communication with Yogita, August 3, 2015 (emphasis added).

12 Evelyn Fox Keller, “Booting Up Baby,” in Genesis Redux: Essays in the History and Philosophy of Artificial Life, ed. Jessica Riskin (Chicago: University of Chicago Press, 2007), https://doi.org/10.7208/chicago/9780226720838.003.0016.

13 Jonathan Corpus Ong, “Toward an Ordinary Ethics of Mediated Humanitarianism: An Agenda for Ethnography,” International Journal of Cultural Studies 22, no. 4 (July 1, 2019): 481–498, https://doi.org/10.1177/1367877919830095.

14 Veena Das, “Ordinary Ethics,” in A Companion to Moral Anthropology, ed. Didier Fassin (Hoboken, NJ: Wiley-Blackwell, 2012), 134, https://doi.org/10.1002/9781118290620.ch8.

15 Ranjit Singh, “The Backend Work of Data Subjects: Ordinary Challenges of Living with Data in India and the US,” in Media Backends: Digital Infrastructures and Sociotechnical Relations, ed. Lisa Parks, Julia Velkova, and Sander De Ridder (Urbana: University of Illinois Press, 2023), 229–244.

16 Eduardo Gudynas, “Buen Vivir: Today’s Tomorrow,” Development 54, no. 4 (December 1, 2011): 441–447, https://doi.org/10.1057/dev.2011.86.

17 Haki na Sheria Initiative, “Biometric Purgatory: How the Double Registration of Vulnerable Kenyan Citizens in the UNHCR Database Left Them at Risk of Statelessness,” Citizenship Rights in Africa Initiative, November 17, 2021, https://citizenshiprightsafrica.org/biometric-purgatory-how-the-double-registration-of-vulnerable-kenyan-citizens-in-the-unhcr-database-left-them-at-risk-of-statelessness.

18 Haki na Sheria Initiative, “Kenya: Press Release – Double Registration,” Citizenship Rights in Africa Initiative, January 18, 2022, https://citizenshiprightsafrica.org/kenya-press-release-double-registration.

19 Keren Weitzberg et al., “Between Surveillance and Recognition: Rethinking Digital Identity in Aid,” Big Data and Society 8, no. 1 (January 1, 2021): 1–7, https://doi.org/10.1177/20539517211006744.

20 Christoph Sperfeldt, “Legal Identity in the Sustainable Development Agenda: Actors, Perspectives and Trends in an Emerging Field of Research,” International Journal of Human Rights, April 2021, 1–22, https://doi.org/10.1080/13642987.2021.1913409.

21 Lyndal N. Sleep, “Trauma and Automated Welfare Compliance in Australia,” Data and Society: Points (blog), December 14, 2022, https://medium.com/datasociety-points/trauma-and-automated-welfare-compliance-in-australia-ba661a60b50f.

22 Luke Henriques-Gomes, “Robodebt: Five Years of Lies, Mistakes and Failures That Caused a $1.8bn Scandal,” Guardian, March 10, 2023, sec. Australia news, https://www.theguardian.com/australia-news/2023/mar/11/robodebt-five-years-of-lies-mistakes-and-failures-that-caused-a-18bn-scandal.

23 Chris O’Neill et al., “Social Issues in Automated Decision-Making,” ARC Centre of Excellence for Automated Decision-Making and Society, 2022, https://admscentre.org/socialissuesinADM.

24 Ranjit Singh, “Study the Imbrication: A Methodological Maxim to Follow the Multiple Lives of Data,” in Lives of Data: Essays on Computational Culture in India, ed. Sandeep Mertia (Amsterdam: Institute of Network Cultures, 2020), 51–59, https://networkcultures.org/wp-content/uploads/2021/02/Lives-of-Data-.pdf.

25 Linnet Taylor and Dennis Broeders, “In the Name of Development: Power, Profit and the Datafication of the Global South,” Geoforum 64 (2015): 229–237, https://doi.org/10.1016/j.geoforum.2015.07.002.

26 Morales and Reilly, “The Unhomed Data Subject.”