
A Global South Perspective on Explainable AI

A context-driven approach is necessary to translate principles like explainability into practice globally. These vignettes illustrate how AI can be made more trustworthy for users in the Global South through more creative, context-rooted approaches to legibility.

by Jake Okechukwu Effoduh
Published on April 30, 2024

Introduction

In the last five years, over a dozen international institutions have emphasized the need for outputs from artificial intelligence (AI) systems to be explainable. Among the first sets of influential international guidelines that have shaped AI policies globally is the Principles on AI by the Organisation for Economic Co-operation and Development (OECD).1 Third in a list of five values-based principles, the OECD principle on transparency and explainability prescribes transparency and responsible disclosures around AI systems to ensure that people understand when they are engaging with them and can challenge relevant outcomes. The analysis in this article is far broader in its conception of explainability, seeking to consider not only the system’s technical ability to be understood but the broader workings of the AI system as embedded in society and the ability of impacted actors within broader societal contexts to understand, accept, and trust the outcomes of these AI systems.

This article explores how realizing the concept of explainable AI could benefit from some subaltern propositions observed from an African context. One proposition is the incorporation of humans serving as AI explainers akin to griots or midwives, who can provide culturally contextualized and understandable explanations for this technology. Another proposition is for explainability to be modeled as a generative exercise that enables users to customize explanations to their language and receive communication in native dialects and familiar linguistic expressions. Explainability could benefit not just individual understanding but communities as a whole, by recognizing human rights and related norms of privacy and collective identities.

This international push for explainability in AI is partly driven by a desire to equip users and stakeholders worldwide to understand how AI systems make decisions, which could be crucial for fostering trust and accountability. For example, if an AI system that assists in diagnosing diseases from medical images can explain its reasoning by highlighting areas of an X-ray that led to its conclusion, doctors would be able to sanity-check the factors that led to the diagnosis and see whether those conclusions appear reasonable. Patients may also feel more confident about the use of AI in cases where there is a greater ability to evaluate the accuracy of these systems (which can often be opaque due to their complex technical nature).
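To make the X-ray example concrete, the sketch below computes a simple occlusion-sensitivity map: it masks one patch of the image at a time and records how much the model’s score drops, so the most influential regions can be highlighted. The classifier here (predict_abnormality) and the simulated image are stand-in assumptions for illustration, not any system referenced in this article.

```python
import numpy as np

def predict_abnormality(image: np.ndarray) -> float:
    """Hypothetical stand-in for a diagnostic model's score in [0, 1]."""
    # Placeholder: a real system would call a trained classifier here.
    return float(image.mean())

def occlusion_map(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Score how much each patch contributes to the prediction by masking it
    and measuring the drop in the model's output."""
    base_score = predict_abnormality(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0  # occlude one region
            drop = base_score - predict_abnormality(masked)
            heat[y:y + patch, x:x + patch] = drop  # larger drop = more influential
    return heat

# Example: highlight the regions of a simulated X-ray that drive the score.
xray = np.random.rand(128, 128)
heatmap = occlusion_map(xray)
print("Most influential region:", np.unravel_index(heatmap.argmax(), heatmap.shape))
```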

Moreover, explainable AI is also significant for purposes of regulatory compliance, systems improvement, and effective risk management (to mention but a few beneficial purposes). These potential benefits could justify why explainable AI is gaining normative influence among states, regions, and international bodies. Most AI principles and frameworks today recognize explainability as a rationale for enabling people to understand how AI applications are developed, trained, operated, and deployed in their various domains.2

However, there are two possible challenges. One is a lack of clarity (or consensus) on what indicates a truly explainable AI system. Are there common yardsticks or benchmarks that these AI systems must meet? Who sets them and for whom? The absence of clear descriptions and universally agreed-upon standards for explainable AI points to a fragmented regulatory landscape, one that complicates the evaluation of explainability and raises questions about how well these AI systems meet a diverse range of stakeholder needs. But it is not clear whether globally agreed-upon standards on explainability would be a net positive or negative. Does the absence of such universal standards not leave room for pluralist and culturally relativist approaches that could be more resonant in local contexts? Nevertheless, it is still a challenge to ensure that explainability is meaningful and accessible to all users—from developers to end-users—in both global majority and minority countries.

The second possible challenge is that the dominant policy prescriptions to encourage explainable AI are often rooted in Western perspectives. While undoubtedly helpful, these prescriptions are limited in their applicability across subaltern cultural and contextual landscapes. For example, in light of the OECD prescription for AI actors to provide meaningful information that is appropriate to a given context (along with the logic that served as the basis for an AI-generated decision), an AI system designed to assess creditworthiness may need to account for different economic structures and cultural attitudes toward money in various domains.3 In a society where cash transactions are more common than credit, the system’s explanations may be irrelevant or incomprehensible, preventing end users in such societies from understanding how the AI system interacts with their financial realities. Even more so, there is a need for such a tool to explain that its decisionmaking information may look different across contexts, and such an explanation should highlight what is critical to the model’s behavior so that people can assess the particular inputs, features, or attributes that are relevant for a given outcome.

None of the AI tools I have come across in sub-Saharan Africa that assess creditworthiness has been able to account for the financial reliability and creditworthiness demonstrated through participation in a chama, a traditional, informal savings and lending group prevalent in Kenya (and East Africa), where members contribute money regularly and use the pooled funds for various purposes.4 The failure of AI systems to recognize such local financial practices points to the need for more culturally aware AI models, as their functionality may be impaired without such recognition. But when it comes to explainability, the dominant context of a model’s training and application should be stated at the outset, irrespective of how prevalent these features are. Personal credit systems may be commonplace, so explaining the dominant financial culture in which such credit-related AI systems are trained may seem rudimentary, but doing so can give significant context to people in other climes relying on, say, barter systems, commodity monies, or chamas.
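As a purely illustrative sketch of the kind of per-feature explanation such a credit tool could surface, the example below scores a hypothetical applicant with a simple weighted sum that includes a chama-contribution feature. The features, weights, and values are assumptions for illustration, not drawn from any deployed system.

```python
# Illustrative linear credit score with per-feature contributions.
# All feature names, weights, and applicant values are hypothetical.
weights = {
    "formal_credit_history": 0.45,
    "mobile_money_activity": 0.25,
    "chama_contribution_record": 0.20,  # informal savings-group participation
    "cash_income_stability": 0.10,
}

applicant = {
    "formal_credit_history": 0.1,      # little or no formal credit
    "mobile_money_activity": 0.8,
    "chama_contribution_record": 0.9,  # strong record of regular contributions
    "cash_income_stability": 0.7,
}

# Each contribution doubles as a simple, locally legible explanation.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"Score: {score:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:+.2f}")
```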

It is essential to recognize that several African countries, as in other regions of the Global South, often do not have the extensive resources required to develop advanced AI systems and, therefore, rely significantly on AI software created by more technologically advanced countries in the Global North. This dynamic places African nations in a consumer position, using AI tools whose development contexts do not necessarily align with the nuances of their own local cultural, ethical, and social traditions. Western countries, recognizing the untapped potential of African markets, are keen to supply AI technologies, an arrangement that implicitly encourages a form of technological dependence. Therefore, a dialogue between African countries and AI developers in the Global North must take place to help promote a shared understanding and joint contributions to what constitutes explainable AI.

A Snapshot of AI From the Standpoint of the Global South

The Global South perspective that this piece hopes to provide is that of an African researcher who works in the region and who offers some varied assessments of what function, purpose, and meaning are assigned to AI explainability. During the fieldwork for my doctoral research, I spent twenty nonconsecutive months across various sub-Saharan African countries (mainly Nigeria in West Africa, Kenya in East Africa, and South Africa in the continent’s southern reaches). My research investigates how AI is being legitimized in the region and how it could help cure a legitimization crisis that human rights activists face there. Adopting a Global South perspective here means providing an alternative narrative that is not typically in the mainstream of explainable AI discourse. The term “Global South” itself may arguably be an outdated term that alludes to a simplistic dichotomy between regions of the world,5 but it is used here theoretically to provide some subaltern contribution to AI explainability and to advocate for an inclusive appreciation of explainable AI that accommodates views from sub-Saharan Africa.6

I interviewed several AI experts and met with many local respondents who, to my surprise, had deployed AI tools in their work.7 A few of them were bureau de change dealers in the capital of Nigeria who used an AI application to determine black market exchange rates and forecasts, drawing on real-time data feeds of current market rates from various informal sources as well as charts and graphs to visually represent fluctuations in exchange rates. Another set of respondents were dairy farmers in Kiambu County of Kenya, who used machine vision and image recognition software to detect diseases in cows and suggest treatment options. These farmers (with help from technical operators) uploaded pictures of their cows directly from their farms to a mobile platform, which provided them with disease analysis and treatment suggestions. Then there were gold miners in the Witwatersrand Basin of South Africa who used an integrated system of sensors to monitor the structural stability of the mines and an algorithm that analyzed both sensor data and equipment data to flag potential issues, thereby improving deep mining optimization and safety.

I was impressed that people in remote areas across these three subregions had begun relying on AI tools in beneficial ways, disproving the generalizations made only a few years ago that Africa had yet to enter its AI revolution. However, I was concerned that hardly any of the aforementioned users understood how these AI systems functioned or had received any explanation of how the software reached outcomes. For example, some of the farmers in Kenya expressed that they had since grown keenly interested in knowing about the operations underpinning the “magical” image-parsing algorithms they relied on, so they could perhaps elicit some rational explanation to fully trust them, but they had no way of drawing out such explanations. Even the local technical operators who assisted them with operating the phones could not explain the outputs from the AI systems. Also, some of the bureau de change agents in Nigeria wondered about getting information as simple as the names of the people who built the system so that when customers asked them about the source of their forecasts, they could perhaps rely on the prestige of the developers, but this seemed far-fetched. It is worth noting that, in such situations, the number of translations that would have to occur between those who developed the models and those who are impacted by their use may make explainability difficult. However, even if the explainability of these models were technically possible, achieving it in practice may require more than an engineering solution. After all, the bureau de change agents still needed trust to be established, perhaps by the financial analysts behind the app’s models, before they could have confidence in its forecasts, rather than relying only on the strength of the system’s predictive performance.

Perhaps what made it more difficult for the users of these tools was that the AI tools they relied on were not developed in the region and had no local interlocutors to rely on. If a greater degree of localization were present, perhaps the developers of these tools could be easily reached to help these users find some answers, and the developers themselves could, in turn, have a better grasp of their users’ needs—considering that, unlike other technologies, AI outcomes can be dynamic and can change over time as the system learns from and adapts to new data and interactions. This interplay (or lack thereof) also highlights a consequence of the positioning of most African countries as only users or consumers of these technologies. This disparity is a much bigger problem than solely the need for explainability, but it does make explainability (among other principles) correspondingly more challenging.

Most of the AI systems I encountered in the subregions were imported, and some were even hosted abroad—mainly in the United States, with some in Canada, the United Kingdom, China, Germany, Israel, and surprisingly Estonia. Imported AI systems like these can, of course, offer immense value. But AI systems’ benefits are undermined if their design insufficiently accounts for relevant cultural contexts, or if their models are not optimized for a given region. Moreover, explanations provided by such AI systems may not align with local contexts, practices, or needs. Such misalignment in AI model design or explanations can make it difficult for users in the region to comprehend the basis of the AI system’s decisions, undermining their effectiveness and limiting users’ ability to trust or effectively interact with them. Some local users I interviewed expressed frustration when the software provided information that was not contextually applicable. For example, the software used by the bureau de change dealers described the Nigerian currency as “other” (despite having currency depictions for more than 40 currencies). Such limitations—along with more basic challenges, like outputs being in foreign or uninterpretable languages—caused some confusion among initial app users and allowed room for miscalculations, hindering customer trust in the software.

To cite another example, some of the cattle herders in Kenya had cows of the Boran and Sahiwal breeds. They complained that the machine vision software regularly misdiagnosed these indigenous breeds of cows. The image recognition software often labeled them as undernourished because they were petite with a lean build (a natural adaptation to their environment, as they often walked long distances for grazing and required less feed intake). The so-called optimal weight template displayed on the platform was based on Western Holstein, Angus, or Hereford breeds, which typically are larger and selectively bred for meat or dairy. Therefore, the insufficient representativeness of the data used to build these AI models may be partly responsible for why these systems did not effectively capture the diversity or complexity of the real-world scenarios that they were expected to handle. Because of such gaps, outputs may become less transparent and more challenging to explain. For the herders, trust in the model was undermined by a lack of clarity about what was causing such misdiagnoses. If the model were better set up for explainability, they could perhaps dig in to understand how much of the diagnosis was likely due to actual undernourishment, and how much was a result of mislabeling of weights. If the image recognition software were built to explain its functionality better, that could allow the cattle herders to trust its predictions more; but as it was, they had a sense that the model was not adapted to meet their local context, yet they had no real way to confirm whether that was the key issue.
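As an illustration of why breed context matters for both accuracy and explanation, the sketch below labels an animal against a breed-specific reference range rather than a single Western template. The breeds, weight ranges, and thresholds are rough assumptions for illustration only, not values from the software the herders used.

```python
# Illustrative breed-aware check for the "undernourished" label.
# Reference weight ranges (kg) are rough, assumed values for illustration only.
REFERENCE_WEIGHT_KG = {
    "holstein": (580, 750),
    "boran":    (300, 450),
    "sahiwal":  (320, 500),
}

def body_condition_label(estimated_weight_kg: float, breed: str) -> str:
    """Label an animal against its own breed's reference range rather than
    a single (Western) template."""
    low, high = REFERENCE_WEIGHT_KG.get(breed, REFERENCE_WEIGHT_KG["holstein"])
    if estimated_weight_kg < low:
        return "possibly undernourished"
    if estimated_weight_kg > high:
        return "above typical range"
    return "within typical range for breed"

# A 380 kg Boran cow is healthy for its breed but would be flagged
# if judged against a Holstein template.
print(body_condition_label(380, "boran"))     # within typical range for breed
print(body_condition_label(380, "holstein"))  # possibly undernourished
```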

Explainability would not fundamentally fix problems with the opacity of many AI models, but it would help make users (and developers) aware of such problems and the extent to which they affect outputs in different contexts.8 Missed opportunities for interaction with end users meant that avenues for leveraging user feedback to enhance explainability went unexplored. The Kenyan herders had no opportunity to seek clarification on the weight template of the cows displayed on the portal or to provide feedback on the software regarding the breeds of their herds. As a result, a few of them said they found it challenging to justify treatment suggestions for nutritional rehabilitation and dietary supplements due to the system’s opacity, even when the treatment suggestions may have been well-founded. What is the value of an explanation (no matter how useful or intelligible) if these cattle farmers or bureau de change dealers have reasons to believe that the explanations are not tailored to them and their situations? These examples reflect issues broader than explainability, but they typify how a gap in contextual relevance could hinder trust in AI and undermine user confidence in the technology, the very trust that explainability is meant to enhance. These are only two examples of several that exist in the field.

Subaltern Considerations and the Advancement of Explainable AI

A technical problem can be addressed with both technological and nontechnological interventions. Regarding the latter, the idea of incorporating human explainers as intermediaries and midwives between AI systems and users is worth exploring. This concept (different from the human-in-the-loop intervention for AI oversight) draws upon a deeply human tradition of guidance and informed interpretation that is vital in many African societies and indigenous communities.9 For example, in countries like Mali and Senegal, griots have long been central to the preservation and interpretation of their societies’ cultural knowledge and histories. For centuries, they have helped to interpret the context, relevance, or implications of various events, histories, and traditions in ways that local communities and the public can understand and appreciate. They mobilize knowledge using skills of music, storytelling, mediation, and even advocacy to translate information for their communities.10 In present-day Senegal, some griots are using new technologies and helping their local communities understand new digital media tools with explanations that are creative and culturally contextualized.11

In the context of AI, such jalis or griot-like figures may help convey factual content and interpret the context, relevance, and ethical implications of information processed by AI systems. They could translate outcomes from AI systems into meaningful narratives that align with their communities’ cultural contexts. In my fieldwork, I found what I will call human “AI explainers” in Tanzania acting in a griot-like capacity. There, a local women’s health nongovernmental organization (NGO) was supported with a mobile-optimized AI system that automatically interpreted ultrasound videos, helping local indigent women who might not have had financial access to specialist centers or who would have had to travel a long distance or wait a long time to access ultrasound services. This AI tool (less intricate than the big ultrasound scanners in radiology labs) required Bluetooth-enabled, wand-like ultrasound probes to be swept across these women’s abdomens several times; the reflected sound waves captured by the probes were then translated into digital images or videos. After the video and image data was preprocessed, the AI software extracted relevant features, providing these women with insights about their pregnancies in infographic form via handheld tablets. These women were thrilled to see simulations of their babies on the tablets, a feature that perhaps promoted emotional bonding between the women and their fetuses.

Even in its beta phase, this AI system relied on edge detection and pattern recognition (from images and videos collected from these women over time). It integrated advanced imaging with an AI algorithm to learn patterns and classify images for the purpose of providing simulated outputs. Despite its limited functionality, this AI tool seemed faster and accessible to more women, and it did not require as much technical support as the traditional alternatives. However, when the software interpreted fetal measurements and indicated potential concerns but could not explain the basis for its conclusions, or when different assessments in successive ultrasounds were made without explanations for the changes, the AI system left some women worried or distrusting its accuracy. The staff of the NGO (mainly local midwives and public health specialists) requested some training on the AI system from the software’s development team (mainly foreign machine-learning engineers, data scientists, and sonographers). The NGO learned about the data processes and management surrounding the model, including the software’s user interface, user experience, and performance metrics. Now, with a considerable understanding of the AI system’s functions and limitations, coupled with their established knowledge of maternal health issues and practices in the community, some of the NGO’s members (along with their volunteers) serve as human AI explainers.
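For readers unfamiliar with the techniques mentioned above, here is a minimal sketch of the edge-detection step, applying Sobel filters to a simulated grayscale frame. The frame is an assumption, and a real pipeline would pass such features to a trained pattern classifier rather than stop here.

```python
import numpy as np

def sobel_edges(frame: np.ndarray) -> np.ndarray:
    """Approximate edge magnitude of a 2D grayscale frame with Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose of the horizontal one
    h, w = frame.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            window = frame[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * kx)
            gy[y, x] = np.sum(window * ky)
    return np.hypot(gx, gy)  # combined edge strength

# Example: extract edges from a simulated ultrasound frame.
frame = np.random.rand(64, 64)
edges = sobel_edges(frame)
print("Mean edge strength:", edges.mean())
```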

These explainers were physically present to detail the rationale behind specific measurements and outcomes; they also provided basic contextual explanations of possible implications and errors (using culturally applicable anecdotes), and they indicated cases where the women might need standard follow-up procedures. Moreover, with the opportunity to describe the tablet simulations in Swahili and to even answer pregnancy questions unrelated to the AI tool, these griot-like explainers aided the patients’ comprehension of and trust in the use of the AI system; more than that, they helped fill in some explainability gaps that the technology’s designers may not have been able to anticipate. To address such gaps, it may be promising to consider adopting (or employing people in) explainer roles modeled after griots or midwives. These human AI explainers can even serve a purpose beyond explanation. They can provide a check against decontextualized or potentially biased interpretations of data, lending a human touch to the cold calculations of machine intelligence.

Moreover, some developers have complained that the expectation of explainability within AI governance is too far-reaching for an ethical requirement (as the construction may be broad, dynamic, and require domain specificity).12 In response, one possible solution to this challenge could be for developers to avoid creating general-purpose AI systems and instead focus on specialized AI applications, allowing for tailored approaches to explainability that are optimized for specific, well-understood climes. This could also reduce the risk of creating systems that, while potentially efficient, perpetuate cultural insensitivities or echo the dynamics of historical imperialism. Explainable AI should emphasize the need to consider the broader sociocultural contexts in which AI systems are deployed. Researchers at the University of Cape Town in South Africa are developing a new explainability approach built around a continuum of literacy levels and contexts.13 Instead of relying on words and information production alone, this approach uses simple visuals, interactive dashboards, and storytelling to explain AI systems’ outputs in an adaptable, self-explanatory way for local users. There is a need for a more participatory approach to developing AI systems and a more robust framework for explainability, which could ensure that people affected by a system’s decisions can challenge or change the outcome.

Conclusion

Regardless of how sophisticated AI tools are, they alone cannot achieve specific outcomes without people’s understanding, input, or intervention. Drawing a parallel to a cooking pot: no matter the pot’s utility, it requires cooks who know how to use it and what it can do so they can use it suitably (and perhaps safely) to cook their meals. Because people have different culinary needs, preferences, and cooking traditions, they should be the ones to control how the pot prepares their food. To do this, they require information on how the pot functions (with a manual that they can understand in their language) so that they may maximize the pot’s utility and trust it to cook their food well. In a broader sense, the highest-quality AI tools may help to solve some of the biggest problems in society, but they will be insufficient if they lack adaptability to the varying needs for transparency, interpretability, and explainability that a diverse range of people and communities have. Explainability is a process, not just an outcome. And even the best cooking pot will not cook everybody’s food in a suitable way without adaptations.

Along these lines, the African Commission on Human and Peoples’ Rights passed a resolution in 2021 to undertake a study on how AI systems will consider African norms and values, such as Ubuntu and the African communitarian ethos of community well-being, collective responsibility, and inclusivity.14 The goal is to drive toward developing and deploying AI systems that are transparent and understandable not just to individuals but to communities as a whole. Notably, this is the first supranationally mandated inquiry into how AI decisions can be made and explained in a way that respects communal values, cultural nuances, and social dynamics prevalent in local African societies. (This perspective may also challenge the often-individualistic approach of Western-centric AI models by advocating for AI systems that are not just technically explainable but also culturally resonant and ethically aligned with the values and structures that matter to their users.)

Explainable AI should be navigable. This does not negate the principle of universal design. Instead, explanations from AI systems should be accessible to everyone, regardless of geographical location (or physical, neurological, or cognitive differences). Explainability should, to the degree it is practicable, be adaptable to people’s idiosyncratic needs and capabilities, as one-size-fits-all explanations will hardly be satisfactory.

Notes

1 Organisation for Economic Co-operation and Development (OECD), “Transparency and Explainability (Principle 1.3),” OECD, https://oecd.ai/en/dashboards/ai-principles/P7.

2 BSA (the Software Alliance), “Comparing International Frameworks for the Development of Responsible AI,” BSA, https://ai.bsa.org/global-ai-principles-framework-comparison.

3 OECD, “Transparency and Explainability (Principle 1.3).”

4 William Worley, “We Don’t Want to Depend on Husbands, We Want to Help Ourselves,” Al Jazeera, January 29, 2019, https://www.aljazeera.com/features/2019/1/29/we-dont-want-to-depend-on-husbands-we-want-to-help-ourselves.

5 Stewart Patrick and Alexandra Huggins, “The Term ‘Global South’ Is Surging. It Should Be Retired.,” Carnegie Endowment for International Peace, August 15, 2023, https://carnegieendowment.org/2023/08/15/term-global-south-is-surging.-it-should-be-retired-pub-90376.

6 Alina Sajed, “From the Third World to the Global South,” E-International Relations, July 27, 2020, https://www.e-ir.info/2020/07/27/from-the-third-world-to-the-global-south.

7 I interviewed a total of 255 people mostly in Nigeria, Kenya, and South Africa. In Nigeria, in December 2020, April–June 2021, and November 2023. In Kenya and Tanzania, December 2020–February 2021 and August 2022–October 2023. In South Africa, in June 2022, July 2022, and June–August 2023. 

8 Upol Ehsan, Koustuv Saha, Munmun De Choudhury, and Mark O. Riedl, “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI,” Proceedings of the ACM on Human-Computer Interaction, April 1, 2023, https://dl.acm.org/doi/abs/10.1145/3579467; and Chinasa T. Okolo, “Towards a Praxis for Intercultural Ethics in Explainable AI,” arXiv, April 25, 2023, https://arxiv.org/abs/2304.11861.

9 Ge Wang, “Humans in the Loop: The Design of Interactive AI Systems,” Stanford University, October 20, 2019, https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems.

10 Caroline Iantosca, “Music and Storytelling in West Africa,” University of North Carolina, March 1, 2017, https://worldview.unc.edu/news-article/music-and-storytelling-in-west-africa.

11 Cornelia Panzacchi, “The Livelihoods of Traditional Griots in Modern Senegal,” Africa 64, no. 2 (1994): 190–210, https://www.cambridge.org/core/journals/africa/article/abs/livelihoods-of-traditional-griots-in-modern-senegal/F01249A98990637AF1940C69CC606173.

12 “A Canadian Perspective on Responsible AI,” Office of the Superintendent of Financial Institutions and the Global Risk Institute, April 2023, https://www.osfi-bsif.gc.ca/Eng/Docs/ai-ia.pdf.

13 “Structure and History,” Centre for Artificial Intelligence Research, https://www.cair.org.za/about.

14 “About,” African Commission on Human and Peoples’ Rights, https://achpr.au.int/en; and “Resolution on the Need to Undertake a Study on Human and Peoples’ Rights and Artificial Intelligence (AI), Robotics and Other New and Emerging Technologies in Africa ACHPR/Res. 473,” March 10, 2021, https://achpr.au.int/en/adopted-resolutions/473-resolution-need-undertake-study-human-and-peoples-rights-and-art.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.