A few years ago, I visited Ashesi University in Ghana to help with a workshop. I was there in my official capacity with the U.S. Agency for International Development (USAID), working with researchers to support Ashesi’s computer science students as they explored how concepts of trust and fairness in machine learning play out in the Ghanaian context. One of the speakers shared an example of a machine learning application that had been optimized to assess the insurability of drivers in suburban America, using data on their driving patterns. The application (quite predictably) failed miserably when deployed in a West African context. The anecdote was amusing to us all. The implications of what it signaled were not: consequential decisions were being made at an international scale on the basis of a myopically U.S.-centric design.
My formal training in the hard sciences had treated technology as a neutral tool, one that afforded me the luxury of arriving at an objectively right or wrong answer. But technology represents choices made by human designers, and those choices are inevitably influenced by a designer’s assumptions, worldviews, and values. As I transitioned out of academic physics and into technology policy and international development, I quickly found that my work not only affirmed this lack of neutrality but also reinforced how important it is that policymakers understand what that means at a global scale. Context, as they say, is king, and all too often the context of apps, algorithms, designers, and users in less-developed countries is an afterthought to technologists and policymakers in more advanced economies, despite our interconnectedness and the increasingly rapid proliferation of technology-based products and services in emerging markets.
Digital and data-centric technologies offer tremendous potential to developing countries in terms of economic growth, innovation, transparency in civic engagement, and much more. But there is also the risk of adverse digital incorporation and the danger that these technologies will perpetuate or exacerbate power asymmetries, both within and across national boundaries. During my time with USAID, I was proud to lead policy efforts to shape the U.S. government’s approach to the responsible, inclusive development and deployment of technology in low- and middle-income countries, including USAID’s first-ever Digital Strategy and Artificial Intelligence Action Plan.
Building trustworthy technology is possible when policies, and their implementation, reckon with the many ways that society has struggled to establish trust in an analog world. Our policy solutions must account for the myriad ways technology ends up being anything but neutral, and I’m thrilled to have the opportunity to work with colleagues at Carnegie and beyond to explore these issues.