Subnational jurisdictions are grappling with the tangible impacts that AI is beginning to have. Their efforts provide an important space for learning policy best practices.
Artificial intelligence (AI) policies and frameworks are developing rapidly at the national, international, and supranational levels, as well as at the subnational level. AI policy is a nascent field, and this “working guide” seeks as much to establish key areas and concepts to watch as to draw conclusions. But even now, some notable trends at the subnational level have emerged from the ongoing work.
First, while the past decade has seen an explosion in diplomatic engagement by subnational officials, a refinement of subnational diplomatic practices, and increased visibility for cities and states on the global stage, those practices have not yet translated to the AI policy space. Subnational officials are turning to national and international frameworks for guidance in developing their own AI policies, but they do not yet have influence over those frameworks. Here, they can learn important lessons from experts and policymakers who have worked on climate change, democracy, and sustainability and who have integrated local perspectives and solutions into international fora and agreements.
Second, subnational jurisdictions are themselves employing a wide array of approaches toward AI, even though subnational, national, and international actors have yet to build connective tissue on AI policy issues. Though subnational jurisdictions often share similar goals and are building on existing policies, their level of engagement with the technology differs dramatically. Some cities are experimenting with AI for traffic management, chatbots for service delivery, and analysis of public comment and participation. Others, such as Los Angeles, are engaging in extensive internal stakeholder consultation. Still others, including Seattle, are refining existing policies through extensive engagement with city residents. And naturally, there are those who remain in wait-and-see mode.
Third, across the spectrum of engagement, most subnational jurisdictions are involved in intense knowledge-gathering exercises that seek to develop a better understanding of both the technology and its possible implications for service delivery and policy priorities. Such efforts include creating inventories of use cases, developing sandboxes and new risk frameworks, and building partnerships with outside institutions. These efforts will shape policy for years to come, but for the most part they focus on governments’ own use of the technologies. For the larger, potentially seismic changes that AI may bring to societies and economies, a broader set of questions remains.
Fourth, and finally, there are important lessons to be learned from earlier subnational policymaking frameworks on other issues, including climate change and housing. In particular, climate change conversations have often focused on risks and options to mitigate them, while housing has increasingly been approached through a rights-based lens. Because knowledge of AI risks remains nascent and is still being built, and because rights regimes related to data and technology differ across jurisdictions, cities, states, and provinces are toggling between risk- and rights-based frameworks as they try to anchor their emerging approaches.
Though subnational policymaking mechanisms and authorities differ across national contexts, policy practices are beginning to emerge at the state/provincial and city levels, as is a spectrum of engagement with the technology itself. This “working guide” seeks to capture some of those practices, as well as the processes and philosophies that inform their development. It draws, in part, on learnings from an ongoing series of workshops co-hosted by Carnegie California and the Barcelona Centre for International Affairs (CIDOB). These workshops have included participants from industry and civil society; senior officials from the states of California and Utah, the region of Catalonia, and the cities of Los Angeles, Carlsbad, Long Beach, Seattle, Boston, and Barcelona; and representatives of Eurocities and the United States Conference of Mayors.
The explosion of public attention in 2022 to new generative AI capabilities enabled by large language models (LLMs) hastened the need for, and quickened the pace of, policy innovation. Cities, states, provinces, and regions have been engaged with AI for years, and many have well-developed policies around privacy and data use. But LLMs and generative AI, which can be used for content creation, natural language generation, and creative tasks, have expanded the horizons of AI applications beyond traditional rule-based and analytical functions.
Subnational jurisdictions have a decade of experience in developing policies and governance around big data and artificial intelligence. Some of these policies are applicable to newer forms of AI, but political contexts and policymaking processes have evolved radically, as has the technology. With regard to policymaking, we have captured four broad goals and a series of evolving practices in pursuit of them. Though the goals are broadly shared, the level of engagement and the discrete approaches to advance them vary significantly.
There is an experimental, innovative, even chaotic pluralism to the emerging approaches at the subnational level, and they are by no means captured in their entirety here. Governments are engaging across these categories at different speeds and with different sequences of priorities. As such, engagement with AI spans a wide spectrum, from active experimentation with the technology to extensive internal stakeholder consultation to incremental refinement of existing policies to full wait-and-see mode.
Subnational AI policymaking occurs in the context of national, international, and supranational frameworks and regulations that are also developing, though often at a slower pace. National governments are taking the lead on questions of catastrophic risk and national security in AI policy. More likely than not, they will also lead on related questions of electoral processes and integrity. These policies and frameworks matter for subnational policymakers, who look to them for broad guidance, standard setting, and even ethical frameworks for their own policymaking. Although such developing AI governance regimes cannot be captured in their entirety, some of the essential national, international, and supranational frameworks are referenced below.
National policies, regulations, and uses of AI are also rapidly evolving and diverse in nature. Approaches range from informal guidelines to AI reporting requirements to outright bans. Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) found that since 2016, countries have passed 123 AI-related bills.
An increasing number of international forums are attempting to advance global frameworks for AI governance. These include the United Nations’ High-Level Advisory Body on Artificial Intelligence, the US-EU Trade and Technology Council, the Global Partnership on Artificial Intelligence (GPAI), and the Organisation for Economic Co-operation and Development (OECD). These international organizations seek to create a framework for global AI policymaking that establishes norms, mitigates risk, and inspires responsible collaboration between the private and public sectors. Numerous organizations, including the Carnegie Endowment for International Peace, Google DeepMind, and the World Economic Forum, have also proposed global AI governance frameworks. A Carnegie Endowment proposal, for example, calls for a new organization, the International Panel on AI Safety (IPAIS), to target the most urgent AI governance challenge: safety and security. This organization, inspired by the Intergovernmental Panel on Climate Change (IPCC), would have a deep technical understanding of current AI capabilities and the relevant safety and security risks. The panoply of efforts around AI global governance and leadership continues apace.
First proposed by the European Commission in 2021, the EU’s AI Act regulates AI within EU member states. The act encourages the development of AI technologies that align with European values, emphasizing the importance of ethical AI deployment while fostering innovation and competitiveness in the EU AI landscape. It outlines rules for high-risk AI systems, including mandatory requirements for transparency, data quality, and human oversight. The act assigns applications of AI to three risk categories based on the potential danger these applications pose: unacceptable-risk applications, high-risk applications, and limited- or low-risk applications. It bans AI applications that pose the most significant risks to safety and fundamental rights. Enforcement, importantly, lies with national governments, not subnational ones. Subnational players, through networks like Eurocities and platforms like the Committee of the Regions, have had a largely consultative role in the policy process around the act.
Subnational governments are moving quickly to adopt or adapt established frameworks and policies around AI. Many, though not all, acted before their national counterparts. There is no uniform approach to how AI should be utilized or regulated, and subnational governments are at different stages of policy development. Different subnational jurisdictions, idiosyncratic and diverse, approach AI with differing degrees of comfort and fear, “anxiety and excitement.” Along that spectrum, subnationals have pursued various actions, ranging from interim guidelines and internal IT policies to executive orders (EOs) and legislation.
As demonstrated in the CIDOB Atlas of Urban AI, a map and repository of city initiatives to regulate the use, development, and application of AI, cities are fertile ground for testing the benefits of technology and mitigating risks through policy entrepreneurship. The atlas, which tracks 165 initiatives across 63 cities, reveals that even though many cities are innovating on AI use cases, few have overarching strategies. Hundreds of cities, however, do have existing privacy, big data, and even machine learning policies. Many of these policies have been developed through collaboration and networks. The Cities Coalition for Digital Rights (CC4DR), for instance, was launched by Amsterdam, Barcelona, and New York City in 2018. It now includes 50 cities worldwide and aims to “promote and defend digital rights” in order to “ensure fair, inclusive, accessible and affordable non-discriminatory digital environments.” The coalition is not AI-specific, preceding the 2022 leap in AI by nearly four years, but it does focus on policy issues captured in the AI policy problem set, including data privacy, bias, and algorithmic transparency. Many cities, including U.S. technology hubs, are looking to the CC4DR for guidance on AI policy.
City practitioners have been examining state and national regulations for guidance while seeking to shape those frameworks with ethical principles and lessons gleaned at the local level. As subnational jurisdictions, cities, states, provinces, and regions share many of the same policy levers and goals and are therefore grouped together in the examples referenced below.
Rapidly developing policy processes occur in the context of preexisting policies, as well as nascent (or entirely absent) national and international efforts. For example, over the past decade the concept of the “Smart City,” now often conflated with commercial platforms, has introduced key concepts around data into policy processes and the public sphere. Some subnational governments are using these existing policies to manage the influx of policy questions arising from the introduction of emergent technologies.
According to a recent survey by Bloomberg Philanthropies, the vast majority of mayors (96 percent) are interested in how they can use AI to improve local government. Of the cities surveyed, 69 percent report that they are currently exploring or testing the technology to increase the efficiency of government services, including for data analysis (58 percent); citizen service assistance (53 percent); and drafting memos, documents, and reports (47 percent). Large majorities of cities reported that security and privacy (81 percent) and accountability and transparency (79 percent) are the key ethical principles guiding their exploration and use of AI. Cities are actively engaged in policymaking to ensure that AI, when used, is employed in a manner that reflects the preferences of their residents.
The interest of policymakers and city and state officials in engaging with AI may be well matched to the interests and concerns of their residents. In 2023, for example, Carnegie California surveyed Californians on their perspectives on AI. Echoing the international efforts underway, nearly 50 percent of Californians expressed support for an international agreement on AI standard setting. Meanwhile, around 40 percent of Californians said that local, state, and federal governments are “not doing enough” to respond to the potential benefits and risks of AI. The most common sentiment among Californians was a desire for more action on AI not just at the national level but also at the state and local levels.
What might that action look like? The following subsections capture emerging practices in four broad categories: experimentation with technology and new policies; explainability and accountability; procurement policies; and efforts to enhance understanding of the technology, and potential policies, within government.
These efforts stand in contrast to those of other subnational jurisdictions, which are taking more reserved approaches.
Explainability and accountability are critical themes in subnational AI policy development. By incorporating mechanisms such as public registries that hold both developers and users accountable for the outcomes of AI applications, policymakers seek to foster responsible and ethical deployment of AI technologies in local contexts.
Across multiple subnational contexts, governments have implemented general guidelines specifically for public sector procurement of AI. The public procurement process is not only a means to acquire technology but also a process by which cities can vet models for accuracy and anti-bias measures before implementing the technology in government services.
Subnational governments recognize the knowledge gap on AI that exists within public bureaucracies. A number of states, cities, and municipalities therefore seek to train staff and build internal expertise, as well as establish partnerships with external expert bodies.
As outlined in the EO, the California Government Operations Agency issued a report in November 2023 on “The Benefits and Risks of Generative Artificial Intelligence.” The report offered a use-case-focused comparison between “conventional AI” and “generative AI,” as well as a risk framework broken down into “shared,” “amplified,” and “new” risks, applied to issues such as labor impacts and privacy. Merging knowledge building and experimentation, the EO also directed the California Department of Technology to establish infrastructure for carrying out AI pilot projects by March 2024 and to set up sandboxes to test those projects so that state agencies can begin to consider their implementation by July 2024.
Just as the City of Boston attached the label “interim” to its AI policy, so too did the State of California note, in its recent report on benefits and risks, the “preliminary” nature of its findings and the “rapidly developing” nature of the technology itself. Subnational governments are learning quickly, connecting, if informally, and attempting to deliver for their residents.
Looking forward, the ability of subnational governments to develop policy locally, exchange best practices regionally and globally, and influence policy at all levels will be determined by a number of questions that bear watching: Which transnational platforms, such as the Frontier Model Forum or the G20, will emerge as the leaders, and how will subnational governments plug into them? How will the Global South, home to some of the fastest-growing urban areas and some of the leading voices in subnational diplomacy, most influentially enter the conversation on global AI governance? And, ultimately, which acute risks, as well as wider societal impacts, will emerge as the most pressing, and how might cities, states, provinces, and regions prepare for and organize around them?
Ian Klaus is the founding director of Carnegie California.
Ben Polsky is a consultant with Carnegie California.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.