
Governing Military AI Amid a Geopolitical Minefield

The lack of an international governance framework for military AI poses risks to global security. The EU should spearhead an inclusive initiative to set global standards and ensure the responsible use of AI in warfare.

Published on July 17, 2024

The absence of a comprehensive global governance framework for military artificial intelligence (AI) presents a perilous regulatory void. This gap leaves a powerful technology category unchecked, heightening risks to international peace and security, escalating arms proliferation, and challenging international law. Governments worldwide are competing for leadership in emerging and disruptive technologies (EDTs) and grappling with the profound and transformative implications of AI. Meanwhile, corporate tech players have joined a trillion-dollar arms race in generative AI, jockeying for venture capital investment in foundation models. In the battle for economic supremacy and the competition over ethical standards, the global balance of power is precarious and the stakes are high. 

The geopolitical landscape is rife with tensions, as states and corporate giants vie for dominance in AI. There is therefore a sense of urgency among international organizations, scientists, and researchers, prompted by the prospect of runaway AI developments, including disruptive applications in the military domain. If indeed AI poses an extinction-level existential threat to humankind akin to the atomic bomb, as many in the field claim, the absence of a universally accepted global governance framework for military AI is a crucial concern. While this future Oppenheimer moment is worrying, the present risk of mission creep is more troubling: AI systems initially designed for specific civilian tasks can be repurposed to serve military objectives.

These scenarios underscore why it is important for the EU to better position itself with clear normative and strategic options to respond to changes in the AI ecosystem and the geopolitical landscape. These options include enhanced governance of military AI, as the EU has an essential role to play in shaping safeguards for high-risk uses of AI and thereby promoting global norms and ethical standards. Standing at the confluence of EDT advancements and rising great-power competition in AI, the EU faces the daunting challenge of navigating a geopolitical minefield. That is why the union should take a leading role in supporting the building of a coalition for the global governance of military AI. 

A Military AI Gold Rush 

Major powers, such as the United States and China, are locked in a high-stakes competition for military technology. China’s 2019 white paper on national defense championed the theory of “intelligentized warfare,” in which leveraging AI is vital to the modernization plans of the People’s Liberation Army. The United States, meanwhile, has focused on curbing China’s access to advanced semiconductors that are crucial for AI models, amid fears that such access would bolster Beijing’s bleeding-edge military AI capabilities. Yet, it is unclear whether such efforts serve U.S. national security interests, the promotion of international peace, or both. What is certain is that attempts to control AI through export restrictions on cutting-edge chips have drawn parallels with nuclear nonproliferation strategies.

But translating Cold War–era models to the digital age is not straightforward. OpenAI, a global leader in AI research, has called for an AI oversight body similar to the International Atomic Energy Agency, which monitors nuclear activities—a proposal that was supported by UN Secretary General António Guterres. A global multilateral treaty similar to the Nuclear Nonproliferation Treaty, one that stigmatizes governments seeking strategic advantages from perilous military AI developments, also holds theoretical merit. But unlike nuclear arms, AI technologies are highly fluid, raising critical questions about proposals for an AI safety research and development program akin to the Manhattan Project, which produced the first nuclear weapons during World War II. The rapidly evolving nature of AI calls into question the adaptability of nuclear nonproliferation frameworks more generally. Furthermore, as AI is a general-use technology category, distinguishing between its civilian and military uses is far harder than governing physical items like nuclear weapons.

Even while advocating greater oversight of military AI, OpenAI in January 2024 revised its usage guidelines to lift restrictions that had explicitly barred the use of its technology for applications linked to “weapons development” and “[the] military and warfare.” This shift in corporate policy marked a significant departure, as it allows for military applications of OpenAI’s advanced AI systems. It reveals the company’s evolving stance on the ethical boundaries of using its technology for security and defense purposes. It further illustrates how new forms of corporate nonstate sovereignty exercised by tech giants, compounded by the lack of global governance guidelines on military AI, are allowing firms like OpenAI to venture into uncharted waters.

Russia’s ongoing war in Ukraine has already showcased how AI is shaping military strategies and national security. Dubbed an “AI war lab” by Time’s Vera Bergengruen, the conflict has seen civilian tech firms experiment with AI tools and play critical roles in military operations. Private companies like Palantir and Clearview AI have become pivotal actors on the battlefield by providing data analytics for drone strikes and surveillance. Such ventures raise concerns about the increasing militarization of AI as well as the ethical and legal responsibilities of the private tech sector during conflict. Israel’s algorithmic warfare and use of AI targeting systems in Gaza with little human oversight further illustrate the ethical, international legal, and strategic dilemmas posed by military AI.

When it comes to governance frameworks, the EU’s AI Act, the first-ever legal framework to address the risks of AI, represents a bold attempt to set a global standard for AI technologies. Yet, the act explicitly excludes military AI from its purview. This exclusion, while justified by EU member states’ sovereign right to ensure their national security and protect their cutting-edge defense capabilities, underlines the urgent need to institutionalize stringent EU and international norms for military AI. This would require strong governance guardrails around these technologies, whose dual-use nature blurs the lines between civilian and military applications.

What is more, the argument that military AI should be governed solely at the national level because of security concerns is becoming increasingly feeble, for two reasons. First, these technologies have implications that transcend national borders, making coordinated governance and oversight necessary. Second, existing frameworks of defense cooperation at the EU level, such as the European Defence Fund, already demonstrate EU member states’ capacity for coordinated and integrated efforts to address complex security and defense challenges.

The Double-Edged Sword of Military AI 

Because of AI’s general-purpose nature and enabling characteristics, military AI encompasses a wide array of tools and applications, from lethal autonomous weapons systems (LAWS) and drones to cybersecurity and strategic decisionmaking, among others. The concept of military AI demands deeper critical scrutiny, as it is often narrowly interpreted as a race to develop and deploy LAWS and is associated with popular depictions of these so-called killer robots. Indeed, at the UN level, military applications of AI have largely been debated in the context of the Group of Governmental Experts on LAWS, specifically in terms of meaningful human control (MHC) over such systems. 

Yet, the military applications of AI extend far beyond LAWS, requiring broader and more substantive discussions to tackle unique, unaddressed challenges. AI has the potential to shape nearly every facet of warfare, including defense innovation, industry supply chains, civil-military relations, military strategies, battle management, training protocols, forecasting, logistical operations, surveillance, data management, and measures for force protection. For instance, AI’s role in enhancing cybersecurity and informing strategic decisions highlights its dual-use nature, complicating regulatory and governance efforts. 

Military AI is also a double-edged sword for national security. On the one hand, it can act as a force multiplier: the strategic enabler and the “most powerful weapon of our time,” in the words of U.S. Cybersecurity and Infrastructure Security Agency Director Jen Easterly. As a coveted technological solution, it promises enhanced operational efficiency, significant strategic and tactical advantages, precision targeting, speed in processing vast amounts of data, decisionmaking support, and logistical advancements. On the other hand, there have been numerous examples of AI failures, sometimes with fatal results, from software failing to recognize or misidentifying people to deadly crashes involving self-driving cars. More worryingly, adversarial techniques could conceivably be used in a conflict to alter a targeting assistance system’s source code so that, for example, it identifies school buses as enemy vehicles. Ultimately, although it stands to reason that AI will yield military and strategic advantages, it is not yet clear precisely what kind—or what benefits might be offered by advanced AI models over lesser ones.

Finally, there is a growing consensus on the importance of MHC over military AI in warfare, but the conceptualization and practical implementation of this concept remain highly contested. Experts have shown that the growing integration of autonomy and automation into the critical functions of LAWS makes human control over specific use-of-force decisions increasingly meaningless because of the speed, complexity, and opacity with which such systems operate. This situation also raises key ethical questions about hybrid human-machine teaming and the alluring quest to create human-machine cognitive architectures that may enable entirely new forms of warfare. With such increasingly complex systems, the prospect of compounding errors with devastating consequences intensifies.

The Quest for Global Governance

The lack of a global governance framework for military AI therefore represents a critical and perilous regulatory gap, as it leaves these powerful technologies unchecked and poses significant risks in terms of global security, arms proliferation, and international law. Furthermore, AI’s digital nature complicates hardware-based controls, governance measures, and regulatory interventions. The lessons learned from nuclear arms control provide a road map for global action, but adapting them to the digital realm of AI is a challenge that will require out-of-the-box thinking. Compared with nuclear weapons, AI models can be more easily copied and disseminated, and AI risks, while potentially catastrophic, remain speculative and hyped compared with the tangible devastation of the atomic bomb. This uncertainty, coupled with differing national security interests among major powers, hampers the creation of a robust international consensus. 

Yet, despite such obstacles, diplomatic efforts to govern military AI are intensifying. In 2023, three notable initiatives emerged, marking pivotal moments in the global discourse on responsible AI in military contexts. The first was February’s summit on Responsible Artificial Intelligence in the Military Domain (REAIM), hosted by the Netherlands in collaboration with South Korea. On the summit’s final day, the United States launched its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. And in December, UN member states voted in favor of a resolution on the dangers of LAWS, acknowledging the “serious challenges and concerns” raised by “new technological applications in the military domain,” including those related to AI and autonomous weapons systems. The UN secretary general has called for a legally binding treaty by 2026 to ban LAWS that operate without human control or oversight and cannot comply with international humanitarian law. Such a treaty should also establish regulations for all other types of autonomous weapons systems.

While significant, the REAIM process, the U.S. political declaration, and the UN resolution are all limited in their ambition, scope, and ability to operationalize a comprehensive global regulatory regime. The three initiatives seek primarily to cultivate nonbinding coalitions of the willing centered on shared yet watered-down or abstract norms for military AI. The approaches differ mainly in the composition of the coalition they propose, the military technologies they target, and the governance vision they have adopted. The United States has strategically positioned its declaration as a response to perceived threats from its near-peer competitor China, aiming to set out from the top down what constitutes irresponsible state behavior in the development and use of AI. Conversely, REAIM represents a bottom-up, normative, and less politicized approach to knowledge building, and its most important difference is its multistakeholder engagement.

In this context, the EU must take a leading role in promoting inclusive, global frameworks for military AI governance and intensifying international norms-promotion efforts via platforms such as the REAIM and UN processes. The EU also needs to strengthen its partnerships with states and corporations to align governance regimes, foster global stability and security, and collaborate with key partners on shared norms and standards. The bloc is uniquely placed to pursue these goals thanks to its pioneering AI Act, which it should leverage to foster international consensus, spearhead global norms, and curb the military AI arms race. In doing so, the EU’s goal should be to advocate the responsible development and use of these technologies while emphasizing that human dignity and human rights must be respected in all defense-related activities. This approach would also require an EU strategy to prohibit the use of LAWS, as decisions to use lethal force should always be made by a human who exercises judgment and meaningful control in line with the principles of necessity and proportionality. Overall, intensifying norms-promotion efforts and partnerships can establish the EU as a normative and regulatory power on the international stage that can contribute to setting global benchmarks for the responsible use of AI in military contexts.

Conclusion 

The EU’s engagement in the global governance of military AI is not merely a regulatory challenge but also a foreign policy, moral, and strategic imperative. Although security and defense are not EU competencies, the union cannot ignore the profound implications of the development and proliferation of military AI. To curb the global spread of these technologies, EU leaders need to build strategies for responsible military AI, including stronger multilateral, minilateral, and bilateral partnerships to align governance regimes.

The EU also needs to ground multistakeholder coalition building and norms promotion in a shared vision of responsible military AI. One way ahead is for the EU to establish a framework for dual-use and military AI applications, drawing on the AI Act’s tiered approach to risk assessment. An EU-wide strategy would guide European military organizations and defense industries to approach these technologies responsibly. Overall, spearheading a comprehensive global governance framework for military AI is a monumental but essential task to defend the future, and it requires sustained multistakeholder advocacy and international cooperation.

Carnegie Europe is grateful to the Patrick J. McGovern Foundation for its support of this work.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.