Source: Valdai Club
The world was stunned when IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997. Twenty years later, the world-champion chess program Stockfish 8 was beaten in a 100-game match by AlphaZero, Google’s game-playing AI software. The big difference: AlphaZero taught itself how to play chess – in under four hours! As a side note, it won or drew all 100 games! Another example of the major leaps in 21st-century technology is Boston Dynamics’ Atlas robot parkouring through a landscape of boxes, captured in a video that is as incredible as it is somewhat scary. It does not take much imagination to see the potential for future military applications in both cases. That raises the question: is it already too late to regulate emerging technologies?
So far, the international community has regulated emerging technologies and corresponding weapons systems with three general purposes in mind: (1) to limit or prevent unnecessary or excessive human suffering, (2) to prevent certain actors from gaining access to certain technologies, and (3) to prevent all-out – that is, nuclear – war. Historically, the prevention of nuclear war is the youngest broad-ranging effort to regulate war technology. (The oldest known efforts date back to the Middle Ages, when Pope Innocent II banned the use of the crossbow against Christians in 1139.)
With the emergence of Weapons of Mass Destruction and International Humanitarian Law in the 20th century, efforts to regulate technological advancements became a norm that manifested itself in dozens of bilateral and multilateral international arrangements to limit or somewhat control nuclear, biological, chemical, missile, and other weapons and their corresponding technologies. Export controls and nonproliferation instruments, as well as arms control and disarmament agreements, came to be part and parcel of the international community’s toolkit for regulating military and dual-use technology.
The latest wave of technological breakthroughs in Artificial Intelligence (AI), drones, and genetic engineering (CRISPR/Cas) has triggered renewed interest in humanity’s ability to secure its own survival by commonly devising rules. But in order to regulate today’s emerging technologies successfully, we should first rid ourselves of a number of misleading assumptions. I address three of them below, without claiming that the list is complete.
One assumption, often voiced in multilateral fora such as the United Nations, is that today’s emerging technologies are vastly different because they emerge much more rapidly than previous inventions. The implicit concern is that international regulatory policies cannot keep pace. While the first assumption is at least questionable, the second is probably correct. But was it any different in the past? It took the international community almost a quarter century to devise global regulations against the spread of nuclear weapons. The Missile Technology Control Regime only came into being almost 50 years after the first long-range rocket launches. Arguably, the Cold War superpowers had to race to the brink first, in the Cuban Missile Crisis, before they could agree on a number of rules.
Seen from that angle, the UN’s current efforts to regulate so-called Lethal Autonomous Weapons Systems (sometimes called ‘Killer Robots’) before they become a dangerous feature of world politics, or the Nuclear Suppliers Group’s efforts to proactively address the potential proliferation impacts of 3D printing, are certainly positive signals. At a time of clearly increased demand for regulation, the more worrisome development is the retreat from multilateral diplomacy writ large of the one nation that has acted as the torchbearer of international regulatory efforts since the end of World War II: the United States.
Another assumption is that emerging technologies are challenging existing arms control and disarmament agreements. Again, the implicit concern seems to be that new technological inventions might contribute to the collapse of decades-old treaties. The acrimonious debates about hypersonic glide vehicles and their potential impact on the future of the US-Russian New START accord or the Russian claims regarding drones in conjunction with the end of the Intermediate-range Nuclear Forces (INF) Treaty are often cited as examples.
Again, the problem lies more in the political than in the technological realm. Disagreements about new weapons systems or new technological enablers are anything but new, and in the past, arms control-abiding nations have usually used technical working groups to address such problems and to adjust or modernize the relevant rules. Unfortunately, with the waning interest in arms control and, more broadly, military restraint in a number of nations, the appetite for cooperative solutions to common problems has also vanished. In that regard, we should not make technology the scapegoat for changed political interests.
Finally, we seem to assume that our strategic concepts and terminologies are still fit to address the effects of emerging technologies. One typical example is subsuming all things “cyber” under the “deterrence”/“arms control” dichotomy. To begin with, most cyber weapons are developed, stored, and employed in very clandestine environments. Unlike classical military equipment, which can be visibly deployed and used, e.g., for “signaling” or “assurance” missions, cyber weapons cannot be used that way without at least creating massive potential for “inadvertent escalation.” Moreover, the classical laws of “escalation” and “escalation management” do not work in an environment that cannot be defined in terms of “horizontal escalation,” or in which the attribution, and hence the “significance,” of “offensive action” remains unclear.
The same could be said about “escalation management” on a future battlefield inhabited predominantly by killer robots. Which actions would be significant enough to be viewed as “escalatory” if human suffering is not involved, or at least not initially? Try reading some of the classic works of Thomas C. Schelling or Herman Kahn with cyberspace, killer robots, or drone swarms in mind, and you will immediately run into serious conceptual problems. However, this does not mean that we should do away with those concepts altogether – certainly not as long as we continue to rely on old-fashioned nuclear deterrence.
Taken together, regulating emerging technologies is still possible, and a significant number of states are engaging in such efforts on a daily basis and in various fora. Nevertheless, if we take the challenge seriously, we will have to come up with some novel and certainly more creative thinking. Perhaps we should start by questioning our own fundamental assumptions about the nature of today’s emerging technologies and our understanding of how to ensure peace and security in the 21st century.