Introduction to Understanding Cyber Conflict: 14 Analogies
In extensive conversations with senior civilian and military cyber policymakers in the United States, the United Kingdom, France, Canada, Israel, Russia, and China, the editors of this volume heard repeatedly that these individuals and their counterparts in government frequently invoke historical analogies—aptly and inaptly—as they struggle to manage new technologies. The cyber domain is new to most senior officials. Cyber capabilities have unique properties. Experience with them in conflict thus far has been limited. Consequently, it is difficult to make confident judgments about their effects and escalatory potential. Moreover, the range of adversaries and behaviors that policymakers and experts must strive to dissuade, deter, or defeat in and through cyberspace is unprecedented: massive-scale thievery, political subversion, terrorism, covert operations, and open warfare. In such circumstances the human mind naturally pulls up analogies from the past to guide thinking and acting amid the new.
One of our interlocutors, in early 2014, recommended that we read Cyber Analogies, a collection of essays edited by Emily O. Goldman of US Cyber Command and John Arquilla of the US Naval Postgraduate School.1 We took this advice and found that, indeed, those essays sharpened our thinking about differences and similarities between cyber and previous military technologies and episodes. Goldman and Arquilla encouraged us to extend the exploration of analogies, with an eye toward adding examples and perspectives that would be pertinent to readers beyond the United States. The result is the present volume, which includes four revised essays from their collection, plus ten chapters that we commissioned to explore additional analogies.
Human beings think, learn, and communicate through analogies. We use analogies—naturally, often without trying—to familiarize that which is new. As Richard E. Neustadt and Ernest R. May recorded in their classic study, Thinking in Time, policymakers and pundits regularly invoke analogies as they struggle to make sense of and affect new situations, often without adequate reflection.2 This practice occurs now regarding the cyber world, which is evolving at an ever-quickening pace. For people who were born in this era, the benefits and risks that flow from the enhancement and distribution of information and communications technologies are more familiar than the earlier technologies, episodes, and policy challenges to which their elders analogize. Young readers may know how hacktivists operate and how cyber attacks brought Estonia to a standstill in 2007, but they may be less familiar with the eighteenth- and nineteenth-century privateering at sea that resembles the challenges posed by proxy actors in cyberspace. Cybersecurity professionals may be convinced that the speed of offensive attacks will require automated defensive responses, but they may be unaware of how governments wrestled internally over the pre-delegation of authority to launch nuclear weapons under attack. Curricula today in courses on history, political science, international relations theory, and security studies still derive from pre–cyber era experiences; relatively few explore whether and how the cyber era may be similar or different. So, too, the strategies, policies, and institutions that governments use to manage dual-use technologies today generally predate the World Wide Web. Therefore, analogies across eras can be instructive for the young as well as for the not-so-young.
Variations in culture, ideology, and circumstances affect how audiences perceive and understand analogies. The authors of this volume are American, British, Israeli, and Swiss. The analogies to which they compare cyber technology and the challenges arising from it tend to be especially meaningful in their countries and, probably, in the West more broadly. We have tried throughout to keep the aperture wide enough to invite readers with different backgrounds to consider whether observations and analyses offered here do or do not apply more broadly. Moreover, readers from other locales and perspectives may gain insight from considering how these well-informed Western authors think about a given topic, even when that thinking differs from their own. In any case, the Cyber Policy Initiative of the Carnegie Endowment for International Peace hopes subsequently to build on the present volume and invite authors from other countries and perspectives to write about analogies that may be especially important to them.
Learning from analogies requires great care. Analogies can mislead as well as inform. Indeed, their educational value stems in no small part from identifying where, when, and how an analogy does not work well. Differences between technologies, effects, and historical, political, and strategic circumstances are as important to understand as similarities are. For example, today one must take particular care in analyzing which attributes of the nuclear era carry forward into the cyber era and which do not, and what the implications of confusion on this score could be.
Stanley Spangler, a professor of national security affairs at the US Naval War College, noted in 1991 that “virtually every postwar American president has been influenced by parallels drawn from the 1930s when Great Britain and France failed to react soon enough and strongly enough to halt [Adolf] Hitler.”3 President Lyndon Johnson, for example, declared, “Surrender in Vietnam [would not] bring peace, because we learned from Hitler at Munich that success only feeds the appetite of aggression.”4 Ironically, in ensuing decades the Vietnam War itself became a frequently used analogy in American debates over military intervention in other distant lands. In the 2015 debate over the Joint Comprehensive Plan of Action, which was negotiated to resolve the crisis over Iran’s nuclear program, critics made countless references to the Munich Pact of 1938 and Neville Chamberlain, while proponents invoked the need to avoid repeating the 2003 war in Iraq. The Iran debate of 2015, like the Vietnam debate, demonstrated the risk that analogies can be a flawed substitute for actual knowledge of the past and the present and for critical thinking about both. Nevertheless, people ineluctably employ analogies to conceptualize and manage new circumstances. Thus, it is necessary and salutary to examine analogies carefully and to search for what is apt and inapt in them.
We have organized the essays (and analogies) in this volume into three groups. The first section, “What Are Cyber Weapons Like?,” examines the characteristics of cyber capabilities and how their use for intelligence gathering, signaling, and precision strikes compares with pertinent earlier technologies for such missions. The second section, “What Might Cyber Wars Be Like?,” explores how insights from episodes of political warfare, preventive force, and all-out war since the early nineteenth century could apply or not apply to cyber conflict in the twenty-first century. The final section, “What Are Preventing and Managing Cyber Conflict Like?,” suggests what insights states seeking to civilize cyberspace might draw from earlier experiences in managing threatening actors and technologies. We introduce the essays here accordingly.
What Are Cyber Weapons Like?
The cyber domain—and its associated hardware, software, and human resources issues—is constantly growing and evolving. Information and communications technologies can serve manifold peaceful and coercive purposes in addition to providing legal and illegal means of generating wealth. In the context of interstate conflict alone, hundreds of analogies could be drawn and analyzed between cyber weapons and their predecessors. Capabilities and plans exist and are being developed further to use cyber assets in large-scale, combined-arms military campaigns. Cyber operations could be conducted to cause massive disruption and, indirectly, significant human casualties. A literature is already emerging on these larger-scale capabilities and scenarios.5 Essays in the second and third sections of this volume explore whether and how technologies and practices central to World Wars I and II and the management of nuclear deterrence offer insights into the conduct and prevention of cyber warfare.
Here, in this first section, we focus on analogues to less destructive capabilities. In an era when all-out warfare among major powers may be deterred by nuclear weapons, among other factors, and global dependence on networked information and telecommunications technologies creates unprecedented vulnerabilities, the instruments of stealth, speed, and precision that can be controlled from great distances will be particularly salient as states compete to influence each other in the coming years. These applications pertain to intelligence gathering, covert operations, “political warfare,” and relatively low-intensity, precise offensive actions. Such activities are especially germane to operations in the gray zone between declared war and peace, when large numbers of boots on the ground are not envisioned but exercising covert influence and coercive power is deemed expedient or necessary.
“What we call cyber is intelligence in an important sense,” Michael Warner writes in the first chapter. “Intelligence activities and cyberspace operations can look quite similar.” Warner, the US Cyber Command’s historian, describes how cyber capabilities have been applied rather straightforwardly to serve the functions of spying and counter-spying that human agents have performed for millennia. “The main difference,” he notes, “is the scale that can be exploited” by cyber techniques. Similarly, the use of cyber capabilities to conduct covert operations and to inform the planning and conduct of military operations builds on methods developed through the advent of the telegraph and radio in the nineteenth and twentieth centuries. The similarities here extend to the importance of cryptography and counter-cryptography to facilitate offensive and defensive missions. A key difference in the cyber era is that previously “the devices that secured and transmitted information did not also store it.” Today, however, past, current, and future data are vulnerable to spies and eavesdroppers in unprecedented ways. This raises several questions that Warner examines: Will cyber espionage be more likely to cause conflict than traditional spying has done? What can responsible states do to gain the benefits of fuller intelligence collection while minimizing the risks to international stability and their own reputations, as well as to the brand value of companies whose products they exploit?
“No one has ever been killed by a cyber capability,” write Lt. Gen. Robert Schmidle, Michael Sulmeyer, and Ben Buchanan in their chapter, “Nonlethal Weapons and Cyber Capabilities.” Schmidle, the deputy commander of US Cyber Command from 2010 to 2012, and Sulmeyer, formerly the director for plans and operations for cyber policy at the Office of the Secretary of Defense, have been deeply involved in US military cyber policymaking. Buchanan is a postdoctoral fellow at the Cyber Security Project in Harvard University’s Belfer Center for Science and International Affairs. Their chapter analogizes cyber capabilities to nonlethal weapons that the United States and other states have developed for decades. The Department of Defense defines nonlethal weapons—such as pepper spray, spike strips to puncture tires of vehicles, rubber bullets, flash bangs, electronic jamming devices, and lasers—as “weapons, devices, and munitions that are explicitly designed and primarily employed to incapacitate targeted personnel or materiel immediately, while minimizing fatalities, permanent injury to personnel, and undesired damage to property in the target area or environment.”6 In a first-of-its-kind analysis, the authors compare and contrast potential utilities of nonlethal weapons and cyber capabilities in four ways: their ability to incapacitate, the reduced collateral damage they inflict, the reversibility of their effects, and their ability to deter. Schmidle, Sulmeyer, and Buchanan also address an interesting paradox: Why have US defense officials been particularly reluctant to approve the use of nonlethal capabilities, and can this reluctance be expected to continue, in the United States and in other states?
Moving up the ladder of coercive power, James M. Acton, a physicist and the codirector of the Nuclear Policy Program at the Carnegie Endowment for International Peace, explores the analogy between precision-guided munitions (PGMs) and cyber weapons. The development of PGMs—guided gravity bombs and cruise missiles, in particular—“has had profound implications for warfare,” Acton begins. “Such weapons tend to cause much less collateral damage than their unguided predecessors do, and because they can remain effective when used from a distance, they can also reduce casualties sustained by the attacker. Thus, PGMs have altered national-level decision-making by lowering the political threshold for the use of force and by slowing the likely loss of public support during a sustained military campaign.”
Cyber weapons may extend the militarily, politically, and morally attractive logic and functionality of PGMs. Cyber weapons offer the potential of “exquisite precision” in terms of targets and effects, although this potential may be very difficult for many actors to achieve in practice. They involve “minimal risk to the lives of the service personnel who ‘deliver’ them” and are “likely to cause fewer civilian casualties than even the most carefully designed and executed kinetic attack.” As a result of these attributes, cyber weapons “could further lower the threshold for the use of force.” At the same time, the effective use of cyber weapons requires sophisticated intelligence, surveillance, and reconnaissance, as well as time-sensitive battle damage assessment. As with PGMs, it also remains questionable whether cyber weapons can accomplish larger, strategic political-military objectives. All this raises the fundamental question of whether cyber weapons will augment deterrence of military conflict or make conflict more likely.
Drones, or unmanned aircraft used to surveil and precisely strike targets on the ground, have been celebrated and reviled since their use by the United States became an open secret in the mid-2000s. Armed drones are a form of PGM. What has made them more controversial, and perhaps more analogous to cyber weapons, are both the secrecy that for a long time shrouded the decision-making surrounding their use and the perception that their operators’ immunity from physical harm lowers inhibitions on their use. David E. Sanger, the New York Times’ chief Washington correspondent and author of Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power, explores this analogy.
Sanger begins by recounting how outgoing president George W. Bush told President-elect Barack Obama “there were two programs he would be foolish to abandon”—the drone program and a super-secret program called Operation Olympic Games, which was designing an offensive cyber operation to disable centrifuges in Iran’s nuclear enrichment facility at Natanz. In the years that followed, Obama famously (or infamously to some) intensified the use of attack drones and authorized what became known as the Stuxnet attack on Iran. As Donald Trump stamps his imprint on US policy, he will need to grapple with the moral, legal, and strategic issues that these two types of weapons raise. What targets in what locations and under what circumstances are legitimate not only for the United States but for others too? What degree of confidence can realistically be attained that effects of cyber attacks (and drone strikes) will be limited to legitimate targets and will not cause unintended harm, or “collateral damage”? Many observers argue that drone strikes have incited escalatory revenge. Can cyber capabilities enhance deterrence of terrorism and other forms of aggression without this counterproductive effect? Sanger unpacks these issues by comparing and contrasting the nature and effects of drone and cyber attacks, and by drawing on the experience with drones, he considers how secrecy regarding cyber techniques and operations may affect prospects of governing them nationally and internationally.
What Might Cyber Wars Be Like?
The present conflicts in Ukraine and with the Islamic State of Iraq and Syria, along with the cyber-abetted interference in the 2016 US presidential campaign, may characterize prevalent challenges to peace and security in the twenty-first century, at least in cyberspace. At the same time, of course, the recent escalation in tensions between Russia and the West, and between China and its US-backed neighbors in East Asia, underscores the enduring importance of historical major power conflicts in continuing to shape perceptions and political discourse in the East and the West. Thus, the chapters in this section explore analogies from a wide span of history to draw implications for a range of confrontations and conflict contingencies that cyber-capable states may face and in which cyber operations may play a role.
In his chapter, Stephen Blank, of the American Foreign Policy Council, describes how Russia’s contemporary use of offensive cyber operations against Estonia (2007), Georgia (2008), and Ukraine (2014–15) is not merely analogous to but a direct continuation of the strategy and practice of Soviet subversion of neighboring states. He writes, “Tactics and strategies developed and employed during the Soviet period have served as a foundation for establishing new strategies that incorporate some of the century-old Leninist repertoire and new trends like IW [information warfare], as defined by Moscow, for the conduct of continuous political warfare against hostile targets.”
In describing the conduct of IW and cyber attacks in Estonia, Georgia, and Ukraine, Blank reports that Russia’s aim was to “instill a feeling of constant political and economic insecurity among the target state’s population” while testing whether and how European security institutions and the United States would respond. In Georgia and Ukraine, attackers believed to be linked to the Russian state penetrated and placed malware in electricity supply systems. When the Georgian conflict ended early, without Western intervention, no decision to execute destructive cyber attacks was made. In Ukraine, nationalists sabotaged electricity supply lines to (Russia-annexed) Crimea in November 2015 and cut off power there. Russian retaliation, prepared well in advance, was executed four weeks later in the form of a sophisticated, measured cyber attack that shut down three regional electric power distribution companies. Thus, as Blank details, cyber capabilities provide Russian actors with a spectrum of relatively inexpensive and risk-mitigating coercive instruments to impose Russian interests on adversaries below the threshold of violence that would prompt military escalation, especially by Western powers. “Russia has already engaged its adversaries in information warfare,” Blank concludes, “thus, its adversaries must understand and learn from it for their own security.”
Moving up the ladder of force, many assessments posit that offensive cyber operations would optimally be undertaken secretly, before armed warfare has commenced, to impair an opponent’s capacity to fight or to create facts on the ground that could motivate an opponent to stand down. In “An Ounce of (Virtual) Prevention?,” John Arquilla, the chair of defense analysis at the US Naval Postgraduate School, considers how the use of preventive force in the Napoleonic Wars and leading up to World War I may hold insights for the cyber era. Arquilla describes how the British navy in 1801 and 1807 conducted attacks on the Danish fleet, the coastal artillery emplacements, and the city of Copenhagen to prevent Denmark from colluding with Napoleon in closing the Baltic Sea to British trade. While the British attacks accomplished their tactical and strategic objectives, the exercise of preventive force also motivated Germany in the late nineteenth and early twentieth centuries to build up its navy to deny Britain the option of preventive force. Fast-forwarding a hundred years, Arquilla analogizes that the Stuxnet cyber attack conducted by the United States and Israel against Iran’s centrifuge program not only successfully slowed Iran’s acquisition of enriched uranium but also may have spurred Iran and future potential nuclear proliferators to take defensive measures that will make counter-proliferation more difficult in the future. Ultimately, Arquilla concludes, twenty-first-century states are likely to see cyber techniques and operations as useful for preventive force—including against terrorist groups—and will therefore compete offensively and defensively in this type of conflict.
Francis J. Gavin, an international historian and director of the Henry A. Kissinger Center for Global Affairs at the School of Advanced International Studies at Johns Hopkins University, addresses the issue of war instigation from a different angle, assessing whether and how the technology of railroads drove Germany, France, the United Kingdom, and Russia into World War I. Early historiography on the war posited that the European great powers’ reliance on railways to transport military forces to their borders placed a premium on deploying their forces before their adversaries did. Ambiguities about the purpose of mobilization—either offensive or defensive—exacerbated crisis dynamics. Moreover, the logistics of railway mobilization made it difficult to pause or reverse once it started. Consequently, according to early historiography, once mobilization began, it acquired too much momentum to be stopped in the amount of time that the complicated diplomacy to prevent war would have required.
Modern historians have corrected the overly simplistic determinism of the railway narrative, yet, as Gavin notes, this work has not prevented the notion of technological determinism from influencing conceptions of cyber warfare. Nor should it necessarily. Indeed, the military implications of major, globally infused dual-use technologies can and should be analyzed independently. Comparing their similarities and differences with prior technologies can be helpful in this regard.
In Gavin’s view, rail and especially cyber technologies are more facilitating technologies than they are instruments for killing adversaries, destroying their military assets, and occupying their territory. Both rail and cyber technology quickly spread around much of the world because they were vital to national and international economies, even as they also served military purposes. The economic indispensability of these technologies complicates efforts to control their military or other coercive uses. Both technologies condense the effects of space and time, making the world smaller and faster, which, in turn, dramatically increases the pressures on decision-making during a crisis.
Yet, as Gavin analyzes, differences between cyber technology and railways may be most instructive. In any case, looking from 1914 to the future of potential cyber conflict, a portentous question is whether states in tense regions possess the “institutional capacities . . . to deal with massively increased amounts of information coming from a variety of different sources and in an environment where cyber attacks might be oriented toward degrading and blinding” decision-making capabilities.
World War I offers another analogy to potential cyber warfare in the twenty-first century, as the British historian Nicholas A. Lambert considers in “Brits-Krieg: The Strategy of Economic Warfare.” Lambert, the Class of 1957 Chair in Naval Heritage (2016–17) at the US Naval Academy, fascinatingly describes how the advent of the telegraph and undersea cables enabled an unprecedented, global movement of goods, money, knowledge, and information that transformed international commerce. In earlier eras, traders purchased and stockpiled large amounts of goods. In the newly globalized system, traders relied on processes such as just-in-time delivery, credit-based purchase, and transfer of goods, all underpinned by new information technology. Britain was the hub of much of this global trade and finance. Realizing this, a few strategists in the Admiralty began in 1901 to consider how, in a time of war, Britain could leverage its dominant naval and commercial position to halt global trade and thereby cause a quick and devastating economic shock to an adversary’s economy and society, in this case Germany’s. Unlike the interdiction of ships and the preventive and attrition bombing of military-economic assets, “the British aim” would be “far higher: . . . delivering an incapacitating ‘knock-down’ blow that would obviate the need for less intense but more prolonged types of war.”
In the cyber era, an analogous act would be to use “cyber means as a weapon of mass destruction or disruption, targeting an enemy’s economic confidence as well as its infrastructure, with the aim of causing enemy civilians to put political pressure on their government.” For example, a sophisticated actor could corrupt the integrity of data and the processing algorithms in one or more major financial institutions in ways that would profoundly undermine the confidence on which modern international commerce depends. Yet, as Lambert recounts, the United Kingdom’s application of economic warfare at the onset of war in 1914 was so effective that it ultimately backfired and had to be abandoned. Trade plummeted, and with it went the well-being of British traders, financiers, and labor. “As the scale of the economic devastation [in the United Kingdom] became increasingly apparent, domestic interest groups became ever more vocal in clamoring for relief and lobbying for special exceptions, and neutrals [countries] howled in outrage at collateral damage to their interests.” Soon, “political commitment to the strategy began to crumble; more and more exceptions to the published rules were granted, thereby further undermining the effectiveness of economic warfare.” In October 1914, the government aborted the strategy. Readers can easily imagine how in the globalized, digitally intertwined world of today, a strategy to cause massive economic disruption through cyber attack could pose similar challenges. Not only would the intended object of the attack suffer enormously but so too would the attacking state if its labor force, employers, and treasury were dependent on global trade and finance. Lambert’s conclusion details some of these possible challenges and ways of anticipating them.
Pearl Harbor presents the most frequently deployed analogy to cyber warfare, at least in US discourse. In October 2012 Secretary of Defense Leon Panetta warned of a possible “cyber Pearl Harbor,” saying a malicious actor could launch devastating cyber attacks to “paralyze and shock the nation and create a new, profound sense of vulnerability.”7 Since then, cyber Pearl Harbor has become a recurring motif for officials, journalists, and experts warning of the dangers of a massive surprise cyber attack, especially in the United States. The image invoked is of a bolt-from-the-blue attack that catches defenders by surprise. Yet Emily O. Goldman, the director of the US Cyber Command–National Security Agency Combined Action Group, and Michael Warner clarify in chapter 9 that Pearl Harbor was not a surprise. “The United States was exercising coercive power to contest Japan’s occupation of China and other Asian states, and Washington expected war. Pearl Harbor was a logical, if misguided, result of Imperial Japan’s long-term strategy to expand its Pacific empire and blunt the United States’ effort to stop it.” Faulty American analysis and communication of intelligence data, and mistaken assumptions that the adversary (Japan) would calculate the risks of attacking as American personnel did, produced the sense of surprise. This observation makes what happened at Pearl Harbor even more salient for the United States and perhaps others today. Insofar as weaker actors embroiled in confrontations with powerful states may calculate, correctly or incorrectly, that a surprise cyber attack could temporarily weaken their adversary’s political resolve and military capability, they may see such an attack as the least bad alternative. By creating a fait accompli, with relatively few casualties on both sides, they could shift the burden of escalation to the stronger party to choose war rather than compromise. Goldman and Warner conclude that the United States and other states whose militaries, economies, and societies are extremely reliant on cyber capabilities should both increase their vigilance and create resilience in their military cyber networks. Unlike in the case of Pearl Harbor, the vectors of attack could run not only through military networks but also through privately owned and managed networks. This possibility greatly complicates the challenge of detecting, defending against, and responding to attack.
What Are Preventing and Managing Cyber Conflict Like?
Capabilities to conduct cyber information warfare, criminal activities (including terrorism), covert operations, and preventive military force are spreading faster than the international community’s capacity to establish agreed rules for managing them. This is normal; all major disruptive technologies have emerged and created challenges that states have then struggled for years and decades to regulate. These management struggles have been waged first on a national basis and then later, if at all, internationally. Cyber capabilities may emerge and evolve faster, and spread more extensively and quickly, than have antecedents such as nuclear power plants and weapons, air transportation, radio, and so on. Moreover, cyber capabilities are less geographically bounded than preceding technologies are. Nevertheless, the inherent interests of states and societies dictate that norms and rules for managing these new capabilities must be proposed, negotiated, and ultimately agreed on, even if their enforcement will be imperfect. Otherwise, the dangers and costs of threatening activities will be too severe for most states and societies to bear.
States have already begun to address the complexities of regulating the underlying technologies of cyberspace, including the Internet’s infrastructure. The struggle to establish rules for cyber capabilities and activities is intertwined with a broader, ongoing struggle over the governance of the Internet and the nature of sovereignty in cyberspace. This plays out in various formal bodies, such as the International Telecommunication Union, nongovernmental organizations including the Internet Corporation for Assigned Names and Numbers, and multi-stakeholder groups including the Internet Governance Forum. More tentatively, informal and formal efforts at various levels have begun to develop norms for the use of cyber weapons and the conduct of cyber conflict. Most notably they come from such groups as the G20 (or Group of Twenty), the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, and the participants in the Tallinn Manual on the International Law Applicable to Cyber Warfare project. Clear, internationally agreed-on rules remain elusive, but unilateral and multilateral initiatives can begin to reduce the risks of unrestrained cyber conflict. These efforts can be enlightened by past experiences in managing threats to national and international security.
In the first essay in this section, Steven E. Miller, the director of the International Security Program at Harvard University’s Belfer Center for Science and International Affairs, compares essential features of the nuclear era with those emerging in the cyber era. Miller notes that the nuclear age emerged publicly in 1945 with ferocious suddenness as nuclear weapons were detonated over Hiroshima and Nagasaki. The technology was born in secrecy, was militarized, and was tightly controlled—first by one government, the United States, and then by another, the Soviet Union. Civilian applications of the technology came later and never lived up to the advertisements of its progenitors. In contrast, cyber technology, notwithstanding its origination in the US defense establishment, quickly and widely took root and spread through commercial channels. Countless, often unpredicted, civilian applications of the technology have fueled economic growth and affected the lives of billions of people who have become dependent on them. Thus, the nature, purposes, and stakeholders associated with cyber technology are profoundly different from those associated with nuclear weapons and with civilian applications of nuclear technology. In this context, Miller considers whether and how the four central “pillars” of the nuclear order—“deterrence, damage limitation, arms control, and nonproliferation”—may be useful or not in managing cyber threats.
Miller’s essay provides a segue to the next three essays in this section, which explore key facets of the defensive challenge. John Arquilla, in “From Pearl Harbor to the ‘Harbor Lights,’ ” leads off the discussion of analogues to defending against cyber attack and conflict. Arquilla illuminates the sometimes surprising difficulties in reducing the vulnerabilities of civilian and defense networks. He recounts how the United States for three months after Pearl Harbor failed to “turn off” the lights in the country’s coastal cities and harbors at night. As a result, German U-boats easily identified the eastern coastline, lurked off open anchorages and undefended harbors, and inflicted enormous casualties and destruction. Once the order to darken the coasts was implemented, along with other defensive measures, the German navy significantly reduced its operations in US waters. Arquilla likens the US failure to dim the harbor lights to the ongoing, inadequate government and private sector policies and actions to make their computers, networks, and data less accessible to attackers, and he suggests ways to redress these liabilities.
One of the growing policy conundrums in cyberspace is whether and how states and legitimate non-state entities should be permitted to defend themselves actively against intrusion and attack. Passive defenses such as encryption, firewalls, authentication mechanisms, and the like do not carry risks of international crisis. But “active” cyber defenses, which in some cases could harm another country, raise serious risks and challenges. Intervention in an adversary’s networks or computers that causes serious economic harm to an innocent entity in another country, or that (unintentionally) impedes another state’s national intelligence collection and defenses, could make the active defender liable to economic and criminal penalties or worse. Dorothy E. Denning and Bradley J. Strawser, professors at the US Naval Postgraduate School, explore the ethical and legal issues arising from active defense by analogizing air defense to active cyber defense. They focus mainly on state-conducted defensive actions while recognizing that such cyber actions by businesses and other legitimate non-state actors, although entirely plausible, pose additional complications.
To set up the analogy, Denning and Strawser describe a range of active defenses deployed against air and missile threats. Among them are fighter aircraft, which the United States and the United Kingdom have deployed since September 11, 2001, to defend against hijacked aircraft; missile defense weapons, such as the Patriot surface-to-air system used in the Gulf War in 1991; other rocket and missile defense weapons, such as Israel’s Iron Dome; and electronic warfare. The authors then summarize some possible forms of active cyber defense and ask several questions about each to assess their ethical implications.
The development of missile-carried nuclear weapons in the 1950s confronted American (and Soviet and UK) authorities with an existential problem—that is, how to preserve political control over these forces when evolving technology and threats narrowed the time to respond to a nuclear attack. President Dwight D. Eisenhower’s response in 1959 was to grant military commanders the authority to use nuclear weapons under carefully prescribed conditions. Peter Feaver, a professor of political science and public policy at Duke University, and Kenneth Geers, an ambassador with the North Atlantic Treaty Organization’s Cooperative Cyber Defence Centre of Excellence and a senior fellow at the Atlantic Council, reflect on how this challenge and the US response to it may be analogous to challenges posed by potential cyber warfare.
Three features of nuclear war motivated the adoption of nuclear pre-delegation: “the speed with which a nuclear attack could occur, the surprise that could be achieved, and the specialized nature of the technology (that meant only certain cadres could receive sufficient training to be battle competent).” While cyber war does not pose the civilization-ending threat that global thermonuclear war does, it may pose similar challenges for the management of cyber weapons (offensive and defensive). Feaver and Geers expertly unpack these challenges and the possible solutions to them.
The final chapter in this section explores a different and necessary way of reducing cyber threats—curtailing the operations of hostile private actors that operate as proxies of states or with state toleration. The analogy here is to naval privateering between the thirteenth and nineteenth centuries. Written by Florian Egloff of the Cyber Studies Programme at the University of Oxford, “Cybersecurity and the Age of Privateering” chronicles how governments commissioned privately owned vessels in wartime to operate against their adversaries’ trade and in peacetime to attack merchants’ ships in reprisal for harms attributed to a nation and to capture goods of equal value.
Analogies to the cyber domain abound here. Several states recently have used or allowed hackers and criminal organizations to conduct cybercrime and cyber-enabled espionage against adversarial states and economic interests. This practice is analogous to privateering and piracy. Meanwhile, if a state lacks the capacity to defend the cyber domain and obtain redress for harmful cyber activities, then users are largely left to protect themselves. Naturally, private companies, like the earlier naval merchants, are now debating with governments the advisability of issuing letters of marque that would allow companies to counterattack against cyber espionage and theft. Of course, as Egloff discusses, the myriad state and non-state actors and interests at play in the cyber domain, and the pace of technological change, mean that ordering this space will be exceptionally difficult and will take considerable time. He offers a thought-provoking framework for understanding differences and similarities in the naval and cyber domains and how this understanding could inform efforts to secure cyberspace.
Each of these chapters is valuable and instructive in its own right. Together, as we describe in the conclusion, they suggest insights into the challenges that cyber capabilities and operations pose to individual states and the international community. We expect that this work will stimulate readers to think of additional analogies that could augment their understanding of cyber capabilities and operations, as well as policies to manage them in ways that reduce conflict and enhance international well-being. It would be especially welcome if scholars, journalists, and officials from non-Western countries were to elucidate analogies from their own technological and historical experiences to the cyber era, for among the unprecedented features of cyber technology are the relative ease and affordability of its global dissemination. To realize its benefits, and to minimize the technology’s destructive potential, the widest possible range of societies and states must learn to steward it wisely. The authors here seek to contribute to this outcome and encourage others to do the same.
Notes
1 Emily O. Goldman and John Arquilla, eds., Cyber Analogies (Monterey, CA: Naval Postgraduate School, 2014), 5.
2 Richard E. Neustadt and Ernest R. May, Thinking in Time: The Uses of History for Decision-Makers (New York: Free Press, 1986).
3 Stanley E. Spangler, Force and Accommodation in World Politics (Maxwell Air Force Base, AL: Air University Press, 1991), 52.
4 Ibid., 62.
5 See, for example, Joseph Nye, “Nuclear Lessons for Cyber Security?,” Strategic Studies Quarterly 5, no. 4 (2011); Andrew Krepinevich, Cyber Warfare: A “Nuclear Option”? (Washington, DC: Center for Strategic and Budgetary Assessments, 2012), http://csbaonline.org/research/publications/cyber_warfare_a_nuclear_option; and Richard Clarke and Robert Knake, Cyber War: The Next Threat to National Security and What to Do about It (New York: HarperCollins, 2012).
6 Ashton B. Carter, “DOD Executive Agent for Non-Lethal Weapons (NLW), and NLW Policy,” Number 3000.03E (Washington, DC: Department of Defense, April 25, 2013), 12, http://www.dtic.mil/whs/directives/corres/pdf/300003p.pdf.
7 Elisabeth Bumiller and Thom Shanker, “Panetta Warns of Dire Threat of Cyberattacks on U.S.,” New York Times, October 11, 2012, http://www.nytimes.com/2012/10/12/world/panetta-warns-of-dire-threat-of-cyberattack.html.
Intelligence in Cyber—and Cyber in Intelligence
Cyber technologies and techniques in some respects originated in the intelligence profession. Examining cyberspace operations in the light of the history and practice of intelligence helps illuminate both topics.1 Intelligence activities and cyberspace operations can look quite similar; what we call cyber is intelligence in an important sense. The resemblances between the two fields are not coincidental. Understanding them opens new possibilities for exploring the applicability of intelligence concepts to a growing understanding of cyberspace.
To appreciate the evolutionary connections between these fields, it is necessary to define the multiple functions that intelligence performs. Intelligence guides decisions by providing insight to leaders and commanders, of course, but its definition is broader still. The field has always included espionage and counterespionage, and today it includes technical collection as well. Such clandestine activities are but a short step from covert operations, which fall under the ambit of intelligence organizations in many states. Finally, intelligence, with its partner activities of surveillance and reconnaissance, has become a key component of today’s real-time, networked warfare. This chapter explores these functions of intelligence and how cyber capabilities resemble or differ from the capabilities that earlier technologies provided, as well as how cyberspace capabilities and operations pose new policy dilemmas. It does so from a US perspective, but the phenomena and issues discussed here are probably pertinent to other countries too.
Spy versus Spy
Intelligence has evolved over the last century, giving rise to two overlapping but not congruent definitions of the field. US military doctrine views intelligence as information that a commander finds vital in making a decision, plus the sources, methods, and processes used to produce that information. Not all information is intelligence, of course. Only information on the adversary and the conditions under which the commander’s force might have to fight is considered intelligence.2 One should note, however, that this concept of intelligence is relatively new. Indeed, it was formally stated in such terms only in the 1920s.3 Spying, however, dates to the dawn of history; ancient texts from around the world mention spies and their exploits on behalf of rulers and commanders. The emergence of modern intelligence from classic spy craft resembles a millennia-wide “before” and “after” picture of the subject.
The Chinese sage whom we call Sun Tzu composed one of the earliest reflections on intelligence sometime around 300 BC. His classic The Art of War was hardly the first written reflection on this topic, although earlier authors (as far as we know) did not match Sun Tzu’s insight and brevity in his thirteenth and final chapter, “On the Use of Spies.” He described a lonely and deadly craft that occasionally became very important. A spy, in Sun Tzu’s telling, might collect secrets, spread disinformation or bad counsel in the enemy’s camp, or even assassinate enemy officials. The spy thus combined a range of activities far broader than merely passing information to a commander. A spy could potentially become a fulcrum of history, providing information or taking direct action to ensure the downfall of a dynasty and a shifting of the mandate of heaven.
Such considerations have relevance today, even for those who no longer see a change in regimes as cosmically important. Spy craft did not evolve much in the two millennia between Sun Tzu’s day and the Industrial Revolution, so we can take his ideas as fairly representative of the field up until roughly the age of Napoleon Bonaparte. Indeed, while campaigning, Napoleon ran his spy network from his tent, filing agents’ reports in pigeonholes in his camp desk. Even with the spread of intelligence collection by remote and then automated means in the twentieth century, individual spies retained importance for intelligence consumers and systems. Well-placed insiders could and did nullify expensive suites of technical collection assets during the Cold War, and more recently “insider threats” (even if not spies per se) precipitated media leaks that have significantly complicated international relations.
Spies have been eclipsed by technical collection, of course, but security and counterintelligence offices continue to focus significant resources on finding (and deterring) enemy agents. Leaders and their advisers intuit the danger that any human penetration poses to technological advantages, military operations, and diplomatic ties. The mere possibility of a spy can disrupt an intelligence bureau or even an alliance; the genuine article can do grave harm and cause effects that reverberate for years. Entire disciplines of the security field (background checks, compartmentation, and so on) grew up around the imperative to minimize and mitigate the damage that spies could inflict. Counterintelligence, of course, emerged precisely to guard against spies in a more active manner. The most effective counterintelligence operations (like Britain’s Double-Cross system in World War II) managed to take control of not only enemy spies but the perceptions of their spymasters as well. They fooled the latter into believing their espionage network was still collecting valuable secrets, which naturally turned out to be misleading “chicken feed.”4
Cyberspace operations have obvious parallels to traditional human espionage. An implant, for example, can sit in a computer for weeks, months, or years, collecting secrets great and small. The finding of such an implant, like catching a spy, evokes mingled satisfaction and fear. Not finding one, moreover, might not inspire confidence. It could mean there was no intruder to catch. Alternatively, it might mean that one looked in the wrong place.
In strategic terms, catching a spy or finding an implant is not exactly a casus belli, although running a spy (or placing an implant) is obviously a provocation. States have tacitly established protocols for handling espionage flaps. Typically the actual spy stands trial, while his or her foreign case officers are declared personae non gratae and expelled. Foreign intelligence officers (like Russia’s Anna Chapman, whom the Federal Bureau of Investigation [FBI] caught in 2010) are jailed in a glare of publicity. Soon, however, when the media’s attention has wandered elsewhere, the spies are quietly exchanged for individuals in their homeland’s prisons. We have not developed such protocols for handing disconnected computer implants back to their originators, but one suspects that similar understandings around cyber espionage will emerge over time.
How much cyber espionage is there? That depends on how broadly we define espionage as the acquisition of data in ways unbeknownst to its “owner.” At the risk of stating the obvious, entire sectors of the world economy now rest on the ability of corporations to aggregate and sell information about the online habits of consumers. Few computer users worry about such aggregation. They implicitly permit much (though by no means all) by pressing “Accept” after scrolling through the fine print in lengthy end-user agreements. This chapter must leave such matters to abler minds, though certainly a fair amount of illegal or at least unethical mischief is directed against the software sold to consumers to facilitate the harvesting and sale of their data.5 Going from such mischief to active cyber spying on unsuspecting people is a short step. Today anyone with a network connection can be a victim of espionage mounted from nearly anywhere. A cottage industry has grown up around efforts to find and expose such cyber espionage schemes. Judging from the instances uncovered so far, anyone possessing modest resources and sufficient motivation can readily download highly intrusive, capable, stealthy suites of surveillance tools.6 The publicly available evidence—not to mention the complaints by many governments and the myriad allegations based on leaked documents—should lead any fair-minded observer to conclude that many examples of cyber espionage were perpetrated by state actors.
The counterintelligence parallel with cyberspace operations seems to be developing another analogous aspect as well. The most ruthless counterintelligence services since at least the czars’ Okhrana have planted agent provocateurs among groups they deemed to be subversive. Their role was not only to report from within but to incite rash or premature action that would expose and discredit the groups. A whole literary subgenre explored the dramatic possibilities such plots entailed; think of Joseph Conrad’s The Secret Agent (1907) or G. K. Chesterton’s The Man Who Was Thursday (1908). Such agents were not just the stuff of fiction—Vladimir Lenin devoted his landmark essay “What Is to Be Done?” (1902) to countering them—and they spread fear and distrust among revolutionaries across Europe before World War I.
Attentive watchers of the cyber news will see an echo of these operations. Security services like the FBI seem to be learning how to persuade cyber criminals to switch allegiance while maintaining contact with their online cohorts (on secretly monitored connections, of course). Once the authorities identify the network and record enough evidence against its members to warrant prosecution, the nations involved in the investigation mount simultaneous raids—sometimes across multiple continents—to round up the suspects.7 Court filings soon expose the mole in the network, of course, but by then the person has been whisked to safety and perhaps is even living under a new, state-provided identity.8 The hacker world today is turning paranoid, worried that many of the anonymous contacts in the dark web have switched sides and started providing evidence. This spreading distrust represents a direct application of counterintelligence tradecraft to cyberspace.9
In sum, espionage and counterespionage operations made the jump from the proverbial dark alleys to cyberspace virtually intact. What is new is old. How readily both of these ancient crafts adapted their techniques to the new cyber domain is astonishing. The main difference between their traditional operations and their cyber counterparts is the scale that can be exploited in the latter.
Common Roots
The history of intelligence provides still another template for understanding cyber operations. Intelligence connected itself to communications technology in the early twentieth century, with profound implications for itself and for diplomacy, security, and privacy. The modern era of communications began with the improvement of the telegraph, allowing quantities of messages and data to be transferred across global distances in near-real time. Wireless telegraphy and then radio broadcasting accelerated this trend, creating mass audiences and markets, as well as new military requirements for not only the equipment to transmit and receive such communications but also the cryptographic support to secure them and the messages they relayed. Intelligence, of course, grew in parallel with what Stephen Biddle terms the “new system” of military operations, in which real-time communications allowed generals to synchronize combined-arms actions involving infantry and artillery, and soon armor, aircraft, and ultimately guided weapons as well.10 This revolution in military affairs began with the battlefield use of radio in World War I and accelerated across the remainder of the twentieth century. Over the last generation, modern militaries have become dependent on sensors, networks, bandwidth, and surveillance. This dependence is encapsulated in the ubiquity (at least in military affairs) of the term “C4ISR,” meaning command, control, communications, computers, intelligence, surveillance, and reconnaissance.
The parallel growth of advanced, technologically enabled intelligence alongside the new system was not coincidental; rather, it was (and is) organic. These two trends share a common root in the widespread impulse across the industrialized powers to gain real-time control of military forces at a distance while monitoring and frustrating adversaries who seek to control their own assets and forces. This sea change took place quite suddenly and dramatically during World War I, in which vast armies, navies, and soon air forces had to communicate securely in real time or lose to adversaries who did. To cite but two examples, the Russian disaster at Tannenberg in August 1914 showed the occasionally strategic consequences of lapses in communications security, while the Royal Navy’s exploitation of German naval systems demonstrated what operational possibilities could be opened by a sustained cryptologic campaign against poor security practices and vulnerable technology.11 The shift to technical collection and analysis of machine-generated data revolutionized the intelligence business, transforming it seemingly overnight from an ancient craft into an industrial enterprise.
Every Western military sought to learn communications security lessons from World War I. Modern codes and encryption had arisen with the printing press in the Renaissance, but they took off anew with the telegraph revolution in the nineteenth century and especially with the wireless in the twentieth century. The difference between private cryptography and governmental and military systems, of course, was the sensitivity of the information they carried and hence the length of time (in hours, days, months, or decades) that the information’s owner would want eavesdroppers to have to devote to decrypting the intercepted messages. Despite the higher stakes for official uses, however, the quality of cryptographic support to both private and government messages for centuries remained roughly equivalent—in other words, not very good. That began to change with governments’ quests for reliable enciphering machines for tactical communications, such as the German-made Enigma, which was marketed to commercial firms but was soon adopted and improved by the German military in the late 1920s. These machines had become widespread by World War II, at least among the major combatants in that conflict. By 1939 the use of coded communications had also prompted several states (most notably Britain) to mount concerted efforts to divine the secrets of those enciphering machines and the codes they protected. Enlisting their American allies, the British soon applied a new technology to the problem—the digital computer.
The Anglo-American signals intelligence alliance after World War II hastened the evolution of computers and of America’s computer industry in the 1950s. The enduring Anglo-American partnership henceforth kept its team members, particularly the National Security Agency (NSA), up to date with the evolution of computers, their concentration in networks, and the progress of a new field, computer security.
From the beginning, the NSA’s expertise in securing digital communications and networks influenced the concepts for and debates over securing computers and the data they stored and shared.12 This influence quickly became embroiled in debates over encryption, particularly regarding the extent of the US government’s role in fostering high-grade cryptography. For decades the point had been moot, as the best cryptographic solutions were treated as military secrets (which in a sense they were) and their export was banned. With the de facto merging of telecommunications devices and computers by the 1970s, however, a new dilemma arose—that is, how to secure digital data for governmental agencies, banks, and other institutions that shared sensitive communications and files but did not need export-controlled, military-grade ciphers. The initial answer was the Data Encryption Standard (DES), which the National Bureau of Standards proposed in 1975 after its development by IBM and vetting by the NSA. Various observers soon found weaknesses in the DES algorithm, however. Some alleged that the US government had exploited its role in creating DES to leave “backdoors” in the standard that would allow government officials routine (or at least emergency) access to private data.13 For their part, the relevant agencies and even a congressional investigation insisted the government had done no such thing.14 The controversy over DES created a template that has been followed ever since—for instance, in the debates during Bill Clinton’s administration about the proposed “Clipper Chip” in the 1990s and the 2015–16 contretemps between Apple, Inc., and the FBI concerning the data residing on a smartphone used by one of the San Bernardino killers.15 Then as now, various government officials’ insistence on some official method of bypassing encryption standards for urgent national security and law enforcement purposes alarmed those who feared that US intelligence had already compromised the standards.16
This chapter cannot hope to resolve the policy issues over encryption or allay suspicions about the US government’s motives and actions. The author supports strong encryption for everyone and would like all governments to resist the urge to install backdoors in any cryptographic systems. This chapter seeks instead to add perspective by noting that today virtually anyone can routinely use encryption that, historically speaking, is fantastically effective. Nevertheless, governments, hacktivists, and organized criminals have found various ways around that wonderful encryption. Most observers would surely agree that encryption has never been better, yet those observers might nonetheless concede that never have so many users lost exclusive control of so much of their data.17
The burgeoning computer security field has an additional connection to intelligence that has been largely overlooked. In certain ways the concepts of computer security grew directly from the painful education in counterintelligence and security practices that US intelligence agencies gained during and after World War II. There was nothing like operating behind the Iron Curtain for making an organization interested in end-to-end security measures. This is precisely why the Central Intelligence Agency (CIA) established a comprehensive “automated data processing” (ADP) security regime that congressional investigators publicly praised forty years ago! Committee staffers surveying federal computer security in 1976 applauded the CIA for its thorough approach, which worked “on the assumption that not only is there potential for compromise in any ADP system[,] it is likely that an attempt will be made to effect that compromise.” Though agency officials declined to offer their computer security regime as a template, the committee’s study nevertheless suggested “trying to apply certain ADP security techniques which had evolved at CIA to other Federal programs where the issue may not be national security but at stake were considerations of nearly equal consequence, such as individual privacy data and . . . financial transactions leading to disbursements of large amounts of public funds.”18
The spread of computers had heralded something novel for both communications and intelligence. Hitherto the devices that secured and transmitted information did not also store it. Computers did, at least as soon as they were given built-in memory. Thus, the level of care once taken to transmit messages securely must now extend over the entire life cycle of that data and even to the machines that touch that data. Not only is your current data vulnerable to spies and eavesdroppers, it is now at risk forever in cyberspace. This raised the security bar tremendously for average users as well as for governments. Consequently, since 2009 the NSA has used its public website to urge “customers” to make prudent preparations now for the day when their encrypted data will be vulnerable to attack by quantum computers.19 Permanency of data not only has broadened the practice of intelligence (as hinted above) but also has drawn a line of demarcation between some traditional, passive forms of intelligence collection and the new digital methods.
Everything Goes Digital
The early development of radio suggests yet another aspect of the analogy between intelligence and cyberspace. Certain security and policy issues relating to computers and networks strongly resemble those associated with radio as that earlier medium evolved and spread in the first decades of the twentieth century. Indeed, many of the terms we routinely use to describe the workings of cyberspace—“network,” “bandwidth,” “wireless,” and others—came from radio terminology. As noted, both radio broadcasts and computer data can be intercepted in midstream and analyzed in various ways to deduce information on one’s opponents, even if one cannot read the content of the intercepted messages. Both radio and computer communications therefore must be used with care so as not to disclose too much information to opponents.
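To make concrete the claim that intercepts can be informative even when unreadable, consider a minimal sketch of traffic analysis. The message log below is wholly invented, and the two-times-baseline threshold is an arbitrary assumption; real analysts use far subtler statistics.

```python
# Toy traffic analysis: the eavesdropper sees only when each encrypted
# message was sent and how long it was, never the plaintext. The log
# below is invented for illustration.
intercepts = [  # (hour, message_length_in_bytes)
    (0, 120), (3, 110), (6, 115), (9, 130),      # routine traffic
    (12, 480), (12, 510), (13, 495), (13, 520),  # burst of long messages
]

baseline = [length for _, length in intercepts[:4]]
average = sum(baseline) / len(baseline)

for hour, length in intercepts:
    flag = "  <-- anomalous" if length > 2 * average else ""
    print(f"hour {hour:2d}: {length} bytes{flag}")
# Without reading a word, the analyst infers that something changed
# around hour 12 and can cue other collection accordingly.
```

The same logic applies whether the medium is a radio net or a computer network: patterns of volume and timing betray activity that encryption alone cannot hide.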
Furthermore, the kinship between intelligence and deception exactly parallels the relationship between radio (and computer) operations and the field of electronic warfare (EW). Radio was weaponized in World War II, and EW has been a standard feature of modern conflict ever since. An opponent’s employment of both radio and computer networks can be denied by jamming or flooding of one form or another. And, of course, those who intercept radio transmissions or computer data can be actively deceived by a clever originator. These intelligence dimensions of the new cyber realm (i.e., the principles of attack, defense, and exploitation) are readily apparent; indeed, they guided the US Department of Defense’s thinking on information warfare for the first couple of decades of this doctrine’s existence. EW is thus one of the taproots of cyberspace operations, at least in the United States, as military thinking about command, control, and communications countermeasures in the early 1980s led directly to the earliest policy pronouncements on information warfare in 1992.20
Historians should not forget a related point: substantial impetus for the computer industry’s maturation derived from the US military’s drive to make weapons smarter and to move data to and from the battlefield. This vast field again falls beyond the scope of this exploration of the intelligence analogy, but it is important to note certain additional links between the evolution of computers and the realm of intelligence support to battlefield commanders. Smart weapons emerged in the early 1970s, motivated in part by the Pentagon’s desire to increase the precision of bombs in Vietnam (thereby reducing the danger to aircrews and minimizing morally and politically harmful collateral damage to civilians). The weapons were “smart” not only because they could be guided to their targets but also because they depended on intelligence about those targets (e.g., precise locations) and on copious and timely data flows to increase their accuracy and lethality. Such data flows eventually demanded quantum leaps in bandwidth, processing power, and networking architecture. The US military thus helped drive improvements to digital communications to increase their resilience and volume, and in the 1970s it began setting standards for the security of these burgeoning systems and the data they carried. All these developments spurred research in the computer industry and provided growing markets for innovations that initially had no obvious consumer demand.
Government links with industry had a direct, strategic focus as well. That nexus brought the intelligence and computer sectors together at the dawn of cyberspace. As historian Jonathan Winkler has shown, the US government has jealously guarded a national interest in the progress of international telecommunications, beginning in World War I and continuing unabated to our day.21 Among many examples, Ronald Reagan’s administration in 1984 took note of the de facto blending of the computer and telecommunications fields and found this trend had significant implications for US security. President Reagan accordingly issued a top-secret directive giving the NSA responsibility for setting standards to protect sensitive but unclassified data in all US federal government computers. Though Congress soon overturned Reagan’s measure, the mere fact that a president had ordered such a step demonstrated the growing overlap between the intelligence and computer security worlds.22 Washington has quietly secured the strategic high ground in the nation’s communications sector, using intelligence both to guard and to exploit that advantage. The importance of that access for the nation’s intelligence function needs no reiteration here.
Cyber operations grew out of and still resemble EW, as noted earlier, with one key difference. Traditional EW aimed to guide, target, or protect weapons systems but remained an activity extrinsic to those weapons as such. Cyberspace, in contrast, includes many of those weapons. The “Internet of things” arrived early in modern military arsenals. Their interconnectivity not only makes them smart but also potentially leaves them vulnerable, as an adversary could theoretically find ways to make those systems hinder rather than help operations. Here is another way in which cyberspace operations both learn from and affect intelligence activities.
What Is New in Cyberspace?
So far the parallels described between intelligence activities and cyberspace operations are not merely hypothetical but are already working themselves out in practice around the world. Other parallels can be envisioned as well, at least to the point of listing the warning signs that might accompany their emergence in the foreseeable future.
The most obvious and oft-discussed association between intelligence activities and cyberspace operations is the confusion they can cause among those on their receiving end. Human espionage can look quite like subversion or worse, as authors such as Sun Tzu and the Indian sage Kautilya noted thousands of years ago. They urged commanders and princes to have their spies assassinate rival chiefs.23 Active intelligence operations, like cyberspace collection campaigns, are by definition quiet but potentially provocative. They can appear similar to preparations for war, and from time to time they have increased tensions between states. But has anyone gone to war over an intelligence operation that was exposed or blew up in a crisis?
Here the parallel with intelligence can be informative. People have gone to war on the basis of bad intelligence that was either misconceived or spoofed by the adversary (see the Iraq War in 2003). Wars have started over assassinations, to be sure, but an assassination is by definition a successful operation designed to provoke hostilities and is not the inadvertent cause of them. Outside of these unrepresentative examples, the list gets thin. As noted previously, states by and large do not fight over blown technical collection activities. History yields many examples of states catching spies or finding wiretaps, telephone bugs, and so on, without those states declaring war in response. The net result of blown intelligence activities is typically the loss (or turning) of the source, sometimes with a well-publicized protest, an expulsion of diplomats, or an execution or two. Even military reconnaissance is not usually dangerously provocative, as a single aircraft or patrol boat can hardly be mistaken for an invasion force. Overflights of the Soviet Union in the 1950s did not provoke a strategic military response by Moscow (apart from the Soviets’ downing of reconnaissance aircraft such as Francis Gary Powers’s U-2 in 1960). Similarly, aggressive US overflights of Cuba during the missile crisis (1962) agitated local air defenses and concentrated minds in Moscow and Havana, but they did not prompt Soviet strikes on the United States.
Cyberspace operations gone awry, like intelligence revelations, so far have not provoked wars. The net effect in cyberspace is typically the quiet purging of an implant, the updating of an operating system, or the closing of a port, combined with perhaps a diplomatic complaint, possibly via the press. The reason for this lack of panic and escalation might have been explained (in another context) by the Atlantic Council’s Jason Healey. As he notes, cyberspace operations rarely if ever proceed in isolation. That two states are at odds over some issue certainly assists in the attribution of contemporaneous cyber attacks to one or both of them.24 Although Healey does not explicitly flip this coin, his argument also hints that policymakers virtually always know some context behind the events that places noncrisis cyber developments in perspective, usually by showing that the state allegedly perpetrating the cyber transgression is not currently deploying for war.
Can cyber operations cause instability and even escalate a crisis? Of course, they might, if perhaps only because no one can definitively prove that they will not. What we can say is that no cyberspace operation to date has made a crisis spiral into war. Indeed, the United States has experienced more than its share of cyber penetrations and cyber attacks, yet it has never come close to initiating hostilities over a cyber incident. As far as we know, no one else has either. Some observers might cite Solar Sunrise, the Department of Defense’s name for a 1998 cyber intrusion that originally looked as if Iraq had penetrated US military networks (and which turned out to be the work of an Israeli hacker and two American teenagers). Solar Sunrise did indeed unfold amid a diplomatic crisis with Iraq, leading American observers to suspect Iraqi complicity, yet it also happened at a time when Defense Department defenses and cyber decision-making were still nascent. The diplomatic net result of Solar Sunrise was nothing. Calmer heads prevailed, and the United States did not strike Iraq over the misattributed intrusion. What Solar Sunrise proves about crisis instability and escalation is anyone’s guess. Nevertheless, every year since 1998, cyber attacks have been misattributed, but so far such mistakes have not caused any wars. One wonders how many years it takes to notice a pattern here.
One note of caution while listing the parallels between intelligence activities and cyberspace operations is that the intelligence-cyber analogy helps to illuminate cyberspace operations but not cyberspace as a war-fighting domain. The analogy also seems stretched when one ranks the relative scales of intelligence activities and cyberspace operations; the former tend to be minute, and the latter look comparatively vast. Other analogies in this volume can help explain such aspects of cyberspace and the events that happen there. Let us then close with the observation that the hitherto tight parallelism between intelligence activities and cyberspace operations could well witness a divergence of potentially strategic consequence. One sees such signs in the lingering reputational damage to the United States and American firms caused by the media’s revelations over the last few years. It is difficult to measure the effects, which are primarily commercial and consist of missed opportunities as much as actual expenses. Much anecdotal evidence points to forfeited sales for American products, and Washington has certainly (for the time being) lost control over the global narrative regarding Internet security and privacy. This development adds a new element rarely if ever seen in traditional espionage cases, and we would be wise to remain sensitive to how it unfolds.
Conclusion
We hardly need an analogy to compare cyberspace operations with intelligence activities, as one exaggerates only a little to say they are mostly the same thing. A biologist might say much the same about dinosaurs and birds, for the latter developed from the former with no evolutionary “seam” to distinguish the two types of animals (indeed, both are members of the clade Dinosauria). We also know from Sun Tzu that intelligence is concomitant with force; intelligence guides and sharpens force, making it more secret, subtle, and sometimes more effective. Further, force follows people and wealth; thus, wherever they are, aggressors will try to use force to control those people and to take the wealth.
Cyberspace operations can and do work along the same lines, for the same purposes, and for the same leaders. The steadily growing scale of intelligence activities expanded dramatically with the global diffusion of cyberspace, allowing formerly state-monopolized means and capabilities to be used by almost anyone with an Internet connection. That same diffusion of intelligence tools in cyberspace also made virtually everyone a potential collector of intelligence or a potential intelligence target. The lines between spying and attacking have always been blurry in intelligence activities as well as in cyberspace operations. Both are inherently fragile and provocative. While neither is necessarily dangerously destabilizing in international relations, we must learn to perform cyberspace operations as we learned to perform intelligence activities—that is, with professional skill, with strict compliance with the law, and with careful oversight and accountability.
Notes
Michael Warner serves as the command historian for US Cyber Command. The opinions in this chapter are his own and do not necessarily reflect official positions of the command, the Department of Defense, or any US government entity.
1 The Joint Chiefs of Staff define cyberspace operations as “the employment of cyberspace capabilities where the primary purpose is to achieve objectives in or through cyberspace.” Joint Chiefs of Staff, Joint Publication 3–12 (R), Cyberspace Operations (February 5, 2013), v, http://www.dtic.mil/doctrine/new_pubs/jp3_12R.pdf.
2 See Joint Chiefs of Staff, Joint Publication JP 1–02, Department of Defense Dictionary of Military and Associated Terms (November 8, 2010 [as amended through October 15, 2015], http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf), which defines intelligence as “the product resulting from the collection, processing, integration, evaluation, analysis, and interpretation of available information concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations” (and the product of this activity and the organization performing it).
3 Michael Warner, “Intelligence as Risk Shifting,” in Intelligence Theory: Key Questions and Debates, ed. Peter R. Gill, Stephen Marrin, and Mark Phythian (London: Routledge, 2008), 26–29.
4 J. C. Masterman broke this story in The Double-Cross System in the War of 1939 to 1945 (New Haven, CT: Yale University Press, 1972).
5 See Josh Chin, “Malware Creeps into Apple Apps,” Wall Street Journal, September 21, 2015.
6 For example, Citizen Lab at the University of Toronto’s Munk School of Global Affairs has done yeoman service tracking spies in cyberspace for nearly a decade. See the nonprofit lab’s assessment of FinFisher’s surveillance software used in more than thirty states: Bill Marczak et al., “Pay No Attention to the Server behind the Proxy: Mapping FinFisher’s Continuing Proliferation” (October 15, 2015), https://citizenlab.org/2015/10/mapping-finfishers-continuing-proliferation/. Several of the larger antivirus and Internet security companies have fielded their own research arms to find and publicize state-based and criminal espionage.
7 A case in point is the FBI’s takedown of the hacktivist group Lulz Security, or LulzSec, by turning one of its leaders in 2011. The hacker in question was Hector Xavier Monsegur, called “Sabu” by other members of LulzSec. The bureau had initially arrested Monsegur in 2011 and made mass arrests of LulzSec members on March 6, 2012. See Mark Mazzetti, “F.B.I. Informant Is Tied to Cyberattacks Abroad,” New York Times, April 24, 2014.
8 See the FBI’s unsealed affidavits here: “LulzSec Indictment Documents,” The Guardian, March 6, 2012, http://www.theguardian.com/technology/interactive/2012/mar/06/lulzsec-indictment-documents-prosecution-complaints.
9 Saul O’Keeffe, “Hacking Underworld Riddled with Secret FBI Informants,” ITProPortal, July 24, 2015, http://www.itproportal.com/2015/24/07/hacking-underworld-riddled-secret-FBI-informants/.
10 Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton: Princeton University Press, 2004), 28.
11 At Tannenberg an outnumbered German army defeated two Russian armies in detail after overhearing their plans broadcast en clair. The Russians apparently lacked compatible codebooks. The Royal Navy turned the tables on German ships and naval aircraft by geo-locating their transmissions and monitoring their stereotyped messages, which upon analysis revealed patterns that clearly indicated upcoming operations.
12 I treat this in more detail in “Notes on the Evolution of Computer Security Policy in the US Government, 1965–2003,” IEEE Annals of the History of Computing 37, no. 2 (April–June 2015).
13 Gina Bari Kolata, “Computer Encryption and the National Security Agency Connection,” Science 197 (July 29, 1977): 438.
14 US Senate, Select Committee on Intelligence, “Involvement of NSA in the Development of the Data Encryption Standard,” 95th Cong., 2d sess., April 1978.
15 The San Bernardino, California, attack occurred in December 2015, when two individuals, Syed Rizwan Farook and Tashfeen Malik, killed fourteen civilians and injured twenty-two others in a shooting at the Inland Regional Center. The two assailants were killed in a gunfight with police. After police recovered Farook’s cell phone, the FBI asked Apple to unlock the device, as the bureau believed that information related to the attack was on the phone. This request launched a nationwide debate regarding whether Apple should unlock the device. The dispute ended when the FBI purchased a vulnerability to access the device for more than $1 million. For more information, see Adam Nagourney, Ian Lovett, and Richard Pérez-Peña, “San Bernardino Shooting Kills at Least 14; Two Suspects Are Dead,” New York Times, December 2, 2015, http://www.nytimes.com/2015/12/03/us/san-bernardino-shooting.html; and “FBI Paid More than $1M for San Bernardino iPhone ‘Hack,’ ” CBS News, April 21, 2016, http://www.cbsnews.com/news/fbi-paid-more-than-1-million-for-san-bernardino-iphone-hack-james-comey/.
16 Witness, for example, recent allegations over the dual elliptic curve deterministic random bit generator, or Dual_EC_DRBG, as well as FBI director James Comey’s public warnings that his bureau is “going dark” because it cannot unlock the encryption in perpetrators’ smartphones. The NSA insists it uses publicly available encryption suites for its own data. “NSA relies on the encryption and standards we advocate for and advocate for the encryption standards that we use,” Anne Neuberger, then director of the agency’s Commercial Solutions Center, told a radio audience in 2013. “[W]hat we recommend for inclusion in those cryptographic standards, we use ourselves in protecting classified and unclassified national security systems.” See “Threat Information Sharing Builds Better Cyber Standards, Expert Says,” Federal News Radio Custom Media, October 3, 2013, 5:05 p.m., http://federalnewsradio.com/technology/2013/10/threat-information-sharing-builds-better-cyber-standards-expert-says/.
17 “Purdue’s Gene Spafford was correct, but early, when he likened network security in the absence of host security to hiring an armored car to deliver gold bars from a person living in a cardboard box to someone sleeping on a park bench.” See Daniel E. Geer Jr., “Cybersecurity and National Policy,” Harvard National Security Journal 1 (April 7, 2010).
18 US Senate, Committee on Government Operations, “Staff Study of Computer Security in Federal Programs,” 95th Cong., 1st sess., February 1977, 135–37, http://babel.hathitrust.org/cgi/pt?id=mdp.39015077942954;page=root;view=image;size=100;seq=3.
19 The NSA “will initiate a transition to quantum resistant algorithms in the not too distant future. Based on experience in deploying Suite B [encryption algorithms], we have determined to start planning and communicating early about the upcoming transition to quantum resistant algorithms. Our ultimate goal is to provide cost effective security against a potential quantum computer.” See NSA, “Cryptography Today,” January 15, 2009, https://www.iad.gov/iad/programs/iad-initiatives/cnsa-suite.cfm.
20 For more on this, see my recent article, “Notes on Military Doctrine for Cyberspace Operations in the United States, 1992–2014,” Cyber Defense Review (Army Cyber Institute), August 27, 2015, http://www.cyberdefensereview.org/2015/08/27/notes-on-military-doctrine-for-cyberspace/.
21 Jonathan Reed Winkler, Nexus: Strategic Communications and American Security in World War I (Cambridge, MA: Harvard University Press, 2008).
22 Warner, “Notes on the Evolution,” 10–12.
23 Sun Tzu, The Art of War, trans. Samuel B. Griffith (New York: Oxford University Press, 1971 [1963]), ch. 13. See also books 1, 2, and 13 of Kautilya’s The Arthashastra, trans. L. N. Rangarajan (New Delhi: Penguin Books India, 1992).
24 Jason Healey, ed., A Fierce Domain: Conflict in Cyberspace, 1986 to 2012 (Washington, DC: Cyber Conflict Studies Association, 2013), 265–72.
From Understanding Cyber Conflict: 14 Analogies, edited by George Perkovich and Ariel E. Levite and published by Georgetown University Press. For additional information about the book: http://press.georgetown.edu/book/georgetown/understanding-cyber-conflict.
Nonlethal Weapons and Cyber Capabilities
Scholars have considered many analogies for cyber capabilities, grappling with how these capabilities may shape the future of conflict.1 One recurring theme in this literature is the comparison of cyber capabilities to powerful, strategic capabilities with the potential to cause significant death and destruction.2 This theme is understandable. Reports of malware that can penetrate air-gapped networks and cause physical effects can easily stimulate worst-case thinking. Moreover, relative silence from senior government leaders about cyber capabilities can fuel speculation that nations are amassing devastating arsenals of malware.3 Increasing connectivity from consumer products to critical infrastructure control systems creates the prospect of widespread vulnerability across societies.4 Analogies to different methods of state-to-state coercion are therefore quite common.
However, no one has ever been killed by a cyber capability. With this in mind, perhaps another set of analogies for cyber capabilities—not destructive, strategic capabilities but those that are nonlethal—should be considered. The US Department of Defense for decades has developed a range of nonlethal weapons for its forces, yet to our knowledge, scant academic work to date has considered how nonlethal weapons might provide some additional conceptual insight into cyber capabilities.
In this chapter, we examine nonlethal weapons and cyber capabilities and suggest that for conceptual purposes it may be useful to analogize between them across four areas: their ability to incapacitate, the reduced collateral damage they inflict, the reversibility of their effects, and their ability to deter. In so doing, we show the usefulness and the limits of analogizing cyber capabilities to nonlethal weapons. Ultimately, we conclude that these four areas of convergence between nonlethal weapons and cyber capabilities make for a novel conceptual analogy that would serve policymakers well as they consider future employment of cyber capabilities.
In our conclusion, however, we highlight one important limitation of this approach: Department of Defense leaders have faced difficulty in gaining approval to use nonlethal capabilities. We briefly explore reasons why nonlethal weapons have so seldom been authorized and offer some observations as to why cyber capabilities may be easier to employ in the future. We base this distinction on the fact that most nonlethal weapons target opposing personnel, whereas most cyber capabilities target opposing matériel.
Before commencing our analysis, we offer one preliminary note about terminology. Already we have noted that we examine cyberspace “capabilities” as opposed to cyber “weapons.” The distinction is not pedantic. When we write of nonlethal “weapons,” the intent of these tools is in clearer focus—to inflict bodily harm or physical damage.5 However, the cyber tools discussed in this chapter are not always weaponized ex ante. Instead, they offer certain capabilities: some that may be used offensively, some in self-defense, and still others for penetration testing. Because code is not inherently weaponized, we use the term “capabilities” to cover the full range of what technologies in cyberspace have to offer.
Characteristics of Nonlethal Weapons
To more fully understand the proposed analogy between nonlethal weapons and cyber capabilities, we must first understand the basics of nonlethal weapons. The Department of Defense defines nonlethal weapons as “weapons, devices, and munitions that are explicitly designed and primarily employed to incapacitate targeted personnel or materiel immediately, while minimizing fatalities, permanent injury to personnel, and undesired damage to property in the target area or environment.”6 Cyber capabilities are excluded from this definition. Nonlethal weapons can provide operating forces with options to de-escalate situations, minimize casualties, and reduce collateral damage. By providing commanders with these additional options, nonlethal capabilities can be of unique value, sometimes proving to be more appropriate than their lethal counterparts.
Nonlethal weapons are often divided into two categories depending on their direct target. First, many nonlethal weapons are identified as serving a “counter-personnel” role because they target the human body itself. A notable example is oleoresin capsicum spray, which is more commonly known as pepper spray. When sprayed at a target, the chemical compounds in the spray act as an irritant to the eyes, causing tears, pain, and temporary blindness. This effect makes it more difficult for the target to engage in combat or other threatening activities.
The second category of nonlethal weapons targets machines, not people. An example of this sort of capability is the so-called spike strip. Derived from the older caltrop—which was used as a counter-personnel, counter-animal, and counter-vehicle weapon—the spike strip comprises upward-facing metal barbs linked together in a long chain. Each barb is sufficient to puncture the tires of many vehicles; when laid across a roadway, the spike strip can slow or stop vehicle movement until the tires have been replaced. Many spike strips are designed to gradually let the air out of affected vehicles’ tires, minimizing the harm done to passengers and reducing the risk of collateral damage.
Across both counter-personnel and counter-machine nonlethal weapons, four characteristics are evident. First, their primary purpose is to incapacitate their targets. Second, they do so with minimal collateral damage, and, third, in a way that is often temporary or reversible. Finally, nonlethal weapons can serve as a limited deterrent in tactical situations. These characteristics are key points of comparison in making the analogy to cyber capabilities.
Operational History of US Nonlethal Weapons
One can trace the origins of nonlethal weapons in warfare to the development of modern chemistry, which began in the eighteenth century. By the mid-nineteenth century, consideration was given to using chemical weapons in the Crimean War and the US Civil War.7 To be sure, chemical weapons would eventually become quite deadly, but initially the intent behind their use was not to kill but to force the enemy to disperse. Militaries apparently did not embrace using chemicals in warfare until World War I, when the German army launched the first chemical weapons attack on April 22, 1915, near Ypres.8 As the United States entered the war, it institutionalized its chemical munitions research and development into a Chemical Warfare Service within the US Army.9 Among the chemical weapons developed during the war, multiple armies used tear gas, which remains a nonlethal weapon in today’s law enforcement and military arsenals.10
At the war’s conclusion, the US Army rapidly demobilized its chemical weapons corps and seemed poised to all but abandon research into this class of weaponry.11 The army’s experts secured employment in civilian jobs, and surplus material was either sold or transferred to other parts of the government.12 Thus concluded the US Army’s initial efforts to explore how gas could be used as a chemical, nonlethal weapon.13 Thereafter, the 1925 Geneva Protocol prohibited the use of chemical weapons in war.14
Even without this protocol, it seems unlikely that tear gas–related chemical agents would have been as effective in World War II, at least in the European theater. The increasing adoption of motorized and mechanized forces neutralized the utility of chemical agents for dispersing forces from fixed positions.15 However, militaries used smoke as a tactical, nonlethal enabler during World War II, often to obscure their own positions rather than to force the enemy to reposition.16 Variants included white phosphorus, smoke pots, oil smoke generators, aircraft-delivered smoke tanks, and even colored smoke munitions for signaling.17
Development of chemical agents continued after World War II. The use of herbicides and other agents during the Vietnam War, while not deemed to violate the 1925 Geneva Protocol, proved to be sufficiently controversial and damaging that President Gerald Ford issued an executive order renouncing the first use of herbicides and riot control agents in war.18
Other technologies emerged that offered militaries options between “don’t shoot” and “shoot to kill.” The United Kingdom used rubber and plastic bullets in Northern Ireland in the 1970s. Indeed, by one account the British military fired 55,834 rubber bullets between 1970 and 1975.19 During Desert Storm, the United States fired cruise missiles filled with carbon fiber that disrupted Iraq’s power stations.20 In March 1991 Secretary of Defense Dick Cheney asked his lieutenants Paul Wolfowitz and Zalmay Khalilzad to lead a Non-Lethal Warfare Study, but it is unclear what, if anything, came of this examination.21
Just how useful nonlethal weapons could be was perhaps most clearly demonstrated during the US Marine Corps’ presence in Somalia in the mid-1990s. Their commander, Lt. Gen. Anthony Zinni, in a 1994 hearing spoke of the virtues of nonlethal weapons. “Non-traditional operations,” he said, “often involve police-like actions that would be best dealt with by non-lethal means. Crowd control, demonstrations, petty theft, acts of urban violence in populated areas, are examples of situations that could best be handled all or in part by non-lethal weapons. . . . These non-lethal means also permit forces to demonstrate resolve or provide a show-of-force without endangering lives.”22
A year later, Zinni’s Marines provided cover when several thousand United Nations (UN) forces withdrew from Somalia. The Marines had trained to use a variety of nonlethal weapons, including pepper spray, flash bangs, and road spikes.23 To control hostile crowds, they were equipped with foam guns and sticky guns, as well as hard sponge projectiles.24 The Marines also warned the local populace that they possessed these nonlethal weapons. Ultimately, the mission to secure the extraction of the UN forces was successful. No Marines were killed.25 Zinni noted afterward, “Our experience in Somalia with non-lethal weapons offered ample testimony to the tremendous flexibility they offer to warriors on the field of battle.”26
Later in the 1990s, the Defense Department attempted to institutionalize research and development for a broader array of nonlethal weapons.27 Yet few capabilities were available to support US forces after they invaded and occupied Iraq in 2003. A 2004 Council on Foreign Relations task force on nonlethal weapons found that these weapons “could have helped to reduce the damage done by widespread looting and sabotage.”28 Its report was one of the last major studies of the US military’s use of nonlethal weapons. There is little evidence that prioritization or resources have changed since then.
With this history of experimentation but not integration in mind, we return to the analysis of how four qualities of nonlethal weapons, especially those that are counter-matériel, make for a conceptually useful analogy to cyber capabilities.
Incapacitation
Nonlethal weapons incapacitate their targets by attacking critical parts of the targeted machine, such as tires on a vehicle, and disabling them. Cyber attacks can work in the same way, attacking critical parts of a computer system and either overwhelming them or disabling them. Information security professionals have long argued that a cyber operation can do harm in one of three ways.29 First, it can target the confidentiality of data in a computer system, stealing sensitive data and perhaps making it public. Second, it can target the integrity of a computer system by inputting malicious commands that adversely (and clandestinely) affect its functionality or by corrupting important data. Third, it can target the availability of a computer system, disabling access to it at a critical time.
An example of the incapacitation function is the cyber operation that accompanied the purported Israeli air strike on Syria in 2007. The cyber operation corrupted the integrity of the Syrian air defenses. While operators of the Syrian air defense system believed their radar was functional and that it presented them with an accurate display of the area, in fact the radar systems did not show the Israeli jets entering Syrian airspace.30
A more common example of incapacitation via cyber operation is known as a denial of service attack, which targets the availability of important computer services by overwhelming them with data. An ocean of incoming data prevents the targeted systems from responding to legitimate requests. Finally, some capabilities achieve an incapacitating effect by targeting both the integrity and the availability of a target. For example, the 2014 attack on the Sands Casino in Las Vegas targeted the integrity of critical computer code and adversely impacted the availability of the overall system. When this critical code was erased or corrupted, the affected computers did not function.31
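The arithmetic behind a denial of service attack is simple enough to sketch. The toy model below assumes a server that can process a fixed number of requests per tick and selects requests at random from its combined queue; the capacity and load figures are invented for illustration, and real attacks and defenses are of course far more elaborate.

```python
# Toy model of a denial of service attack on availability. A server can
# handle CAPACITY requests per tick; when attack traffic swamps the
# queue, legitimate requests are crowded out. All figures are invented.

CAPACITY = 100    # requests the server can process per tick
LEGIT_LOAD = 20   # legitimate requests arriving per tick

def served_fraction(attack_load: int) -> float:
    """Expected fraction of legitimate requests served if the server
    draws CAPACITY requests uniformly from the combined queue."""
    total = LEGIT_LOAD + attack_load
    if total <= CAPACITY:
        return 1.0
    return CAPACITY / total  # each request is equally likely to be served

for attack in (0, 100, 1_000, 10_000):
    print(f"attack load {attack:>6}: "
          f"{served_fraction(attack):.1%} of legitimate requests served")
```

At an attack load of ten thousand requests per tick, only about 1 percent of legitimate requests get through: the system is incapacitated without a single component being damaged.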
By definition, cyber capabilities target machines. As a result, it is more difficult, but not impossible, to imagine a cyber capability that is directly counter-personnel. One possible lethal capability is code that manipulates a vital medical device, such as a pacemaker. Indeed, in 2007 Vice President Cheney had the wireless functionality on his pacemaker disabled out of fear that it could be attacked.32 More broadly, weaknesses in the Internet of things could allow malicious code to incapacitate critical devices at critical times, opening the possibility of targeted attacks with a direct effect on personnel.33 Cyber capabilities may not be lethal now, but if these sorts of attacks become more achievable, they could well be lethal in the future.
Whether an attack has lethal effects or not, electronic systems targeted by cyber capabilities might in some instances be so important to an individual that incapacitating the system could have debilitating counter-personnel effects. For example, targeting cellular phone networks or other communications systems can affect an individual’s ability to coordinate illegal, hostile, or otherwise dangerous behavior. It could also perhaps be argued that targeting confidential systems, such as the theft of data from personnel databases, has an effect on personnel and could be used for blackmail. In this last case, however, the delay between operation and effect is substantially longer than is the case for most nonlethal weapons. Thus, on the matter of incapacitation, the analogy is strongest between counter-matériel nonlethal weapons and cyber capabilities that attack the integrity and availability of targeted systems.
Minimization of Collateral Damage
Similar to nonlethal weapons, some cyber capabilities can be deployed to minimize collateral damage. When it comes to malicious computer code, this sort of minimization can take one or both of two forms—first, preventing the spread of computer code beyond the target and, second, minimizing the harm the code causes to nontarget systems if it does in fact spread.
On the first point, intermediate systems are commonly breached in a cyber operation as stepping-stones to reach the target. This is especially true if direct access to the target is denied. For example, as a means of getting malicious code into a facility that is not connected to the Internet and is thus harder for an attacker to access, the authors of Stuxnet reportedly targeted a number of Iranian contractors who were servicing the country’s nuclear program.34 But such intermediate infections can be difficult to control in cases where the capability’s propagation mechanism, or the code it uses to spread from machine to machine, is automatic. In the Stuxnet case, the code spread beyond the original authors’ intent, reaching other systems and eventually coming to the attention of the information security community.35
Second, authors of malicious code have shown some capability to minimize the harm such code can do, even if it spreads. For example, the authors of Stuxnet, Gauss, and other malicious code placed targeting guidance in the code.36 This targeting guidance prevented the code from launching its most significant and damaging payloads unless the malware arrived at the correct target. While reports indicate these mechanisms were not perfect at preventing all ill effects, they automatically and substantially constrained the damage done by the malicious code once it spread.37 It is worth noting, however, that adding such constraints requires a great deal of information about the particulars of the target system, information that will likely need time and previous operations to collect.38
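The kind of targeting guidance described above is sometimes called environmental keying, and a minimal sketch conveys the idea: the payload stays dormant unless a fingerprint of the host matches a precomputed value. The fingerprint fields, values, and function names below are hypothetical illustrations; they do not reproduce the actual checks used by Stuxnet or Gauss.

```python
import hashlib

# Hypothetical "environmental key": a digest of host attributes that an
# implant checks before releasing its payload. The attribute names and
# target values are invented, not Stuxnet's or Gauss's actual checks.

def host_fingerprint(controller_model: str, process_config: str) -> str:
    return hashlib.sha256(f"{controller_model}|{process_config}".encode()).hexdigest()

# The operator precomputes the digest of the intended target's
# configuration and embeds only the digest in the implant.
EXPECTED_DIGEST = host_fingerprint("PLC-315", "cascade-A26")

def maybe_deploy(controller_model: str, process_config: str) -> str:
    """Release the damaging payload only on a matching host."""
    if host_fingerprint(controller_model, process_config) != EXPECTED_DIGEST:
        return "dormant: host does not match the target fingerprint"
    return "payload released: host matches the target fingerprint"

print(maybe_deploy("PLC-400", "generic-line"))   # nontarget machine stays unharmed
print(maybe_deploy("PLC-315", "cascade-A26"))    # intended target
```

A side benefit of this design, reported in analyses of Gauss, is that keying the payload to the target’s configuration also hides the target’s identity: analysts who capture the code cannot tell what it was built to strike.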
Another important area of overlap between nonlethal weapons and cyber capabilities concerns minimizing collateral damage. Policy guidance offered by the US Defense Department does not prioritize cyber capabilities or nonlethal weapons over potentially more destructive kinetic ones. While cyber capabilities might someday offer a commander the ability to achieve military-relevant effects with only a minimal risk of collateral damage or loss of life, the complexity of computer networks at present greatly limits the confidence a commander can have in achieving precise effects exactly when desired. Battle damage assessment is subject to similar limitations. In some instances, therefore, a commander would reasonably prefer non-cyber capabilities over a vast arsenal of cyber capabilities if the former could give greater odds for the success of an operation.
Based on these examples, given enough effort, time, and operator ability, sophisticated cyber capabilities present some prospects for minimizing collateral damage to systems besides the target. However, it is hard to generalize this point and argue that this central characteristic of nonlethal weapons can be a characteristic of all cyber capabilities. In addition, failures to prevent collateral damage do occur. Especially with capabilities as new and complex as cyber ones, the unintended consequences of particular capabilities may cause additional or unexpected damage. On the matter of collateral damage, then, the analogy is as much aspirational as operational. Some cyber capabilities are narrowly targeted and may be wielded carefully by sophisticated actors, but certainly not all of them are.
Reversibility
The analogy functions similarly when it comes to reversibility, for some cyber capabilities, but not all, are reversible. We identify four categories of reversibility: capabilities that are not reversible; capabilities that are reversible after some reasonably constant period of time, depending on environmental conditions; capabilities that are reversible at the discretion of the operator; and capabilities whose effects the target itself can reverse, given some time, matériel, or effort.
Various nonlethal weapons fall into each of the four categories. In the first category, some kinds of nonlethal munitions do harm to the body that, though not fatal, cannot be undone; such weapons are comparatively rare. A rubber bullet, for example, can cause harm to the body that is not easily undone. In the second category, flash bang grenades and tear gas incapacitate for a time, but their effects eventually dissipate. In the third category, operators can turn electronic jamming, lasers, or sonic capabilities on and off. And in the last category, the spike strips discussed earlier require the target to acquire new tires.
Cyber capabilities exist in three of the four categories. In the first category, some sabotage attacks are difficult to reverse easily, especially if they destroy critical material or data. Stuxnet is an example, though it was substantially more destructive than nonlethal weapons are. We do not know of any cyber capabilities that fall into the second category, which sees effects dissipate over time, depending on environmental conditions.
Other cyber capabilities, such as ransomware, fall into the third category because they paralyze systems until an operator directs otherwise. When ransomware strikes a system, important data is encrypted in such a way that the legitimate user cannot access it until the criminal operating the ransomware decrypts it—usually for a fee. Capabilities that have an intentionally intermittent or time-bound effect would also fall into this third category. Still other cyber capabilities, such as some wiping operations, are best placed in the fourth category, as the target may be able to reverse them programmatically but would require a substantial amount of effort or time to do so. For example, a target might recover data from “wiped” hard drives, depending on how the wiping attack was done, but doing so is beyond the capabilities of most ordinary users.
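The third category’s operator-discretion reversibility can be sketched in a few lines. The example below uses the Fernet construction from the third-party Python cryptography package; the file contents and names are invented. The point is the asymmetry: without the key, reversal means defeating strong encryption, while with it, reversal is a single call.

```python
# Sketch of category-three reversibility: ransomware-style encryption is
# completely reversible, but only at the key holder's discretion. Uses
# the third-party "cryptography" package (pip install cryptography);
# the file contents below are invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret held only by the operator
cipher = Fernet(key)

original = b"contents of quarterly-ledger.xlsx"
locked = cipher.encrypt(original)  # what the victim is left with

# Without the key, recovery means brute-forcing a 128-bit symmetric key,
# which is computationally infeasible. With the key, reversal is trivial:
assert cipher.decrypt(locked) == original
```

Wiping, by contrast, destroys rather than withholds: whatever the target recovers comes from forensic artifacts such as backups or residual copies, not from a secret the attacker can simply release, which is why it belongs in the fourth category.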
It is worth noting that some capabilities exist in both the third and fourth categories. For example, denial of service attacks—which overwhelm a target with meaningless data—can be not only turned off by an operator but also thwarted by the target’s taking certain countermeasures.
From this analysis, we conclude that the qualities of reversibility most often intended when using nonlethal weapons are often similar to those of the kinds of cyber capabilities most frequently employed today. With some rare but important exceptions, such as attacks that destroy physical infrastructure, the damage caused by even some data-destroying cyber capabilities is often reversible in that computers and systems can be repaired with sufficient time and resources. However, as a practical matter, most victims of such attacks may find it more prudent to replace rather than repair their malfunctioning systems. Given that the majority of contemporary compromises of confidentiality, integrity, and availability of data are perpetrated through reversible means (like denial of service), we feel the analogy to nonlethal weapons has value in this area of analysis.
Deterrence
Analogizing nonlethal weapons and cyber capabilities in the area of deterrence is possible but not as straightforward as in the preceding three areas of analysis. Deterrence is an important but reasonably narrow concept when it comes to nonlethal weapons. For cyber capabilities, questions of deterrence are more complex, as applications converge with and diverge from the concept’s use in nonlethal weapons. Much has been written about deterrence of cyber capabilities as well as about using these capabilities for deterrence; thus, we briefly outline the underpinnings of deterrence and examine how the analogy applies.39
When considering deterrence, the initial questions to consider are, whom do we wish to deter from doing what, and what would we like them to do instead? Any discussion about deterrence must be tailored around this “deter whom from doing what” foundation. During the Cold War, the term “nuclear deterrence” was often shorthand for “deterring the Soviet Union from launching a nuclear-armed attack.”40 But this case of deterrence can obscure the fact that other kinds of deterrence exist. While the Cold War case mostly involved deterrence of a specific actor, some deterrents are general and apply to large groups of actors.
Similarly, while nuclear deterrence is absolute—that is, seeking to stop any use of an atomic weapon—other deterrents are restrictive and seek to minimize the effects and occurrence of an unwanted activity as much as possible while acknowledging implicitly that some will occur.41 Deterring crime is an example that is both general and restrictive: police do not always know which individual in society is a would-be criminal, and they also recognize that despite measures to deter its occurrence, some amount of crime is inevitable.
The two traditional methods of deterrence are cost imposition and denial.42 Deterrence by cost imposition operates via a (tacit or explicit) credible threat of retaliation to such a degree that the attacking state would find commencing the unwanted activity prohibitively costly. Deterrence by denial operates by convincing an adversary that even if it does not fear cost imposition, the benefits it seeks will be checked due to effective defenses. Together the two can make certain actions unappealing. Deterrence by denial can reduce the chances of success, while fear of retaliation can make certain actions prohibitively costly.
Nonlethal weapons can function, depending on the capability, as deterrents by denial or by threatening cost imposition. Many counter-matériel and counter-personnel capabilities impose comparatively minimal costs on an adversary but can reduce or deny the adversary’s capability to carry out an unwanted action.
A tactical example from Somalia demonstrates how nonlethal lasers functioned as a means of threatening lethal retaliation, signaling to potential adversaries that they had been identified and would be neutralized if they attacked US forces. That is, a laser beam shined on a target warned that a bullet could follow.
Some cyber capabilities also can work as deterrents by denial or deterrents by cost imposition, depending on the capability. China’s Great Firewall is an example of deterrence by denial. The system, which actively intercepts unwanted Internet activity in Chinese networks and prevents it from connecting to blocked servers, aims not only to prevent but also to deter actions that the Chinese government deems undesirable. It is a scalable and general deterrent across the broader population rather than a narrowly crafted one for a small group of actors. Still, it is restrictive rather than absolute, as the Chinese surely know that some individuals find their way around the firewall.
China’s so-called Great Cannon is an example of deterrence by cost imposition. In 2015 members of the popular code repository and software development site GitHub, to which anyone can upload code or text, began uploading New York Times articles and other content the Chinese viewed as subversive. In response, while leveraging their position of privilege on the Chinese Internet that is made possible by the Great Firewall, Chinese actors launched a massive denial of service attack and took GitHub offline for a time. By imposing costs on GitHub, the Chinese carried out a form of deterrence by cost imposition to GitHub and similar sites, though they ultimately ceased the attack without changing GitHub’s behavior.43
Cyber capabilities, in some circumstances, can send a signal threatening greater non-cyber cost imposition. For example, a nation may reveal a cyber operation to another state as a means of showing that it can access the latter’s strategically important networks. While it is unclear if Stuxnet was intended to have such a psychological effect, apparently the program introduced doubt into the minds of Iranian engineers, and the worm’s revelation potentially impacted later nuclear negotiations.44 In other cases, cyber capabilities—such as the capacity to send a message to anyone entering a certain area—can directly carry a warning. In 2014 protestors in Kiev received text messages of this sort.45
Nonlethal weapons and cyber capabilities are similar in that deployment of some forms of each can enable various kinds of deterrence. But a key difference emerges: nonlethal weapons, because they are more limited in their potential damage, are seldom the objects of deterrence. While hypothetically possible, it seems impractical for one entity to devote resources to deter another’s employment of nonlethal weapons. The stakes are usually just too low. The threat of nonlethal weapons against American troops is not sufficiently serious to warrant either issuing powerful threats to impose costs or creating sufficient defenses to deny an adversary’s benefit.
However, it is somewhat easier to conceive of situations where the United States might wish to deter another entity’s use of nonlethal weapons by implementing denial. For example, if US forces embarked on a stabilization mission where the local population had demonstrated a desire or capability to employ nonlethal weapons, the United States might wish to demonstrate powerful defenses that easily blunt the effectiveness of those weapons.
Cyber capabilities, because they are potentially more destructive or—in the case of data theft—strategically damaging without being destructive, are different in kind. Nonlethal weapon deterrence yields a one-way question: How can nonlethal weapons be useful for deterrence? Cyber deterrence yields a two-way question: How can cyber capabilities be useful for establishing deterrence generally, and how can an adversary’s use of cyber capabilities be deterred but not necessarily with cyber means?
As a result, the analogy between the two is attenuated. When asking how to deter the use of cyber capabilities by others, it is important not to limit oneself to thinking of one’s own cyber capabilities. All elements of national power, including political clout, economic sanctions, kinetic retaliation, and cyber defenses, should be included in the deterrence discussion. Offensive cyber capabilities may be part of this calculus, but many are likely too subtle or too limited to fully act as a deterrent on their own. The analogy to nonlethal weapons here points to the need for a broader discussion of cyber deterrence.
Conclusion
A clear theme runs through this analysis: nonlethal weapons and cyber capabilities overlap in both their characteristics and their functions. These areas of overlap strengthen the case for the proposed analogy and suggest that lessons learned about nonlethal weapons may be usefully applied to cyber capabilities. In short, however new and different cyber capabilities are, we need not approach them with an entirely blank slate. Analogizing to nonlethal weapons can be a valuable approach.
With that said, some cyber capabilities do not fit the analogy particularly closely—for example, those that do not seek to incapacitate a target (they might instead steal data from it), those that do not seek to minimize collateral damage, and those that are irreversibly destructive. For discussions of these kinds of capabilities, nonlethal weapons are less obviously useful.
Another, more practical kind of limitation to this analogy concerns the employment of these weapons and capabilities. For reasons that remain largely elusive to the authors, the use of nonlethal weapons by US military forces has been restricted. Several military officers have informally observed that gaining authorization to employ lethal force was often easier than gaining authorization for nonlethal force, despite the latter’s promise of lower collateral damage and only temporary effects. The question that remains unanswered in our research is, why are nonlethal weapons not better integrated and employed?
Further research into this question may be aided by bringing in the literature on path dependency and the “stickiness” of entrenched traditions—or, in this case, the greater familiarity of employing kinetic, conventional weapons. Additional research may also tell us more about the inflexibility of military targeting procedures, which may have been designed to weigh specific variables in the context of a kinetic action but may be too rigid to fully consider the authorization of nonlethal capabilities. These questions are important to examine, as they may illuminate both the willingness to employ cyber capabilities in the future and the process by which their use will be authorized.
Any answer to this question will depend on the type of nonlethal weapon in question and on the nature of the international legal regime that restricts those weapons. For example, the United States does not use riot-control agents in combat because of its commitments under the Chemical Weapons Convention.46 Nor does the United States employ lasers to blind individuals, in compliance with a 1995 protocol to the 1980 Convention on Certain Conventional Weapons.47
Why US forces have deployed nonlethal weapons so seldom deserves separate and more detailed exploration. For our purposes, it is worth noting that the lack of explicit international law governing the employment of cyber capabilities may allow commanders to deploy possible future tactical capabilities with more freedom than they have with nonlethal weapons. In addition, cyber capabilities have been—at least up until now—counter-matériel capabilities, whereas nonlethal weapons are both counter-matériel and counter-personnel. The focus of cyber capabilities on counter-matériel missions may therefore give leaders less cause to eschew authorizing their tactical employment. This distinction may change, however, as wearable and related technologies create a new attack vector and open the possibility of cyber capabilities becoming counter-personnel capabilities.
Regardless of the reasons for the US military’s limited employment of nonlethal weapons, practitioners would be wise not to take the nonlethal-cyber analogy too far, lest cyber capabilities likewise become an instrument of power that cannot be wielded even when it is the most appropriate tool available.
Indeed, despite very real concerns about a coming conflict in cyberspace, some of the most promising features of cyber capabilities are shared with nonlethal weapons: their effects need not be permanent and can potentially be so narrowly tailored that collateral damage is all but eliminated. As with any other instrument of military power, cyber capabilities should be used only as a last resort. But when military coercion is required to secure US interests, cyber capabilities—like nonlethal weapons—may offer US military commanders the opportunity to apply that coercion in ways that greatly reduce the incidence of death and destruction on all sides of a future conflict.
Notes
The views expressed in this publication are those of the authors and do not necessarily reflect the official policy or position of the Department of Defense or the US government.
1 Joseph Nye, “Nuclear Lessons for Cyber Security?,” Strategic Studies Quarterly 5, no. 4 (Winter 2011); Joseph Nye, “Cyber Power” (Boston: Belfer Center for Science and International Affairs, May 2010), http://belfercenter.ksg.harvard.edu/files/cyber-power.pdf; and Emily Goldman and John Arquilla, eds., Cyber Analogies (Monterey, CA: Naval Postgraduate School, 2014).
2 Michael S. Goodman, “Applying the Historical Lessons of Surprise Attack to the Cyber Domain: The Example of the United Kingdom,” in Goldman and Arquilla, Cyber Analogies; Nye, “Cyber Power”; Joel Brenner, Glass Houses (New York: Penguin, 2014); and Richard A. Clarke and Robert Knake, Cyber War: The Next Threat to National Security and What to Do about It (New York: HarperCollins, 2010).
3 Michael Daniel, “Heartbleed: Understanding When We Disclose Cyber Vulnerabilities,” The White House Blog, April 28, 2014, http://www.whitehouse.gov/blog/2014/04/28/heartbleed-understanding-when-we-disclose-cyber-vulnerabilities; and Richard Clarke et al., “The NSA Report: Liberty and Security in a Changing World,” President’s Review Group on Intelligence and Communications Technologies (Princeton: Princeton University Press, 2013).
4 Rolf Weber, “Internet of Things: New Security and Privacy Challenges,” Computer Law & Security Review 26, no. 1 (2010); and Andy Greenberg, “Hackers Remotely Kill a Jeep on the Highway—with Me in It,” Wired, July 21, 2015, http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/.
5 The Oxford English Dictionary states that a weapon is “a thing designed or used for inflicting bodily harm or physical damage.”
6 Ashton B. Carter, “DOD Executive Agent for Non-Lethal Weapons (NLW), and NLW Policy,” Number 3000.03E (Washington, DC: Department of Defense, April 25, 2013), 12, http://www.dtic.mil/whs/directives/corres/pdf/300003p.pdf.
7 Leo P. Brophy, Wyndham D. Miles, and Rexmond C. Cochrane, The Chemical Warfare Service: From Laboratory to Field, United States Army in World War II (Washington, DC: Center of Military History, US Army, 1988).
8 Ibid., 2.
9 Ibid., 12.
10 Ibid., 70.
11 Ibid., 24.
12 Ibid., 25.
13 To characterize the overall effect of chemical weapons in World War I as nonlethal would be misleading, as the United Nations Office for Disarmament Affairs notes that chemical weapons employed in that conflict eventually killed more than 100,000 individuals. See United Nations Office for Disarmament Affairs, “Chemical Weapons,” https://www.un.org/disarmament/wmd/chemical/.
14 United Nations Office for Disarmament Affairs, “Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare,” 1925, https://www.un.org/disarmament/wmd/bio/1925-geneva-protocol/.
15 Brophy, Miles, and Cochrane, Chemical Warfare Service, 72–73.
16 Smoke is a nonlethal weapon, according to the definition used in this chapter, as it does not directly inflict bodily harm or physical damage. This reference is merely to note the evolution of how chemicals came to be used in World War II. Brophy, Miles, and Cochrane, Chemical Warfare Service, 197.
17 Ibid., 197–225.
18 Gerald Ford, “Executive Order 11850—Renunciation of Certain Uses in War of Chemical Herbicides and Riot Control Agents,” Federal Register, April 8, 1975, http://www.archives.gov/federal-register/codification/executive-order/11850.html.
19 Andrew Sanders and Ian S. Wood, Time of Troubles: Britain’s War in Northern Ireland (Edinburgh: Edinburgh University Press, 2012), 127.
20 Richard Pike, Phantom Boys: True Tales from Aircrew of the McDonnell Douglas F-4 Fighter-Bomber (London: Grub Street Books, 2015), 105.
21 Barton Reppert, “Force without Fatalities,” Government Executive, May 1, 2001, http://www.govexec.com/magazine/magazine-defense/2001/05/force-without-fatalities/8992/.
22 Senate Armed Services Committee, Nomination of Maj. Gen. Anthony C. Zinni, USMC, for Appointment to the Grade of Lieutenant General and to Be the Commanding General, 1st Marine Expeditionary Force, 103rd Cong., 2nd sess., June 16, 1994, Hrg. 103–873, 32.
23 Richard L. Scott, “Conflict without Casualties: Non-Lethal Weapons in Irregular Warfare” (thesis, Naval Postgraduate School, 2007), 6–7.
24 Nick Lewer and Steven Schofield, Non-Lethal Weapons: A Fatal Attraction? (London: Zed Books, 1997), 20.
25 For more on this episode, see F. M. Lorenz, “Non-Lethal Force: The Slippery Slope to War?,” Parameters, 1996, 52–62.
26 Reppert, “Force without Fatalities.”
27 Graham T. Allison and Paul X. Kelley, “Nonlethal Weapons and Capabilities” (Washington, DC: Council on Foreign Relations, 2004), 13–18.
28 Ibid., 1.
29 For one of the earliest articulations of this idea, see David D. Clark and David R. Wilson, “A Comparison of Commercial and Military Computer Security Policies,” Proceedings of the 1987 IEEE Symposium on Research in Security and Privacy, 1987, 184–94.
30 John Leyden, “Israel Suspected of ‘Hacking’ Syrian Air Defences,” The Register, October 4, 2007, http://www.theregister.co.uk/2007/10/04/radar_hack_raid/.
31 Ben Elgin and Michael Riley, “Now at the Sands Casino: An Iranian Hacker in Every Server,” Bloomberg, December 11, 2014, http://www.bloomberg.com/bw/articles/2014-12-11/iranian-hackers-hit-sheldon-adelsons-sands-casino-in-las-vegas.
32 Dan Kloeffler and Alexis Shaw, “Dick Cheney Feared Assassination via Medical Device Hacking: ‘I Was Aware of the Danger,’ ” ABC News, October 19, 2013, http://abcnews.go.com/US/vice-president-dick-cheney-feared-pacemaker-hacking/story?id=20621434.
33 Weber, “Internet of Things”; and Greenberg, “Hackers Remotely Kill.”
34 Kim Zetter, Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon (New York: Crown, 2014), 97.
35 Ibid.
36 Ibid.; and Kaspersky Lab, “Gauss: Abnormal Distribution,” SecureList, August 9, 2012, https://securelist.com/analysis/publications/36620/gauss-abnormal-distribution/.
37 Rachel King, “Stuxnet Infected Chevron’s IT Network,” Wall Street Journal, November 8, 2012, http://blogs.wsj.com/cio/2012/11/08/stuxnet-infected-chevrons-it-network/.
38 Zetter, Countdown to Zero Day.
39 For a sampling of perspectives, see Will Goodman, “Cyber Deterrence: Tougher in Theory Than in Practice?,” Strategic Studies Quarterly, Fall 2010, 102–35; Murat Dogrul, Adil Aslan, and Eyyup Celik, “Developing an International Cooperation on Cyber Defense and Deterrence against Cyber Terrorism,” in 2011 3rd International Conference on Cyber Conflict, ed. C. Czosseck, E. Tyugu, and T. Wingfield (Tallinn, Estonia: Cooperative Cyber Defence Centre of Excellence, 2011), 43; and Amit Sharma, “Cyber Wars: A Paradigm Shift from Means to Ends,” Strategic Analysis 34, no. 1 (2010).
40 To be sure, the United States also sought to deter the Chinese from undertaking similar activity, but in large part the goal was to persuade the Soviet leadership not to undertake a nuclear attack.
41 For a fuller explication of the two variables here—absolute versus general deterrence and specific versus restrictive deterrence—and an application to cyber operations, see Ben Buchanan, “Cyber Deterrence Isn’t MAD; It’s Mosaic,” Georgetown Journal of International Affairs, International Engagement on Cyber IV, 15, no. 2 (2014): 130–40.
42 See also how the concept of entanglement relates to the calculation of an action’s costs. Joseph Nye, “Can China Be Deterred in Cyber Space?,” Foreign Policy Association (blog), April 6, 2016, http://foreignpolicyblogs.com/2016/04/06/can-china-be-deterred-in-cyber-space/.
43 Bill Marczak et al., “China’s Great Cannon,” Research Brief (Citizen Lab and Munk School of Global Affairs, University of Toronto, April 10, 2015), https://citizenlab.org/2015/04/chinas-great-cannon/.
44 David Sanger and William Broad, “Unstated Factor in Iran Talks: Threat of Nuclear Tampering,” New York Times, March 21, 2015, http://www.nytimes.com/2015/03/22/world/middleeast/unstated-factor-in-iran-talks-threat-of-nuclear-tampering.html.
45 Heather Murphy, “Ominous Text Message Sent to Protesters in Kiev Sends Chills around the Internet,” New York Times, January 22, 2014, http://thelede.blogs.nytimes.com/2014/01/22/ominous-text-message-sent-to-protesters-in-kiev-sends-chills-around-the-internet/.
46 Allison and Kelley, “Nonlethal Weapons and Capabilities.”
47 Ibid., 53.
Cyber Weapons and Precision-Guided Munitions
The development of precision-guided munitions (PGMs)—guided gravity bombs and cruise missiles, in particular—has had profound implications for warfare. Such weapons tend to cause much less collateral damage than their unguided predecessors do, and because they can remain effective when used from a distance, they can also reduce casualties sustained by the attacker. Thus, PGMs have altered national-level decision-making by lowering the political threshold for the use of force and by slowing the likely loss of public support during a sustained military campaign. PGMs have also increased the tactical effectiveness of military operations. They have dramatically improved force exchange ratios (at least against an adversary without these weapons) by reducing the likely number of weapons required to destroy individual targets. In doing so, they have eased logistical requirements and increased the pace at which military operations can be conducted.
Following the 1991 Gulf War, which provided the first high-profile demonstration of the effectiveness of PGMs, these weapons were widely seen—both in the United States and abroad—as revolutionary (or, at least, as the technological component of revolutionary military changes).1 Almost twenty-five years later, a number of analysts have argued that cyber weapons are effecting another revolution in military affairs.2 This controversial claim is inspired, at least in part, by the analogy between PGMs and cyber weapons.
The similarities between PGMs and sophisticated cyber weapons are striking.3 Cyber weapons also offer the potential for exquisite precision because, if well designed, they may affect only specific targets and inflict carefully tailored effects.4 Information technology (IT) is ubiquitous in military operations. As a result, the use of cyberspace for military purposes can confer tactical advantages on an attacker, including by further improving force exchange ratios, while placing few, if any, additional demands on the logistical network needed to supply frontline forces. Moreover, the use of cyber weapons involves minimal risk to the lives of the service personnel who “deliver” them and, in general, is likely to cause fewer civilian casualties than even the most carefully designed and executed kinetic attack.5 As a result, they could further lower the threshold for the use of force. Overall, in fact, states’ reasons for wanting cyber weapons are very similar to their reasons for wanting PGMs.
For all the benefits of cyber weapons, they undoubtedly have limitations too. The possibility that cyber weapons can be employed in highly discriminating ways does not imply they must be; like PGMs, cyber weapons can be used indiscriminately. Indeed, many publicly known cyber attacks to date have had distinctly imprecise effects on the target system (for example, by destroying entire computers) and have caused collateral damage to undetermined numbers of other systems and users. That said, there is also reason to suppose that the public record is not representative of cutting-edge cyber capabilities, since more discriminate attacks are easier to hide.
Setting aside the technical and operational challenges of achieving precision in practice, this chapter seeks to exploit the analogy with PGMs to understand some of the other potential limitations of cyber weapons and how militaries might respond, whether by mitigating those limitations or by capitalizing on them. The focus is on three challenges to the effective employment of PGMs and their cyber analogues. The first two challenges—intelligence, surveillance, and reconnaissance (ISR) and battle damage assessment (BDA)—relate to the effectiveness of enabling capabilities. The third challenge is the difficulty of achieving the political objectives for which a war is fought using only standoff attacks.
The Need for Effective Intelligence, Surveillance, and Reconnaissance
An important distinction is drawn in the physical sciences between precision and accuracy. The claim that the population of the United States is 62,571,389 is very precise, but it is not remotely accurate. Similarly, PGMs are almost invariably precise—in the sense that they almost always hit their aim points (or at least very nearby)—but because their intended targets may not always be located at those aim points, PGMs are not always accurate.
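To make the distinction concrete, consider the following toy simulation (the scenario and every number in it are invented for illustration). It models a salvo of guided weapons whose impacts cluster tightly around an aim point (high precision) even though that aim point is offset from the true target (low accuracy):

```python
import math
import random

# Toy illustration: 1,000 simulated impact points. The weapon is precise
# (impacts cluster tightly) but inaccurate (the aim point itself is wrong,
# e.g., because of a bad map fix). All values are invented.
TRUE_TARGET = (0.0, 0.0)      # actual target location (meters)
AIM_POINT = (120.0, -80.0)    # mistargeted aim point
DISPERSION = 5.0              # std. dev. of impacts around the aim point

random.seed(1)
impacts = [(random.gauss(AIM_POINT[0], DISPERSION),
            random.gauss(AIM_POINT[1], DISPERSION)) for _ in range(1000)]

mean_x = sum(x for x, _ in impacts) / len(impacts)
mean_y = sum(y for _, y in impacts) / len(impacts)

# Precision: average distance of impacts from their own mean (spread).
spread = sum(math.dist(p, (mean_x, mean_y)) for p in impacts) / len(impacts)
# Accuracy: distance of the mean impact point from the true target (bias).
bias = math.dist((mean_x, mean_y), TRUE_TARGET)

print(f"spread around mean impact: {spread:.1f} m (precise)")
print(f"mean impact vs. true target: {bias:.1f} m (inaccurate)")
```

The spread reflects the quality of the weapon’s guidance; the bias reflects the quality of the targeting intelligence, which is the subject of the rest of this section.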
To ensure that PGMs are accurate, the location of the intended target must be known both correctly and precisely.6 The ISR capabilities used to locate targets are, therefore, every bit as important as a weapon’s guidance and navigation system. The development of various technologies for acquiring overhead images has made the process of locating stationary, aboveground targets much easier, but it has not guaranteed success. For example, during the bombing of Yugoslavia in 1999, US military planners knew the street address of the Yugoslav Federal Directorate for Supply and Procurement was Bulevar Umetnosti 2.7 However, because of a combination of human error and out-of-date maps and databases, these planners incorrectly identified the building that corresponded to this address. As a result, although the weapons used in the subsequent strike did indeed hit their intended aim points, they destroyed not a legitimate military target but the Chinese Embassy.
Identifying the location of other types of targets—mobile and underground targets, in particular—is a much tougher problem. The challenge was illustrated during the 1991 Gulf War by the “great Scud hunt,” in which Coalition forces attempted to destroy Iraq’s Scud missiles and their mobile launchers. Coalition aircraft flew about 1,460 sorties against Scud-related targets—about 215 against the mobile launchers themselves—without scoring a single confirmed kill.8 The Gulf War Airpower Survey attributes this failure to inadequate ISR and, in particular, “the fundamental sensor limitations of Coalition aircraft.”9 These limitations were compounded by effective Iraqi tactics, such as the use of decoys, which complicated the task of an already inadequate ISR system. Since 1991 significant improvements in ISR (as well as in tactics) have been central to enhancing—at least to some extent—the ability of advanced militaries to destroy dispersed mobile forces, as evidenced by Israel’s moderately successful campaign to hunt down Hezbollah’s mobile rocket launchers in the 2006 Lebanon War.10
Intelligence collection is a similarly important enabling capability for cyber attacks. It contributes to identifying how to penetrate the target IT system, to understanding the system sufficiently well to create a weapon payload with the desired effect, and to ensuring that the payload’s effects are limited to the target network.
IT systems are most commonly penetrated as the result of human error. An attacker, for example, might send phishing emails containing a link that, if clicked on, causes malware to be installed. Such attacks are much more likely to be successful if the attacker exploits intelligence about targeted users’ names, contacts, and behavioral characteristics—an approach known as “spear phishing.” For example, a 2015 report by the cybersecurity firm FireEye details several recent spear-phishing attacks against Southeast Asian governments involved in territorial disputes with China.11 These attacks appeared to exploit relatively detailed intelligence about targeted users. Much more detailed intelligence can be required to penetrate more sophisticated defenses. For example, to penetrate IT systems at Iran’s Natanz enrichment plant, which are surrounded by an air gap, the perpetrators of the Stuxnet attack—believed to be the United States and Israel—reportedly first infected computers belonging to contractors. Personnel employed by these contractors then inadvertently transmitted the virus to Iran’s enrichment control system on USB flash drives (other infection strategies were apparently employed too).12 This approach could have been developed only with detailed knowledge about the organizational structure of Iran’s enrichment program. Of course, not all infection strategies rely on user error, but most (if not all) others usually require detailed intelligence about the target, such as knowledge of “zero-day” vulnerabilities—that is, software or hardware flaws that are unpatched because they are unknown to the vendor.
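The defensive mirror image of spear phishing makes clear just how specific the exploited intelligence is. The sketch below is purely illustrative (every name, address, and domain in it is invented) and flags one classic spear-phishing tell: a message whose display name matches a trusted contact while its underlying address comes from a different domain:

```python
# Illustrative sketch with invented contacts: flag mail that borrows a trusted
# display name but arrives from an unexpected domain, a common spear-phishing
# indicator. Real mail filters combine many such signals.
KNOWN_CONTACTS = {
    "Maria Chen": "maria.chen@fleet.example.mil",
    "John Park": "john.park@fleet.example.mil",
}

def looks_like_spoof(display_name: str, from_address: str) -> bool:
    """Return True if a known display name is paired with a foreign domain."""
    expected = KNOWN_CONTACTS.get(display_name)
    if expected is None:
        return False  # unknown sender: other checks would apply
    expected_domain = expected.rsplit("@", 1)[1]
    actual_domain = from_address.rsplit("@", 1)[1]
    return expected_domain != actual_domain

# A message "from" a known officer, sent via a look-alike webmail account:
print(looks_like_spoof("Maria Chen", "maria.chen.fleet@webmail.example.com"))  # True
print(looks_like_spoof("Maria Chen", "maria.chen@fleet.example.mil"))          # False
```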
Developing a payload that has the desired effects often requires equally—if not more—detailed intelligence. Stuxnet is a paradigmatic example. The code aimed to destroy Iranian centrifuges by reprogramming the enrichment plant’s control system so it altered their rotation speed while simultaneously sending falsely reassuring signals to operators. The development of Stuxnet was reportedly preceded by a huge intelligence-gathering operation on the Natanz facility, which itself relied, at least in part, on cyber espionage.13 The Stuxnet code was then tested on actual P-1 centrifuges (which are very similar to the IR-1 centrifuges operated by Iran). In one sense, Stuxnet—an exceptionally complicated and sophisticated virus—is something of an extreme example. However, it may well be representative of the challenges associated with developing cyber weapons that can have real-world effects similar to those of extremely precise kinetic weapons.14 Indeed, that the Stuxnet code also migrated into nontarget machines underscores the practical challenges of achieving precision, while the fact that the code did not activate and thus disrupt the functioning of these machines demonstrates the possibility and importance of sophisticated target reconnaissance and malware engineering.
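The reporting on Stuxnet describes a payload that altered centrifuge speeds while replaying normal readings to operators. The toy loop below illustrates only that replay concept (it is not a reconstruction of the actual code, and all values are invented), showing why operators watching the console would see nothing amiss:

```python
import itertools

# Toy simulation of a sensor-replay attack (invented values, not Stuxnet code):
# the payload ramps the true rotor speed while the operator console replays
# previously recorded "normal" readings.
NORMAL_RPM = 63000
recorded_normal = [NORMAL_RPM + d for d in (-5, 3, 0, -2, 4)]  # captured earlier
replay = itertools.cycle(recorded_normal)

actual_rpm = NORMAL_RPM
for step in range(5):
    actual_rpm += 2000            # payload quietly pushes the rotor off-nominal
    displayed_rpm = next(replay)  # operators see only the replayed values
    print(f"t={step}: actual={actual_rpm} rpm, display={displayed_rpm} rpm")
```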
There are, of course, important differences in intelligence collection for cyber and PGM strikes. Usually, one major purpose of intelligence collection in planning a kinetic strike is to identify the exact location of the target; by contrast, the physical location of an enemy IT system is rarely a concern in planning a cyber attack.
The consequences of intelligence failures are also potentially dissimilar. Poor intelligence about the target of a kinetic attack—as the 1999 bombing of the Chinese Embassy in Belgrade typifies—can lead to high costs in the form of civilian deaths, diplomatic fallout, and reputational damage. For two reasons, the consequences of poor intelligence for a cyber attack are likely to be less significant than for a kinetic attack. First, there is a distinct chance that a cyber attack based on poor intelligence will have no effect whatsoever. To be sure, this outcome is not guaranteed; poor intelligence can lead to the cyber equivalent of collateral damage. A 2008 cyber attack by the United States against a terrorist website in Saudi Arabia, for example, is reported to have disrupted more than three hundred other servers because the target IT system was insufficiently understood.15 However, good programming can presumably minimize the risks of collateral damage, and even if it cannot, collateral damage restricted to cyberspace is likely to be less costly than collateral damage in physical space. Second, cyber attacks are more plausibly deniable than kinetic attacks are. As a result, the reputational cost of launching a cyber attack that causes collateral damage is likely to be less as well.
That said, it is also possible that cyber attacks will be held to a higher standard than kinetic strikes and thus raise the cost of intelligence failures, even if cyber collateral damage is indeed comparatively modest. In fact, precisely because the development of PGMs has changed expectations about what constitutes acceptable collateral damage, advanced states are now held to a much higher standard in assessing whether the application of kinetic force has been proportionate and whether sufficient care has been taken to discriminate between military and civilian targets. Given the potential for cyber weapons to be even “cleaner” than PGMs, cyber operations may be held to a still higher benchmark—at least where they are conducted by states with the capability to develop discriminating weapons.16
In any case, there are some interesting analogies between collecting intelligence for cyber operations and for kinetic strikes. One particular challenge of acquiring intelligence for cyber attacks is the inherent mutability of IT systems. For example, security protocols and antivirus software can be improved, zero-day vulnerabilities can be discovered and (usually) patched, software can be updated, and hardware can be replaced. As a result, a cyber weapon cannot remain effective indefinitely, and predicting how long it will remain potent is impossible. In this way, a particularly apt analogy from the physical world is the challenge of gathering intelligence for targeting a mobile asset. Locating a mobile target while it happens to be stationary makes striking it much easier, but given the difficulty of predicting when the target will next move, the window of opportunity for conducting the attack may be of an inherently unpredictable duration.
Given the challenges of targeting mobile assets, many nations have responded to the development of PGMs by increasing the mobility of their military forces (even though mobile systems are almost inevitably more lightly armored than their stationary equivalents and hence easier to destroy if their location is discovered). The analogous approach to cyber defense is to focus resources not only on hardening the IT system—that is, identifying and patching vulnerabilities—but also on regularly modifying an IT system simply for the sake of changing it, a strategy that has been termed “polymorphic cyber defense.”17 This approach attempts to render an attacker’s knowledge of the target system obsolete almost as soon as it is obtained. One of the leaders in this field called its technology “Moving Target Defense,” making the analogy to the physical world absolutely explicit.18
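A minimal sketch may help fix the idea. It assumes a deliberately simplified system in which the randomized, externally visible detail is a network port; real moving-target defenses randomize far subtler properties, such as memory layout:

```python
import random
import time

def rotate_port(current=None):
    """Pick a fresh high port, never reusing the current one."""
    while True:
        candidate = random.randint(20000, 60000)
        if candidate != current:
            return candidate

# Periodically re-randomize the service's port so reconnaissance gathered in
# one epoch is stale by the next. The interval here is shortened for the demo;
# in practice it might be minutes or hours.
port = rotate_port()
for epoch in range(3):
    print(f"epoch {epoch}: service listening on port {port}")
    time.sleep(0.1)
    port = rotate_port(port)  # an attacker's earlier scan result is now obsolete
```

Reconnaissance completed in one epoch is useless in the next, which is precisely the effect that mobility has on kinetic targeting.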
The primary challenge to polymorphic cyber defenses is probably the risk of introducing bugs that could prevent a system from performing as it should. The scale of this risk presumably depends on how much of the system and which parts of it are changed and on the size of the conceptual space of the allowed changes. Thus, there may well be a potential trade-off between greater security and reduced usability. Where states perceive the sweet spot to be may determine the prospects of polymorphic cyber defenses for military applications.
In the physical world, one approach to overcoming the challenge posed by mobility is to reduce the time between detection and engagement. To this end, sensors and weapons have been integrated into the same platform and, in some systems, given the capability to engage autonomously. Israel’s Harpy unmanned combat aerial vehicle, for example, is designed to loiter and detect enemy air defense radars (which are frequently mobile) and to attack them automatically.19 An analogous cyber weapon would have the capability to detect and exploit vulnerabilities autonomously.20 This author is not qualified to speculate on whether such an “intelligent” cyber weapon could be developed, but the Defense Advanced Research Projects Agency is sponsoring research, including the Cyber Grand Challenge, into cyber defenses that could, completely autonomously, “identify weaknesses instantly and counter attacks in real time.”21 Such efforts may be dual use: research in detecting cyber vulnerabilities of friendly IT systems and enhancing their defenses could contribute to the development of offensive cyber weapons that can discover enemy IT vulnerabilities.
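The core loop of such autonomous vulnerability discovery can be suggested with a deliberately trivial fuzzer. The sketch below invents a toy target function and is orders of magnitude simpler than anything fielded in the Cyber Grand Challenge; it simply throws random inputs at the target and records those that crash it:

```python
import random
import string

def fragile_parser(record: str) -> int:
    """Toy target with a hidden flaw: it assumes every record contains a colon."""
    key, value = record.split(":", 1)  # raises ValueError if no ":" is present
    return len(key) + len(value)

# A minimal fuzzing loop: generate random inputs and record any that crash
# the target. Everything here is invented for illustration.
random.seed(0)
crashes = []
for _ in range(10_000):
    candidate = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        fragile_parser(candidate)
    except Exception:
        crashes.append(candidate)

print(f"found {len(crashes)} crashing inputs, e.g. {crashes[0]!r}")
```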
Beyond mobility, numerous other countermeasures to PGMs have been employed, including air defenses, hardening, deception, interference with navigation and command and control, and human shields. These countermeasures provide fertile ground for further extending the analogy between defenses to PGMs and defenses to cyber weapons (and take it far beyond interference with ISR capabilities), as a few examples demonstrate. Air defenses, which are designed to shoot down incoming PGMs, are analogous to active cyber defenses in which the defender uses its own virus (sometimes known as a white worm) to disable an attacker’s. Another countermeasure in kinetic warfare is interfering with the satellite navigation signals, such as those provided by the US Global Positioning System, that many modern PGMs use. Spoofing, for example, involves transmitting fake navigation signals, which are designed to mislead a weapon about its location. Conceptually, spoofing is similar to sinkholing, a form of active cyber defense that involves redirecting data being transmitted by a virus to a computer controlled by the victim of an attack.
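To make the sinkholing comparison concrete, the sketch below (conceptual only, with invented domains and addresses drawn from the RFC 5737 documentation ranges) shows a defender-controlled resolver answering queries for known command-and-control domains with the address of a machine the defender owns:

```python
# Conceptual sketch of DNS sinkholing with invented domains and addresses:
# queries for known command-and-control domains are answered with the address
# of a defender-controlled host, redirecting the implant's traffic.
SINKHOLE_IP = "192.0.2.10"  # defender-controlled host (RFC 5737 example range)
KNOWN_C2_DOMAINS = {"update-service.example.net", "cdn-sync.example.org"}

def resolve(domain, real_dns):
    """Return the sinkhole address for known C2 domains, else the real answer."""
    if domain in KNOWN_C2_DOMAINS:
        return SINKHOLE_IP  # the implant now reports to the defender
    return real_dns.get(domain, "NXDOMAIN")

real_dns = {"news.example.com": "198.51.100.7"}
print(resolve("update-service.example.net", real_dns))  # 192.0.2.10 (sinkholed)
print(resolve("news.example.com", real_dns))            # 198.51.100.7
```

Like navigation spoofing, the defense works by feeding the weapon false information about where it should send itself or its data.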
An entirely different approach to defending against PGMs (or, indeed, any other form of kinetic attack) is to raise the political costs of a strike. For example, both states and terrorist organizations have used civilians as human shields by hiding weapons in schools, hospitals, and mosques.22 More prosaically, in every state, many elements of war-supporting infrastructure—including power stations, electricity grids, and oil refineries—are dual use in that they serve both civilian and military purposes. Even if attacking such facilities is legally permissible, it can still be politically costly.
In the cyber world, civilian and military networks also are often one and the same. For example, an overwhelming majority of US military communications data is believed to pass through public networks that also handle civilian data.23 Going further, organizations that have civilian functions can also conduct offensive cyber operations. For example, China’s National Computer Network Emergency Response Technical Team—a body under the Ministry of Industry and Information Technology that is nominally responsible for defending China’s civilian networks from attack—may have been involved in offensive cyber operations.24 This intermingling raises the potential political cost of cyber operations against military targets through the risk of simultaneously implicating civilian assets. The existence of such intermingling inevitably raises the question of whether it is part of a deliberate strategy designed to defend military assets in cyberspace.
The Importance of Effective Battle Damage Assessment
Battle damage assessment is a second enabling capability that is needed to exploit precision to its full extent. Knowledge that a kinetic strike has been successful allows an attacker to avoid wasting resources on unnecessary repeated strikes against the same target. Immediate feedback also enables the attacker to capitalize quickly on the success. For example, if timely confirmation is available that an air defense battery protecting an underground bunker has been destroyed or disabled, mission commanders can exploit the success by authorizing aircraft to attack the bunker before the adversary can take countermeasures (such as evacuation). Conversely, confirmation that the strike against the air defense system was unsuccessful can be used to authorize another attempt to destroy it. The costs of ineffective (or entirely absent) BDA in this scenario could be quite high. If the strike against the air defense system is incorrectly believed to have been successful, the lives of the pilots sent to attack the bunker will be at risk. If the strike was successful but its outcome cannot be confirmed, mission commanders may waste resources on further strikes as well as an opportunity to destroy the bunker.
As a general rule, the more discriminating a strike is, the more difficult BDA becomes. The particular challenges of BDA for PGMs became apparent in the 1991 Gulf War. To give an example, overhead imagery proved relatively ineffective at assessing the effects of attacks on hardened structures. When these attacks were successful, they generally caused extensive internal damage but very little external damage; often the only visible effect of the attack was a hole made by the incoming bomb.25 Image analysts thus tended to seriously underestimate the effectiveness of strikes against such targets. Thirteen years later, a 2004 report by the US General Accounting Office on the wars in Afghanistan and Iraq highlighted the continued “inability of damage assessment resources to keep up with the pace of modern battlefield operations.”26 The results included the “inefficient use of forces and weapons” and ground advances that were slowed unnecessarily.27
In extreme cases, the lack of effective BDA can have truly major consequences. In early 2011 after the US intelligence community acquired evidence of Osama bin Laden’s whereabouts, senior American officials debated whether and how to attempt to kill him.28 Some of President Barack Obama’s key advisers reportedly recommended using an aircraft-launched standoff PGM. One of the main reasons—if not the main reason—why Obama rejected this course of action was apparently its lack of any reliable way to verify the strike’s success. It could, therefore, have been very difficult to justify the infringement of Pakistani sovereignty, and the United States might have wasted considerable resources in continuing efforts to find bin Laden if he escaped. Obama’s decision to use special forces solved the BDA problem but created other extremely serious risks. For example, if Pakistani troops had captured the Americans, the consequences for relations between Washington and Islamabad (not to mention Obama’s presidency) would have been much more serious than if a standoff munition had been used.
From a tactical perspective, BDA after a cyber attack is important for many of the same reasons as after a kinetic attack. In fact, such assessments may be even more important because cyber attacks can often produce temporary or reversible effects. Therefore, an attacker may need to discover not just whether the attack achieved the desired effect initially but also whether the target IT system is still compromised and its attack undetected.
The strategic importance of cyber BDA is likely to depend on the particular attack scenario. Because the use of cyber weapons is generally more deniable than the use of kinetic weapons and because cyber attacks may sometimes even go undetected (especially if unsuccessful), states may be less concerned about the need to provide ex post facto justifications for a strike, rendering BDA less important for cyber operations than for kinetic ones. Had some (extremely) hypothetical way to kill bin Laden with a cyber weapon been available, for example, it is conceivable that Obama might have opted for it even without a reliable means of conducting BDA. Using a cyber weapon, however, carries the risk that it might spread and infect third-party, or perhaps even friendly, IT systems. BDA would be extremely important to enable rapid action to mitigate the consequences.
Cyber BDA has been discussed very little in the open literature, so any discussion is necessarily fairly speculative.29 Nonetheless, governments must have confronted this question. Israel, for example, is reported to have disabled Syrian air defenses with a cyber weapon, in combination with other tools, before its aircraft destroyed Damascus’s clandestine plutonium-production reactor in 2007.30 Given that the human and diplomatic costs of having its aircraft shot down would have been high, Israel presumably had some means of verifying that it had indeed disabled Syria’s air defenses.
Network exploitations are presumably the principal tool for cyber BDA. (If a cyber attack has physical effects, other techniques for conducting BDA may be possible. Israel, for example, may have been able to monitor the electromagnetic emissions from Syria’s radars.) Indeed, one reason why cyber BDA may be less challenging than physical BDA is that a cyber weapon can potentially be programmed either to conduct an assessment of its own effects or to expropriate information on which such an assessment can be based. By contrast, adding sensors and transmitters for BDA onto a kinetic warhead is extremely difficult, if not impossible.
On balance, however, there are good reasons to expect that cyber BDA is likely to be more challenging than physical BDA, especially for highly precise attacks. (BDA for indiscriminate cyber attacks—against critical infrastructure, say—presents far fewer challenges.) For example, a cyber attack that is designed to prevent an adversary from doing something, such as launching a missile, could present BDA challenges since the attacker might not know whether the cyber weapon had worked until the adversary tried to launch it. More generally, because the effects of many cyber attacks are temporary or reversible, effective BDA cannot rely on a “snapshot” of the target system at a certain moment; instead, continuous monitoring is required. Even if such monitoring is possible, cyber defenses may prevent the information from being sent to the attacker in a timely way. For example, if a cyber weapon is transported across an air gap in a physical storage device, information relevant to BDA could potentially be transported in the same way in the opposite direction; but such a process could be slow, perhaps too slow to be militarily useful. Finally, if using a cyber weapon reveals its own existence, the owner of the targeted IT system can take steps to secure its network and make it less visible, potentially defeating any exploitation being used for BDA. More ambitiously, the owner might even try to fool the attacker by allowing it to exfiltrate deliberately misleading information about the effectiveness of the attack.31
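The snapshot problem can be stated compactly in code. In the hypothetical time series below (all states are invented), a single observation taken while the target is down reports success, whereas continuous monitoring reveals that the effect was later reversed:

```python
# Hypothetical observations of a targeted system, sampled over time. The
# attack takes the target down at t=1, but it is restored from backup at t=3.
observed_states = ["up", "down", "down", "up", "up"]

def snapshot_bda(states, t):
    """Assessment from a single observation at time t."""
    return "attack succeeded" if states[t] == "down" else "attack failed"

def continuous_bda(states):
    """Assessment from the full time series: detects reversal of effects."""
    if "down" not in states:
        return "attack failed"
    if states[-1] == "up":
        return "attack succeeded but effects were reversed"
    return "attack succeeded and effects persist"

print(snapshot_bda(observed_states, t=2))  # "attack succeeded" (misleading)
print(continuous_bda(observed_states))     # "attack succeeded but effects were reversed"
```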
Overall, cyber BDA appears to be both important and difficult. Moreover, efforts to defeat BDA could perhaps become a significant feature of cyber warfare. To this author’s knowledge, defeating BDA has not been a major focus of states’ attempts to undermine advances in PGMs, but efforts to defeat ISR capabilities could have that effect. By contrast, it seems plausible that states could invest significant resources in trying to defeat cyber BDA by developing rapid response capabilities. Indeed, the US military already has in place “Cyber Protection Forces . . . [to] defend priority [Department of Defense] networks and systems,” although whether these forces are tasked with attempting to foil adversary BDA attempts is unknown.32
Could Cyber Warfare Be Strategic?
Wars are fought for a political purpose. From almost as soon as aircraft were developed, proponents of airpower argued, or hoped, that it would prove to be strategic—that is, capable of achieving political objectives by itself. Before the advent of precision-guided weapons, decades of practical experience largely discredited these advocates. Large-scale conventional bombing—including during World War II, the Korean War, and the Vietnam War—may have been called strategic, but this description can be applied accurately only to its scale, not to its effects.33 To be sure, dumb bombs have been useful military tools on occasion, but with the probable exception of the atomic bombs dropped on Japan in 1945, they never proved decisive.
The development of PGMs has revived belief in the strategic value of standoff attacks—at least if one goes by the actions of technologically advanced states. The United States and its allies have largely relied on air-delivered PGMs and ship- and submarine-launched cruise missiles as their sole or primary military tools in multiple wars: Yugoslavia in 1999, Libya in 2011, and the conflict against the so-called Islamic State that has been waged in Iraq and Syria since 2014. Additionally, some senior US military officers expressed hope, both publicly and privately, ahead of the 1991 Gulf War that the air campaign would force Iraq to withdraw from Kuwait.34 The tendency to count on standoff strikes is not exclusively an American one. Israel’s 2006 war in Lebanon and Saudi Arabia’s ongoing involvement in the civil war in Yemen both started as standoff operations, with ground forces deployed only after PGMs proved ineffective at achieving political objectives.
Standoff operations may be extremely attractive to decision makers, but as these examples demonstrate, they have rarely been effective. The bombing of Yugoslavia in 1999 is the one indisputable success, although it was a close-run thing. Seventy-eight days of bombing were required—far longer than originally anticipated—by which time the coalition was close to collapse. Understanding why standoff strikes with PGMs have failed to achieve their goals casts light on the question of whether cyber weapons could prove to be strategic.35
There are two ideal-type strategies by which the employment of PGMs or cyber weapons could effect political change.36 A compellence strategy seeks to inflict pain and demonstrate the willingness to inflict more with the aim of convincing an adversary to concede. A denial strategy, by contrast, seeks to weaken the military forces that an enemy is using to prosecute a conflict (and, perhaps, the enemy regime’s grip on power). In the real world, these strategies can become indistinct. For example, attacks against a state’s military-industrial sector can be justified as denial but may also have, intentionally or otherwise, a punitive effect on civilians. Such attacks are exemplified by both Allied and Axis bombing campaigns in World War II and by much more targeted strikes, such as those against Yugoslavia’s electricity and water system in 1999.37 Conversely, a denial strategy involving strikes against exclusively military targets would administer significant punishment if the adversary’s leadership values its own grip on power and its military forces more than it does its citizens’ lives and well-being.38
Almost by definition, a denial strategy cannot precipitate political change if only standoff weapons—whether kinetic or non-kinetic—are employed. Even if standoff strikes succeed in degrading an adversary’s military capabilities, deployed forces are still required to capitalize on this weakness. In 2001, at the start of the Afghanistan war, for example, US airpower played a significant role in weakening Taliban forces, but an armed opposition with broadly equivalent fighting skills to the Taliban was still needed to take and hold territory in physical battles.39 This opposition force took the form of the Northern Alliance, assisted by US special operations forces. Conversely, Saudi-led airstrikes against Yemen, which began in March 2015, failed to restore President Abdrabbuh Mansour Hadi to power after he had been deposed in a Houthi-led rebellion, in large part because he lacked a ground force to take advantage of the strikes. Riyadh apparently hoped that the strikes would spark an anti-Houthi tribal uprising, but no such uprising occurred.40 As a result, foreign-trained Yemeni fighters were inserted into Yemen in May 2015, followed by forces from Saudi Arabia and the United Arab Emirates in progressively larger numbers.41
Similarly, even if cyber attacks prove highly effective at disrupting an enemy’s military operations, physical force will almost certainly be required to exploit this disruption. To be sure, the scenarios in which cyber attacks might prove useful could be very different from the Afghan or Yemeni scenarios since potential adversaries with cyber vulnerabilities range from non-state actors to sophisticated nation-states. But in all cases, success would surely demand a physical force in addition to a cyber force. In fact, against a sophisticated state, such as Russia or China, very considerable physical force might be needed as the state’s military would probably remain formidable even after its networks had been compromised—and not least because, in such a conflict, US networks would probably be compromised too.42
A second issue is whether cyber weapons could be used to punish an adversary until it submitted. Much of the existing debate on this point revolves around essentially technical questions.43 How plausible are cyber attacks against critical infrastructure? If such attacks did take place, would they cause large-scale death and long-lasting damage, or would their effects be less costly and more temporary? An even more fundamental question needs to be addressed: Even if cyber attacks against critical infrastructure were relatively easy and even if such attacks caused massive and long-lasting damage, would they actually be effective at compellence?
The history of punitive kinetic attacks demonstrates that, under some circumstances, states (and non-state actors) can withstand astonishing levels of punishment without conceding. To be sure, whether highly damaging cyber attacks were effective at compellence might well depend on what was at stake and the commitment of society at large to the cause. As the bombing of Yugoslavia in 1999 demonstrates, standoff operations can sometimes be effective in forcing one state to bend to another’s will. But as the conventional bombing of British, German, and Japanese cities during World War II also illustrates, much greater levels of death and destruction can prove insufficient. Given that cyber weapons are unlikely to inflict costs on anything approaching that scale—even if the direst predictions about their destructive potential are realized—it should not be assumed that they would be effective tools for compellence.
Moreover, compellence may be even more difficult with cyber weapons than with kinetic weapons for at least one reason: compellence does not rely on inflicting pain per se but on the threat to keep doing so until an adversary concedes.44 Meting out some punishment may well be necessary to make such a threat credible, but inflicting even high levels of pain may not establish credibility if the victim believes that the attacker is unwilling or unable to continue. This theoretical problem could become a real complication in a campaign of cyber compellence since, after the first wave of attacks, the victim might be able to take steps that would make further attacks much more difficult. Most obviously, the victim could analyze the virus (or viruses) that perpetrated the attacks and the means by which its IT systems were penetrated and use this information to patch vulnerabilities. Next, it could implement enhanced cybersecurity measures to reduce generic vulnerabilities, and it could try to “hack back” against the perpetrator to disrupt further attacks. Such steps would reduce the likelihood of compellence being successful. Again, however, there could be no guarantees. The time required to analyze the cyber weapon could be too long for the results to be useful in preventing further attacks.45 Even if the analysis could be completed quickly, its utility might be limited if the attacker had developed multiple cyber weapons that all worked in different ways. Nonetheless, the basic point remains: compared to kinetic compellence, cyber compellence faces additional challenges.
To be sure, steps to enhance the cybersecurity of critical infrastructure are highly worthwhile. Although the repeated unsuccessful attempts at compellence with kinetic weapons suggest that cyber compellence might also prove unsuccessful, it still might be attempted. Meanwhile, some actors, including terrorists, may try to attack critical infrastructure for reasons other than compellence. Nonetheless, understanding the challenges of cyber compellence is useful in constructing more effective cyber defenses. Specifically, rapid response capabilities that enable a state to analyze cyber attacks on critical infrastructure quickly and use that information to prevent further attacks would be particularly useful in defeating attempts at compellence. Indeed, the US Department of Defense has recently stood up “National Mission Forces . . . [to] defend the United States and its interests against cyberattacks of significant consequence.”46 While the exact task of these forces is not publicly known, their mere existence might contribute to deterring attempts at compellence.
***
Focusing on the analogy between cyber weapons and PGMs risks giving the incorrect impression that the former are simply a new kind of the latter. They are not; the two have many important differences, both obvious and subtle. Cyber weapons can often reach their targets effectively instantaneously, though they can also be designed to have a delayed effect. Kinetic weapons generally travel much more slowly than cyber weapons, but if and when they reach their targets, they usually have an almost instantaneous effect. PGMs are also limited by range, a concept without much meaning in cyberspace. Some cyber weapons can create reversible effects, whereas the effects of kinetic weapons are almost always irreversible.
More subtly, cyber vulnerabilities can usually be addressed relatively quickly. Thus, it is unlikely that a cyber weapon can be used repeatedly over the course of a multiday conflict without becoming obsolete. In fact, a cyber weapon might be effective only once. As a result, even if a state has stockpiled many different cyber weapons, it likely will face strong pressures to be highly selective in their employment. By contrast, while using a kinetic weapon certainly can provide an adversary with information that is useful in developing countermeasures, exploiting such information generally takes much longer than in the case of cyber weapons (the development of a new air defense system, for example, typically takes years). Therefore, advanced states can and do stockpile PGMs of a given type in large quantities and are increasingly using such weapons by default instead of dumb bombs. Indeed, as conflicts proceed, states tend to use ostensibly precise weapons in increasingly less selective ways, vitiating the putative special purpose of these weapons and depleting their stocks.
Another potential false impression is that, based on the experience of PGMs, cyber weapons are unlikely to have significant implications for warfare. While it is still far too early to assess with any confidence exactly how military operations in cyberspace will change armed conflict, such changes could well be far-reaching. Indeed, for all the limitations associated with PGMs, plenty of evidence shows that their development does represent a revolution in military affairs. These weapons are not usually able to achieve war aims by themselves, but they have altered leaders’ calculations about the use of force and have thus altered national strategies. Moreover, because of the challenges associated with the effective employment of PGMs, the precision revolution is still incomplete. As states further develop ISR and BDA capabilities (and overcome other barriers), PGMs can be expected to become more potent at the tactical level and perhaps even at the strategic level too.
Similarly, the advent of cyber warfare will probably further lower the threshold for the use of force. Senior officials—at least in the United States—have said as much. In 2014, for example, Eric Rosenbach, then an assistant secretary of defense, stated, “The place where I think [cyber operations] will be most helpful to senior policymakers is what I call in ‘the space between.’ What is the space between? . . . You have diplomacy, economic sanctions . . . and then you have military action. In between there’s this space, right? In cyber, there are a lot of things that you can do in that space between that can help us accomplish the national interest.”47
Yet the analogy with PGMs suggests that the ability of states to employ cyber weapons effectively is likely to lag their desire to use them. In fact, it may take decades not only for states to understand the limitations of cyber weapons and whether and how these limitations can be overcome but also for the full implications of cyber warfare to become apparent.
Notes
1 For example, Andrew F. Krepinevich, “Cavalry to Computer: The Pattern of Military Revolutions,” The National Interest 37 (Fall 1994): 30–42.
2 For example, Joseph S. Nye Jr., “Nuclear Lessons for Cyber Security?,” Strategic Studies Quarterly 5, no. 4 (Winter 2011): 18, http://www.airuniversity.af.mil/Portals/10/SSQ/documents/Volume-05_Issue-4/Nye.pdf; and Kenneth Geers, Strategic Cyber Security (Tallinn, Estonia: NATO Cooperative Cyber Defence Centre of Excellence, June 2011), 112, https://ccdcoe.org/publications/books/Strategic_Cyber_Security_K_Geers.PDF.
3 This comparison has been discussed in, for example, Peter Dombrowski and Chris C. Demchak, “Cyber War, Cybered Conflict, and the Maritime Domain,” Naval War College Review 67, no. 2 (Spring 2014): 85–87, https://www.usnwc.edu/getattachment/762be9d8-8bd1-4aaf-8e2f-c0d9574afec8/Cyber-War,-Cybered-Conflict,-and-the-Maritime-Doma.aspx; and Andrew F. Krepinevich, Cyber Warfare: A “Nuclear Option”? (Washington, DC: Center for Strategic and Budgetary Assessments, 2012), 7–12, http://csbaonline.org/uploads/documents/CSBA_e-reader_CyberWarfare.pdf.
4 For the purposes of this chapter, a cyber weapon is defined as a computer program designed to compromise the integrity or availability of data in an enemy’s IT system for military purposes, and a cyber attack is defined as the use of a cyber weapon for offensive purposes. A cyber weapon may be used by itself or in concert with other weapons (kinetic or otherwise). Its effects may be felt purely in cyberspace or in physical space too. Cyber exploitation that compromises only the confidentiality of data is not considered a form of cyber attack.
5 Tim Maurer, “The Case for Cyberwarfare,” Foreign Policy, October 19, 2011, http://foreignpolicy.com/2011/10/19/the-case-for-cyberwarfare/.
6 In technical jargon, the target location error should not be significantly larger than the weapon’s circular error probable.
7 George Tenet, “DCI Statement on the Belgrade Chinese Embassy Bombing,” testimony to the Permanent Select Committee on Intelligence of the US House of Representatives, July 22, 1999, https://www.cia.gov/news-information/speeches-testimony/1999/dci_speech_072299.html.
8 Barry D. Watts and Thomas A. Keaney, “Effects and Effectiveness,” in Gulf War Air Power Survey, vol. 2 (Washington, DC: US Government Printing Office, 1993), pt. 2, 331–32, http://www.dtic.mil/dtic/tr/fulltext/u2/a279742.pdf.
9 Ibid., 340.
10 Uzi Rubin, The Rocket Campaign against Israel during the 2006 Lebanon War, Mideast Security and Policy Studies 71 (Ramat Gan: The Begin-Sadat Center for Strategic Studies, Bar-Ilan University, June 2007), 19–21, https://besacenter.org/mideast-security-and-policy-studies/the-rocket-campaign-against-israel-during-the-2006-lebanon-war-2-2/.
11 FireEye and Singtel, Southeast Asia: An Evolving Cyber Threat Landscape, FireEye Threat Intelligence (Milpitas, CA: FireEye, March 2015), 13, https://www.fireeye.com/content/dam/fireeye-www/current-threats/pdfs/rpt-southeast-asia-threat-landscape.pdf. In one attack, for example, one state’s air force was targeted with “spear-phishing emails that referenced the country’s military and regional maritime disputes . . . [and that] were designed to appear to originate from email accounts associated with other elements of the military.” The report implies—but does not state explicitly—that China was responsible for the attacks.
12 Kim Zetter, Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon (New York: Crown, 2014), 337–41; and Nicolas Falliere, Liam O. Murchu, and Eric Chien, W32.Stuxnet Dossier, Version 1.4 (Cupertino, CA: Symantec Security Response, February 2011), especially 7–11, https://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf.
13 David E. Sanger, “Obama Order Sped Up Wave of Cyberattacks against Iran,” New York Times, June 1, 2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.
14 William A. Owens, Kenneth W. Dam, and Herbert S. Lin, eds., Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyber Attack Capabilities (Washington, DC: National Academies Press, 2009), 118; and Krepinevich, Cyber Warfare, 43–44.
15 Ellen Nakashima, “U.S. Eyes Preemptive Cyber-Defense Strategy,” Washington Post, August 29, 2010, http://www.washingtonpost.com/wp-dyn/content/article/2010/08/28/AR2010082803312.html.
16 One worrying possibility is that nations with fewer resources will focus on simpler, less discriminating cyber weapons that contain fewer safeguards against their spread. Not only are such weapons likely to cause much more collateral damage than more sophisticated cyber weapons would, but also, unlike with PGMs, the damage might be felt far from the target's physical location.
17 Dudu Mimran, “The Emergence of Polymorphic Cyber Defense,” dudumimran.com (blog), February 10, 2015, http://www.dudumimran.com/2015/02/the-emergence-of-polymorphic-cyber-defense.html.
18 Morphisec, “What We Do,” http://www.morphisec.com/what-we-do/. Interestingly, this technology is designed to “fit around” existing Windows-based operating systems.
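By way of conceptual illustration only (this sketch is our own caricature and does not describe Morphisec's actual mechanism), the moving-target idea behind polymorphic defense can be reduced to relocating a protected resource on every load, so that an attacker's hard-coded assumptions about where it lives usually fail:

```python
import random

# Toy model of moving-target ("polymorphic") defense. The defender
# relocates a sensitive resource in a toy 16-bit address space on each
# load; an exploit that hard-codes the resource's last observed address
# therefore almost always misses on subsequent runs.
ADDRESS_SPACE = 2 ** 16

def load_resource() -> int:
    """Place the resource at a fresh random address on each load."""
    return random.randrange(ADDRESS_SPACE)

def exploit(hardcoded_address: int, actual_address: int) -> bool:
    """A caricatured exploit succeeds only if its hard-coded address is right."""
    return hardcoded_address == actual_address

# The attacker observes the resource once, then reuses that address.
observed = load_resource()
trials = 100_000
hits = sum(exploit(observed, load_resource()) for _ in range(trials))
print(f"attacker success rate: {hits / trials:.5%}")  # roughly 1 in 65,536
```

In a real system the randomized element is the internal memory structure of a running process rather than a toy address, but the defensive logic is the same: each run invalidates the reconnaissance on which the previous attack relied.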
19 “IAI Harpy,” Jane’s Unmanned Aerial Vehicles and Targets, September 30, 2015, www.ihs.com.
20 Existing cyber weapons may be able to execute preplanned attacks autonomously, but that is a much less stressing task.
21 Defense Advanced Research Projects Agency (DARPA), “Seven Teams Hack Their Way to the 2016 DARPA Cyber Grand Challenge Final Competition,” July 8, 2015, http://www.darpa.mil/news-events/2015-07-08.
22 For example, Terrence McCoy, “Why Hamas Stores Its Weapons inside Hospitals, Mosques and Schools,” Washington Post, July 31, 2014, https://www.washingtonpost.com/news/morning-mix/wp/2014/07/31/why-hamas-stores-its-weapons-inside-hospitals-mosques-and-schools/.
23 Michael Gervais, “Cyber Attacks and the Law of War,” Journal of Law & Cyber Warfare 1, no. 1 (Winter 2012): 78–79.
24 Bill Marczak et al., “China’s Great Cannon,” Research Brief (The Citizen Lab and Munk School of Global Affairs, University of Toronto, April 2015), 11, https://citizenlab.org/wp-content/uploads/2009/10/ChinasGreatCannon.pdf.
25 Watts and Keaney, “Effects and Effectiveness,” 30–47.
26 US General Accounting Office, Military Operations: Recent Campaigns Benefited from Improved Communications and Technology, but Barriers to Continued Progress Remain, GAO-04-547 (Washington, DC: General Accounting Office, June 2004), 24, http://www.gao.gov/new.items/d04547.pdf.
27 Ibid., 23–24.
28 Mark Bowden, “The Hunt for ‘Geronimo,’ ” Vanity Fair, October 12, 2012, http://www.vanityfair.com/news/politics/2012/11/inside-osama-bin-laden-assassination-plot.
29 For rare examples, see Martin C. Libicki, Conquest in Cyberspace: National Security and Information Warfare (Cambridge: Cambridge University Press, 2007), 87–90; and Maj. Richard A. Martino, “Leveraging Traditional Battle Damage Assessment Procedures to Measure Effects from a Computer Network Attack” (graduate research project, Air Force Institute of Technology, Air University, June 2011), http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA544644. Much of the available literature on BDA focuses on defensive BDA, or a victim’s assessment of the effects of a cyber attack on its own networks. This task is easier than offensive BDA because the attacker, unlike the defender, has no guarantee of maintaining access to the victim’s networks.
30 David A. Fulghum, Robert Wall, and Amy Butler, “Cyber-Combat’s First Shot: Israel Shows Electronic Prowess: Attack on Syria Shows Israel Is Master of the High-Tech Battle,” Aviation Week & Space Technology 167, no. 21 (November 26, 2007): 28–31.
31 Libicki, Conquest in Cyberspace, 88–90.
32 US Department of Defense, “The Department of Defense Cyber Strategy” (Washington, DC: Department of Defense, April 2015), 6, http://www.defense.gov/Portals/1/features/2015/0415_cyber-strategy/Final_2015_DoD_CYBER_STRATEGY_for_web.pdf.
33 See, for example, Richard Overy, The Bombers and the Bombed: Allied Air War over Europe, 1940–1945 (New York: Viking, 2014).
34 Watts and Keaney, “Effects and Effectiveness,” 15, 341, 378.
35 For the explicit or, through analogy with nuclear weapons, implicit case that cyber weapons are strategic, see, for example, Geers, Strategic Cyber Security, 15; and Mike McConnell, “Mike McConnell on How to Win the Cyber-War We’re Losing,” Washington Post, February 28, 2010, http://www.washingtonpost.com/wp-dyn/content/article/2010/02/25/AR2010022502493.html. For similar claims in the Russian and Chinese military literature, see Krepinevich, Cyber Warfare, 3–4.
36 For a classic theoretical discussion, see Robert A. Pape, Bombing to Win: Air Power and Coercion in War (Ithaca: Cornell University Press, 1996), ch. 2.
37 Philip Bennett and Steve Coll, “NATO Warplanes Jolt Yugoslav Power Grid,” Washington Post, May 25, 1999, https://www.washingtonpost.com/wp-srv/inatl/longterm/balkans/stories/belgrade052599.htm.
38 Within the nuclear deterrence literature, the term “aspects of state power” has been used to describe what dictatorial regimes are hypothesized to value. Michael Quinlan, Thinking about Nuclear Weapons: Principles, Problems, Prospects (Oxford: Oxford University Press, 2009), 126.
39 Stephen D. Biddle, “Allies, Airpower, and Modern Warfare: The Afghan Model in Afghanistan and Iraq,” International Security 30, no. 3 (Winter 2005/2006): 161–76.
40 David B. Ottaway, “Saudi Arabia’s Yemen War Unravels,” The National Interest, May 11, 2015, http://nationalinterest.org/feature/saudi-arabias-yemen-war-unravels-12853.
41 Michael Knights and Alexandre Mello, “The Saudi-UAE War Effort in Yemen (Part 1): Operation Golden Arrow in Aden,” Policywatch 2464 (Washington, DC: Washington Institute for Near East Policy, August 10, 2015), http://www.washingtoninstitute.org/policy-analysis/view/the-saudi-uae-war-effort-in-yemen-part-1-operation-golden-arrow-in-aden.
42 The question of how militaries would fare without their IT systems is discussed in Martin C. Libicki, “Why Cyber War Will Not and Should Not Have Its Grand Strategist,” Strategic Studies Quarterly 8, no. 1 (Spring 2014): 29–30, http://www.dtic.mil/get-tr-doc/pdf?AD=ADA602105.
43 For differing perspectives in this debate, see Jon R. Lindsay, “Stuxnet and the Limits of Cyber Warfare,” Security Studies 22, no. 3 (July 2013): esp. 385–97, 402–4; Krepinevich, Cyber Warfare, 39–65; and Richard A. Clarke and Robert K. Knake, Cyber War: The Next Threat to National Security and What to Do About It (New York: Ecco, 2010).