Introduction

This chapter aims to provide a schematic and generic description of the nexus between attribution and characterization in cyber attacks. Attribution occurs when an entity is named as being responsible or accountable for an act—for example, the theft of personnel data from another state’s computer networks.1 Characterization, by contrast, refers to how an entity interprets or understands a digital anomaly detected in its systems—recognizing the possibility that, rather than a malicious cyber intrusion, it could be the product of human error, technical failure, or natural events.2

This chapter highlights the centrality of the interaction between these two diagnostic endeavors in the analytic phase following the discovery of anomalies.3 It further considers how this exchange precedes, and subsequently influences, any serious deliberation of policy responses to cyber attacks. The chapter then examines the interplay between the conclusions that emerge from this analytical phase and the framing of the options for response, as well as the policy choices that follow.

Process of Inquiry

The discovery of a serious functional anomaly in one’s digital systems (both governmental and corporate) typically leads to a vexing process of inquiry designed to characterize the event and determine its causes and consequences. How different governments conduct such inquiries varies greatly in time, sequence, style, and participants. However, it typically involves certain core functions, processes, dilemmas, and choices.4 These could be conveniently summarized, for heuristic purposes, as a sequential effort to address a series of core questions to characterize (and interpret) the event. The chart below aims to provide a comprehensive bird’s-eye view of the entire process; the narrative that follows elaborates on key features of every link in this chain.

(A more detailed version of this chart appears in the appendix of this chapter.)

Malicious or Accidental?

Is the detected cyber anomaly the result of malicious action? Or has it been triggered by a technical failure, an innocent human error, or possibly a natural occurrence? On its face, this determination seems like a no-brainer. In practice, though, arriving at a definitive answer may not merely involve considerable time and effort, but also some anxiety until such an answer emerges—especially when the perpetrator of an anomaly tries to mask a malicious action as a technical or human error.

Ariel (Eli) Levite
Levite was the principal deputy director general for policy at the Israeli Atomic Energy Commission from 2002 to 2007.

Some publicity may occur during this period, accompanied by confusion, potentially inconsistent statements, and conceivably even a measure of deceit in an effort to buy time and/or shift blame for the incident away from (or toward) particular parties.5 Publicity during this phase will likely be neither sought nor welcome; it can be embarrassing and/or can limit policymakers’ choices in determining their response.6 Yet it might be unavoidable, especially if and when private sector entities are involved in either detecting the anomaly or absorbing its effects.

Most importantly, for some actors, the default option might be to treat a cyber anomaly, once discovered, as if it were caused by foul play (at least) until proven otherwise.7 For others, the opposite may be true. (This, for example, seems to have been the case with Iran’s initial discovery of malfunctions in its centrifuge operations that were later attributed to Stuxnet.) Regardless, it must be noted that initially characterizing an event as a possible malicious action makes it more challenging to later credibly dismiss such a misplaced interpretation if and when the event is proven to have originated from more benign—if not necessarily less ominous (in terms of consequences)—causes.

In any event, an investigation into the pervasiveness of the phenomenon (where and how widespread is the anomaly) may help determine the root cause of the cyber anomaly and whether it was caused by a malicious action of some sort. Yet we must also acknowledge the real possibility that even a serious and lengthy analysis might fail to remove all uncertainty about the true causes of some cyber anomalies. Suspicion may persist for a long time, especially when vulnerabilities exposed in information technology (IT) systems are traced back to technical or human failures that could either be triggered by unintentional human neglect or attributed to design flaws, deliberately placed bugs, or vulnerabilities.8

Motive

Assuming an investigation suggests a deliberate action with malicious intent, it is bound to take a while to assess its true consequences. While that effort is ongoing, the natural next step is to ascertain the motive behind the adversarial action. Was it driven by criminal aims of one kind or another? Is it perhaps the action of a disgruntled employee? Is it a protest by anarchists or another ideological opponent? Or is it motivated by the national security interests of a foreign power or its proxy? If the latter, two other sets of questions quickly arise, each calling for an elaborate follow-up effort to figure out the precise motive.

June Lee
June Lee is program coordinator and research assistant for the Technology and International Affairs Program.

The first set of questions aims to establish whether or not the case involves a straightforward cyber espionage operation, such as information collection. If so, is it a typical information collection effort or part of a state-sponsored effort at commercial espionage? Alternatively, is it designed to lay the groundwork for an attack—or even carry one out? Or is it intended to communicate a certain message or convey a signal; in which case, what is the message? A related intent would be to shape the perceptions of the recipient(s) in other ways (that is, an influence operation), raising the question: To what end? Does it intend to sway election results or cast doubt on their veracity? To sow confusion and chaos, or to foment dissent to weaken an adversary? Additionally, an effort is likely to be made to understand how the perpetrators see their own action: as an unprovoked (cyber) move, as a retaliatory action in response to something done to them (be it in the cyber domain or elsewhere), or as a defensive action (preventive or preemptive) against a move they expect you to make against them.

The second set of questions digs deeper into the modalities of an attack, seeking to establish whether the perpetrator intended (and tried to design) the attack to be targeted, discriminate, temporary, reversible, and/or one-off. Conversely, the perpetrator may have intended to produce more widespread and/or persistent effects, or at least opened the way (by omission or commission) for these to be followed by (possibly unrelated) others who would seek to leverage the opportunity.

It is important to note that, in recent times, there have been many cases in which the perpetrators (especially those who are agents or proxies of a state) have not tried to mask their actions but have tried to conceal their true intentions.9 Such perpetrators have, for example, presented their actions as ransomware when their true intention was to cause harm. Naturally, such tactics complicate the characterization and attribution challenge, though it seems doubtful whether they can hold water over time. A thorough investigation of the specific case, additional information (deliberately and unwittingly) released over time by the perpetrators, and consideration of contextual factors (such as geopolitical developments) are likely to ultimately yield critical insights into the underlying motivations of the attackers.

Identity

Another pressing matter is the identity of the perpetrator. Even more important, who stands behind them (possibly far removed nationally, geographically, or institutionally)? And at what level of seniority was the operation approved or, at least, assisted or tolerated? Naturally, the first set of motivation-related questions already begins to touch on these issues, insofar as the effort to characterize an action must factor in the identity of the perpetrator. But often, just as in police investigations of criminal behavior,10 the process works in reverse order—namely, the likely motivation inferred from an action’s parameters provides some clues as to the likely identity of its perpetrator.

Typically, the effort to analytically attribute an action draws on two sources of input: technical forensics and intelligence information. The art and science of cyber forensics has advanced a great deal in recent years; so has the sophistication that goes into concealing the true identity of a perpetrator or even impersonating an attacker’s identity, at times going as far as to try to pin blame for an attack on a specific third party.11 These parallel developments have resulted in an open-ended competition between the two sides.12 While forensic examinations of tactics and procedures are invaluable in sorting out the identity of cyber attackers, intelligence often remains indispensable to confidently arriving at the identity of the perpetrator and, even more importantly, ascertaining who stands behind them (as well as to quickly respond to such attacks). The odds of attaining such intelligence might be enhanced by (but not confined to) broad network surveillance, the persistent forward deployment and monitoring of sensors, and especially penetration of adversary networks and trajectories from which such attacks are likely to come.13

As we have discovered over the past year, however, these efforts have hardly proved adequate to discover and respond in time to especially sophisticated network intrusions, such as the SolarWinds, Microsoft Exchange, and Colonial Pipeline attacks. Two other factors come into play here. The first is whether the perpetrator or others have taken credit for the cyber action, or categorically denied any culpability for it. As a rule of thumb, both claims are typically suspect, and the role of forensics—as well as intelligence—is to help prove or disprove either one. The second factor is whether the cyber actions are singular or unique, or whether they clearly fit into a broader context, pattern, or the well-recognized modus operandi of a specific actor. The former naturally prove more difficult to pin down with confidence, let alone quickly.

Either way, there is a clear synergy between the two processes of ascertaining the identity of the perpetrator and the motivation behind their action. The ultimate goal of this analytic phase is not merely to identify the perpetrator(s), even if they try to conceal or disguise their identity, but rather to provide as definitive an answer as possible on two core characterization issues: whether the attack ought to be viewed as a de facto or even de jure hostile state action; and, if so, whether it represents a clear policy choice by the government—rather than the accidental, mistaken, or overly zealous (or corrupt) operation of a state organ or its proxy.14

A major obstacle that must be overcome in order to arrive at a definitive answer to both questions is the prevalent practice of some states of using proxies or other nonstate agents to undertake cyber attacks on their behalf. In a manner not dissimilar to the historical phenomenon of privateering,15 states not only work out mutually beneficial arrangements with these proxies—such as dividing the loot or providing them cover—but at times even empower them through the provision of state assistance, such as penetration tools or other material means.16 In these cases, forensics and intelligence alone may not yield a definitive answer to these questions. And here is where the nature of the regime in which these nonstate operatives reside serves as a useful guiding tool and feeds into the due diligence process of assigning accountability.

As a general point, the more cyber intruders operate from a territory tightly governed by a regime that effectively surveils and monitors its population, the higher the likelihood that they are—at a minimum—benefiting from the acquiescence of its state organs. This probability rises much higher when the state, which enjoys sweeping police powers under its own cybersecurity and other laws and arrangements, also systematically and consistently fails to investigate and curtail the activities in question in the face of repeated warnings and allegations. This raises a more fundamental issue regarding how states interpret their own duty of care to prevent cyber attacks launched from their territories, carried out by their citizens, or employing their nationally based or produced products and services (as well as their international obligation, legal capacity, and operational capability to implement their obligations in this realm).

Gravity

The next phase is determining the gravity of an action. While the preceding phases do feed into and set up a factual (or, at the very least, empirical) foundation on which this question could be addressed, this phase lends itself to a far more subjective determination than the preceding ones. Furthermore, it usually involves many other types of participants as well as considerations. In particular, six additional criteria come into play here to assess and characterize the gravity of an incident. These are:

  1. The adversary’s aim(s) and intended effect(s).
  2. The actual effects of the action (which might be bigger, smaller, more localized, more widespread, more enduring, or more fleeting than the perpetrators may have intended).
  3. The targets engaged (such as whether critical infrastructure was attacked).
  4. The modalities employed in the attack.
  5. The extent to which the operation violated agreed-upon (or, at the very least, desired) norms and other obligations undertaken by the perpetrator.
  6. Whether the action represents (or is, at least, likely to become part of) a broader or bolder pattern of behavior or is merely a one-off action.

We need to bear in mind that these criteria may not all align in the same direction and could potentially produce a mixed evaluation of gravity. Moreover, it is common for different individuals and institutions to assign different weights to the various indicators of gravity. Further, there is a tendency to address these issues on an ad hoc basis—possibly because of the so-called defensive procrastination that is common in high-stress situations, or as part of a calculated strategy to retain a measure of flexibility while waiting for the parameters of the situation to crystallize or mature. Either way, this review often produces a degree of inconsistency and unpredictability in the final judgment of gravity.

Decisionmaking Considerations and Follow-Up Actions

While assessment of gravity (in addition to intent and identity of the perpetrator and their motivation) undoubtedly constitutes an important input into the policy decisionmaking process regarding if and how to respond to the adversarial cyber action, this process has to factor in several additional considerations as well. Especially noteworthy in this context are the following issues, ranging from technical and operational all the way to strategic and political:

  1. The level of confidence in the attribution as well as the assessment of the adversary’s intent.
  2. The extent to which the characterization and attribution rely on sensitive sources and methods that could be compromised if revealed.
  3. Whether there are operational benefits associated with keeping the incident and/or its nature/perpetrator secret (such as tracking the perpetrators, feeding them misinformation, or encouraging their complacency).
  4. Whether public revelation of the incident (or its specific presentation in a certain light) could become a public or political liability that forces unpalatable policy choices. An alternative consideration is whether covering up the incident or inaction in response could also become such a liability.
  5. Whether public revelation of the incident, the identity of the perpetrator, and/or their intention could yield strategic or political benefits. For example, could it influence the adversary’s behavior in a desirable direction? Or is it necessary as a step in responding to the attack in certain ways (for example, to lay the ground for imposing sanctions or indicting the culprits)? Or could it be leveraged to enhance one’s political standing and agenda?
  6. The likely economic and other ramifications of public revelation of the incident and its characterization in certain ways. For example, would public attribution prevent businesses from receiving insurance payments for damages because insurers can then legitimately claim that the cyber event was an act of war and therefore not covered?
  7. The options available for responding to the attack (besides public condemnation), and how these might be affected by publicity—or lack thereof—around the event.
  8. Whether a response—especially a public one—might trigger a reaction from the perpetrator of the attack (potentially others, too) that might dangerously escalate the situation or create other liabilities.

This is undoubtedly a daunting list to grapple with, comprising issues that go well beyond deciding whether to publicly acknowledge an attack and whether to attribute the action to a specific, named hostile actor. Officials have to agonize a great deal over whether to publicly characterize an adverse cyber action as a state operation. If so, they must also decide how to portray it (for example, as a normal intelligence effort, commercial spying, an armed attack, or even a warlike action), not least because such actions may serve more than one purpose or their function may evolve over time. The answers to these questions inform not only how gravely policymakers view an action but also their willingness (or determination) to respond and the direction of such a response.

There obviously are profound consequences that follow each of these choices and ensuing designations. Many strategic, political, and operational considerations—including subjective judgments—affect the ultimate decisions.17 The nature of this process largely explains why most states and decisionmakers typically opt for an eclectic approach toward public attribution and characterization, even at the expense of some inconsistency in how they approach these issues from one case to another.18 It also accounts for the considerable variation we observe in how specific they are when they do go public about malicious cyber events, and the unpredictability in the options they pick to publicly name attackers and characterize their actions.19

Key Takeaways

This brief review of the process of assessing and debating how to respond to adversarial cyber actions offers a few telling insights:

  1. It suggests that the attribution process is, in fact, no more than one (albeit important) element in a much broader effort to characterize cyber attacks, debate their significance, and agonize over how one ought to respond to them.
  2. This process inevitably weaves together many considerations beyond the capacity to establish who has carried out the attack and toward what end.
  3. The sheer complexity of the calculus that determines whether, when, and how to go public about such an attack makes it unreasonable to expect a consistent public characterization and attribution policy to emerge that would hold firm across time, space, and circumstances.
  4. The weight of the considerations that affect the choices on public attribution and characterization also implies that it would be difficult to externally lobby policymakers inclined to go public to refrain from doing so, unless they are offered a credible alternative that would go a long way toward addressing their core interests and concerns.
  5. These factors hold true not merely for government officials but also for some corporations that provide digital services and platforms for their customers. For example, some such actors may consider it part of their duty of care not only to inform their customers about attacks and breaches but also to dissuade perpetrators from sustaining such conduct.20 Other private sector players may be inclined to release such information as part of an effort to burnish their cybersecurity credentials. Still others may conversely feel that their corporate interest would be best served by refraining from attributing attacks to their current or prospective customers.

Notwithstanding the inherent inconsistency in decisions on whether and how to go public about cyber intrusions, four clear patterns emerge from analysis of the track record of Western governments in handling public characterization and attribution of cyber attacks:

  1. They are generally reluctant to go public about these events unless they feel compelled to do so because the event is serious enough or already in the public domain, and they can point to some process of managing and responding to the event.
  2. When senior government officials do elect to publicly acknowledge adversary cyber attacks, they more often than not characterize events as state action without actually (and certainly not initially) naming the culprit, even when they have already reached a high level of confidence about the identity of the perpetrator. This is likely because it allows decisionmakers time to consider how to respond to such events, not least to leave some elbow room to explore quiet diplomacy to dissuade the adversary from undertaking further incursions.
  3. When they do go a step further to name the culprits, government officials not only seem confident about their judgment but also conscious of the requirement to back it up by publicly releasing some details and taking some measures in response.
  4. Some U.S. allies, who are otherwise reluctant to call out cyber attackers, may nevertheless engage in public attribution out of deference to U.S. requests for them to do so.

These trends suggest that the three cumulative requirements—to concede publicly that an adversary has managed to penetrate sensitive digital networks, to back up assertions about the character and identity of the perpetrator(s), and to take some action(s) in response—seem to dampen (though not eliminate) the enthusiasm for going public in general and for making false or unsubstantiated allegations, in particular.

Going forward, two issues are worth exploring further.

The first is whether an official policy of public attribution does indeed serve the national interests of the states that undertake it, at least insofar as shaping the behavior of their cyber adversaries is concerned. And if the answer to this question is less than universal and clear cut (as this chapter’s analysis implies), what type of developments might alter the incentive structure for governments to engage in public attribution? In particular, can the ascendant “duty of care” of nation-states in cyberspace be effectively expanded to comprise the prevention, investigation, and prosecution of perpetrators of attacks who operate from their territory, who are their citizens, or who are homegrown enterprises? If so, might it present a credible alternative to official public attribution for those states that adopt this norm?

Second, since this chapter solely considers official attribution and characterization by governments, it implicitly draws attention to a closely related issue: the rationales that underlie public attribution by private sector entities and what role they might play in civilizing cyberspace. This issue is worthy of separate discussion.

Appendix

Notes

1 Herbert Lin, “Attribution of Malicious Cyber Incidents,” Hoover Institution, 2016, https://www.hoover.org/sites/default/files/research/docs/lin_webready.pdf.

2 Ben Buchanan, The Cybersecurity Dilemma (Boston, MA: Oxford Scholarship Online, 2017), https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780190665012.001.0001/acprof-9780190665012.

3 Microsoft’s “Digital Defense Report” provides an illustrative example of the interaction of these two analytic processes. See: “Microsoft Digital Defense Report,” Microsoft, October 2021, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWMFIi.

4 See Egloff and Smeets for a recent effort to conceptualize consistent factors constraining and enabling states’ public attribution practice. Florian J. Egloff and Max Smeets, “Publicly Attributing Cyber Attacks: A Framework,” Journal of Strategic Studies (Spring 2021), https://www.tandfonline.com/doi/full/10.1080/01402390.2021.1895117.

5 Jon Bateman’s contribution to this volume highlights some of the effects publicity may have on government official’s decisionmaking. See Jon Bateman, “The Purposes of U.S. Government Public Cyber Attribution.”

6 Note that public attribution through press leaks is either directed by senior government officials (a plant of classified information) or unauthorized by government (a leak by an official acting independently). See: David E. Pozen, “The Leaky Leviathan: Why the Government Condemns and Condones Unlawful Disclosures of Information,” Harvard Law Review 127, no. 2 (Winter 2013), https://harvardlawreview.org/2013/12/the-leaky-leviathan-why-the-government-condemns-and-condones-unlawful-disclosures-of-information/.

7 Ben Buchanan elaborates on the “cybersecurity dilemma”—the challenge of assessing states’ intent in cyberspace means that actions taken for defensive reasons may be misinterpreted and inadvertently lead to escalation. See: Buchanan, The Cybersecurity Dilemma.

8 An interesting case in point is the UK National Cyber Security Centre’s report on Huawei 5G network equipment, which, in contrast with widespread U.S. assertions, established that the security flaws uncovered originated in serious engineering deficiencies and an inadequate security culture. See: “Huawei Cyber Security Evaluation Centre Oversight Board: Annual Report 2018,” UK government website, July 19, 2018, https://www.gov.uk/government/publications/huawei-cyber-security-evaluation-centre-oversight-board-annual-report-2018.

9 “Microsoft Digital Defense Report,” Microsoft, 47–69.

10 Renze Salet, “Framing in Criminal Investigation: How Police Officers (Re)construct a Crime,” Police Journal: Theory, Practice, and Principles 90, no. 2 (Fall 2016): 128–42, https://journals.sagepub.com/doi/full/10.1177/0032258X16672470.

11 Ellen Nakashima, “Russian Spies Hacked the Olympics and Tried to Make It Look Like North Korea Did It, U.S. Officials Say,” Washington Post, February 24, 2018, https://www.washingtonpost.com/world/national-security/russian-spies-hacked-the-olympics-and-tried-to-make-it-look-like-north-korea-did-it-us-officials-say/2018/02/24/44b5468e-18f2-11e8-92c9-376b4fe57ff7_story.html.

12 Heajune Lee, “Strategic Publicity?: Understanding US Government Cyber Attribution,” Stanford Digital Repository, Spring 2021, https://purl.stanford.edu/py070wt8487.

13 For discussion of the alleged utility of forward defense strategy in defending the United States against cyber attacks, see: Paul M. Nakasone and Michael Sulmeyer, “How to Compete in Cyberspace,” Foreign Affairs, August 25, 2020, https://www.foreignaffairs.com/articles/united-states/2020-08-25/cybersecurity.

14 Luca Follis and Adam Fish, Hacker States (Cambridge, MA: MIT Press, 2020), https://mitpress.mit.edu/books/hacker-states.

15 For a fascinating discussion of the privateering analogy to cyberspace proxy actions, see: Florian J. Egloff, “Cybersecurity and the Age of Privateering,” Carnegie Endowment for International Peace, October 16, 2017, https://carnegieendowment.org/2017/10/16/cybersecurity-and-age-of-privateering-pub-73418.

16 Tim Maurer, Cyber Mercenaries (Cambridge: Cambridge University Press, 2018), https://www.cambridge.org/core/books/cyber-mercenaries/B685B7555E1C52FBE5DFE6F6594A1C00.

17 Egloff and Smeets, “Publicly Attributing Cyber Attacks.”

18 Florian J. Egloff, “Public Attribution of Cyber Intrusions,” Journal of Cybersecurity 6, no. 1 (Fall 2020), https://academic.oup.com/cybersecurity/article/6/1/tyaa012/5905454.

19 Lee, “Strategic Publicity?”

20 “Microsoft Digital Defense Report,” Microsoft.