Uncertainty and mischaracterization could have quite different consequences. Uncertainty among decisionmakers does not necessarily lead them to misperceive an adversary’s capabilities or intentions (on the contrary, it suggests a realistic understanding of their state’s limited ability to collect and analyze intelligence). Therefore, uncertainty about how a warhead was armed would probably not raise the risk of inadvertent escalation in a crisis or conflict—in fact, it might even enhance deterrence and make deliberate escalation less probable (see box 2).

Box 2: Warhead Ambiguity and Psychological Ambiguity

Most decisionmakers are likely to assume, in the absence of clear evidence to the contrary, that ambiguous weapons are nuclear-armed. A few, however, may accept uncertainty and plan on that basis. Such decisionmakers probably find themselves in a condition of psychological ambiguity—a concept that, confusingly, is different from warhead ambiguity but nonetheless relevant.

Psychologists distinguish between two types of uncertainty: risk and ambiguity. A risky situation is one in which the probabilities of the various possible outcomes are known (such as a gambling game that involves choosing a ball from an urn containing equal numbers of red and black balls). By contrast, ambiguity arises when the probabilities of those outcomes are unknown (for example, a game in which the player is not told the ratio of red to black balls). People tend to have an aversion to ambiguity (though the experimental evidence is not definitive).1 Thus, they are generally willing to accept smaller gains to avoid being placed in an ambiguous situation.
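Ellsberg's two-urn experiment, the source of the urn examples above (see note 1), shows why ambiguity aversion cannot be reduced to ordinary risk aversion; the following is a minimal sketch of that argument.

```latex
% Sketch of Ellsberg's two-urn paradox (note 1). Urn I holds 50 red and
% 50 black balls (risk); Urn II holds 100 balls in an unknown red-to-black
% ratio (ambiguity). A prize is paid for drawing a ball of a named color.
% Let $p$ be a bettor's subjective probability of drawing red from Urn II.
\[
  \text{preferring to bet on red from Urn I} \;\Longrightarrow\; \tfrac{1}{2} > p
\]
\[
  \text{preferring to bet on black from Urn I} \;\Longrightarrow\; \tfrac{1}{2} > 1 - p \;\Longrightarrow\; p > \tfrac{1}{2}
\]
% Most subjects prefer Urn I for both bets, which would require p < 1/2
% and p > 1/2 simultaneously. No single subjective probability can
% rationalize both choices, so ambiguity aversion is treated as an
% attitude distinct from ordinary risk aversion.
```

In these terms, warhead ambiguity places decisionmakers in the Urn II condition: they cannot assign defensible probabilities, and the experimentally observed tendency is to pay a premium to avoid acting in that condition.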

If intelligence analysts are unable to characterize an adversary’s weapons because of warhead ambiguity, decisionmakers are likely to find themselves in a psychologically ambiguous situation. After all, estimating probabilities would be inherently challenging (and few people voluntarily try to assign explicit probabilities in this way). In such cases, ambiguity aversion could manifest itself as enhanced deterrence. So, for example, if U.S. intelligence analysts were unable to characterize some threatening Chinese mobile missiles, ambiguity aversion might reduce the likelihood that U.S. leaders would decide to attack the missiles (compared to a risky situation in which the analysts estimated the probability that the missiles were conventionally armed). That said, if other options for combating the threat (such as relying on missile defense) also presented ambiguities, ambiguity aversion might not have any net effect.

Unfortunately, mischaracterization is significantly more likely to occur than uncertainty because of the tendency among decisionmakers to assume, without clear evidence to the contrary, that ambiguous weapons are nuclear-armed.

In a crisis or conflict, mischaracterization—whether a false positive or a false negative—could increase the likelihood of escalation in two ways: First, and perhaps more importantly, one state might misread the other’s intentions regarding the use of nuclear weapons. Second, one state might wrongly assess the other’s military capabilities. This section focuses on inadvertent escalation, which could occur if the mischaracterization were an unintentional consequence of ambiguity. States might seek to induce uncertainty or mischaracterization deliberately to try to enhance deterrence—though doing so would not be risk-free (see box 3).

Assessing Intent

In any conflict between two nuclear-armed states, the risk of inadvertent escalation would be increased if either one misjudged the likelihood of its opponent’s using nuclear weapons. Indeed, to avoid this kind of misperception, a belligerent considering nuclear use might attempt to signal its resolve first by, for example, dispersing nuclear-armed delivery vehicles in the hope that its adversary would back down or, at least, reach some kind of accommodation.

Warhead ambiguity could complicate such signaling operations and increase the already significant challenges of communicating and assessing intent. False negatives could lead to an intended signal being missed. False positives could lead a state to wrongly believe that its adversary was issuing a signal or even secretly preparing for nuclear use.

False negatives. A nuclear signal might be missed if it was sent using ambiguous delivery systems that the intended recipient wrongly concluded were conventionally armed. Signaling is a part of both the United States’ and China’s defense doctrines. The 2018 U.S. Nuclear Posture Review, for example, states that U.S. nuclear forces must have “the capacity to display national will and capabilities as desired for signaling purposes throughout crisis and conflict.”2 Chinese doctrine, meanwhile, embraces an apparently similar concept, termed “campaign deterrence,” involving nuclear or nonnuclear missile operations to “display the possession of the capacity to deliver inexorable, unstoppable, disproportionate force.”3 Official and unofficial Russian sources, by contrast, have been largely silent on signaling. However, in a 2015 interview, Putin stated that he had been ready to place Russian nuclear forces on combat alert if the 2014 operation to annex Crimea had run into trouble.4 This acknowledgment suggests that Moscow also plans to signal prior to nuclear use.

Even without warhead ambiguity, nuclear signaling can be challenging. Nuclear signals have frequently failed to achieve their purpose—sometimes because they did not attract the attention of the adversary. During the Berlin Crisis of 1958–1959, for instance, president Dwight Eisenhower and other U.S. principals were likely unaware that the Soviet Union had deployed nuclear-armed SS-3 missiles to East Germany to try to signal Moscow’s willingness to risk nuclear war if Washington did not concede to the Kremlin’s demands over the status of Berlin.5 The Soviet Union may have issued other nuclear signals—including one related to the invasion of Czechoslovakia in 1968—that the United States simply missed.6 Subsequent improvements in ISR capabilities notwithstanding, the fog of war in an actual conflict between the United States and Russia or China could still create real challenges for detecting nuclear signals.

Box 3: Intended Mischaracterization

Uncertainty or mischaracterization may not always be an unintentional consequence of ambiguity. States can have incentives to exploit ambiguity deliberately in an effort to raise the risk of escalation and thus enhance prewar or intrawar deterrence. After all, the danger of a conventional war getting out of hand in ways that neither side can fully control—“the threat that leaves something to chance,” as Thomas Schelling called it—is precisely what may lead a state to think twice before contravening another’s important interests.7

Although Russia and the United States make threats that leave something to chance in various ways, China is most plausibly trying to do so through the exploitation of warhead ambiguity. Specifically, there has been speculation in the United States that Beijing is trying to deter Washington from launching attacks on China’s conventionally armed missiles by raising the risk that the United States might unintentionally destroy Chinese nuclear weapons in the process. Chinese experts generally dispute this claim. Indeed, China’s decision to field nuclear and conventional missiles in separate brigades supports their position (though the possibility that those missiles share a command-and-control system does not). That said, two Chinese scholars, Li Bin and Tong Zhao, who agree that Beijing did not deliberately set out to entangle its nuclear and nonnuclear forces, argue that China “is now discovering that such entanglement is potentially useful . . . and is correspondingly reluctant to . . . [embark] on a process of separation.”8

If China does seek to deter U.S. attacks on its conventional forces through warhead ambiguity—and, to underscore the point, whether it does is unclear—then it must aim to make Washington at least uncertain about the United States’ ability to distinguish nuclear from nonnuclear delivery systems. But Chinese efforts to create such doubt could have unwelcome side effects. Specifically, in a crisis or conflict, to increase the difficulties the United States faced in characterizing Chinese missiles, China might disperse its nuclear and nonnuclear missile forces simultaneously—potentially leading nuclear missiles to be deployed earlier than they otherwise would be. Such a deployment might be motivated exclusively by a desire to protect China’s conventional missiles and not be intended to pose a direct nuclear threat to the United States. However, in that eventuality, Washington might conclude that Beijing was seriously considering nuclear use, creating a form of misinterpreted warning that could catalyze an escalation spiral (as discussed elsewhere in this chapter). The risks in this case would depend, in part, on whether China understood that the dispersal of its nuclear missiles could be misinterpreted by the United States. If it did, Beijing would be more likely to understand any U.S. response, making escalation management somewhat less demanding. If it did not, misperceptions on both sides could raise the risk of further escalation.

The use of ambiguous delivery systems for signaling would further exacerbate these challenges. Signaling operations could fail because the intended target mischaracterized the delivery systems involved as conventionally armed. Indeed, U.S. doctrine indicates that signaling operations would likely involve aircraft, which necessarily means the employment of dual-use assets (as the United States does not reserve any type of aircraft for nuclear operations).9 It is less clear what types of missiles China might use for signaling purposes, and Russia’s plans are murkier still. Nonetheless, if Beijing or Moscow were to consider launching nuclear attacks against regional targets, it might employ one of the ambiguous systems listed in table 2.

The signaler could take steps to mitigate the risk of ambiguous assets’ being mischaracterized. For example, in theory, the United States could keep some dual-use bombers out of the conventional fight and reserve them for nuclear operations so that alerting them or flying them toward the conflict, if it came to that, would constitute a clearer signal. However, for this approach to be effective in practice, the intended recipient would have to penetrate the fog of war and obtain timely intelligence on aircraft that might be located deep within U.S. territory.10 Attacks against the intended recipient’s ISR capabilities would exacerbate these difficulties.

Moreover, even if the option to hold aircraft in reserve existed or were created, there is no guarantee that Washington would actually exercise it: given the exigencies of conducting a large-scale conventional conflict against China or Russia, U.S. decisionmakers might choose to focus all available resources on the conventional fight and accept or ignore the increased risk of a signaling failure. Alternatively, the United States might hold aircraft in reserve, nominally for signaling purposes, but end up employing them for conventional operations as part of an intensifying air campaign or as a way to replace bombers that had been shot down, creating a particularly serious risk of a false positive.

Another way to enhance the clarity of signals would be to issue public or private statements to explain the signaler’s intent. The signaler, for example, could provide details about the particular units involved and indicate that they were nuclear-armed. However, the intended recipient could interpret the statements as bluffs if, for whatever reason, it failed to detect the signaling operation itself—after all, signaling operations are needed precisely because talk, by itself, is cheap.

Moreover, historically, the statements accompanying nuclear signals have shied away from a high degree of specificity, which would be necessary to clarify warhead ambiguity.11 Some signals were not accompanied by statements at all. For example, in 1969, U.S. president Richard Nixon initiated the so-called Madman Alert, a global alert of U.S. nuclear forces, to pressure the Soviet Union and North Vietnam into negotiating a tolerable settlement to the Vietnam War, but he did not issue any public or private threat to explain the signal’s meaning. As a result, the Soviet Union likely failed to understand the message correctly. At the time, Moscow happened to be embroiled in a serious border dispute with Beijing and may have misinterpreted the alert as a warning against attacking China.12 In fact, there does not appear to be even a single example of a nuclear threat that provided specific details about an accompanying signaling operation. Even the nuclear threats Soviet premier Nikita Khrushchev made in 1958 and 1959, which were among “the clearest . . . issued in the atomic age,” did not hint at the accompanying deployment of SS-3 missiles.13

States’ reluctance to make specific nuclear threats will likely continue. This reluctance stems partially, as in the case of the Madman Alert, from signalers’ desire to avoid domestic and international opprobrium—a consideration that admittedly might not carry much weight in an actual conflict. By contrast, other motivations for vagueness would persist and might even become stronger. Militaries would likely oppose revealing information that might compromise operational effectiveness, and decisionmakers might want to avoid boxing themselves in.14

In short, even if states adopted measures to clarify the meaning of signaling operations, the likelihood of such signals being missed would be higher if sent with ambiguous, rather than nuclear-only, assets. Furthermore, in a real conflict, signalers would face pressures not to implement such measures fully or even at all.

If a nuclear signal were missed because of warhead ambiguity (or for any other reason), the breakdown in communications could spark inadvertent escalation.15 Specifically, the signaler might conclude that its message had been received but ignored, when, in fact, the intended recipient had actually missed or misinterpreted it. The signaler might respond aggressively, including, perhaps, by living up to its threat to use nuclear weapons. If the intended recipient had been willing to come to terms, rather than face a nuclear attack, the escalation would have been entirely avoidable.

False positives. If one state in a conflict detected conventional operations being conducted by its adversary, but, because of warhead ambiguity, wrongly assessed that the weapons involved were nuclear-armed, it might conclude that the opponent was issuing a nuclear signal or even surreptitiously preparing for nuclear use. Either way, the observing state would overestimate the likelihood of its adversary’s using nuclear weapons—a form of misinterpreted warning.

The misinterpreted warning created by an unintentional false positive could play out in two ways (assuming it had a significant effect, which it might not). The apparent introduction of nuclear weapons into the conflict might induce the observing state to behave in a more cautious or accommodating way to try to reduce its adversary’s perceived incentives to resort to nuclear use. This effect might be termed inadvertent deterrence or inadvertent compellence, depending on the circumstances (deterrence involves threats intended to persuade an adversary not to do something; compellence involves threats intended to force an adversary to act). Alternatively, the observing state might take countermeasures to try to deter or prevent its adversary from using nuclear weapons or to mitigate the consequences of such use. These steps could feed an escalation spiral or even precipitate nuclear use.16

Whether an unintentional false positive generated inadvertent deterrence or inadvertent escalation would likely depend on the specific circumstances of the conflict and the adversaries’ capabilities, plans, and perceptions. That said, there is one overarching reason to worry about the possibility of inadvertent escalation. An unintentional false positive would result from a state’s deploying nonnuclear weapons that it was not expecting to be mischaracterized. Consequently, it probably would not have considered how to conduct the deployment in a way that minimized the risks of escalation—by, for example, explaining the deployment’s purpose or locating the ambiguous weapons out of range of particularly sensitive targets, such as the adversary’s nuclear command-and-control assets.

In the extreme case, a misinterpreted warning could result directly in nuclear use. Specifically, if the observing state were seriously concerned that it was about to become the victim of a nuclear attack, it might use nuclear weapons first in a limited way, either to try to terrify its opponent into backing down or to destroy the weapons it thought would be used in the attack. However, the more likely consequence of a misinterpreted warning would be an escalation spiral. The observing state’s response to a false positive could catalyze further escalation because it would risk appearing to its opponent as needlessly provocative, if not entirely disproportionate. The observing state, for example, might disperse vulnerable nuclear forces or make public or private nuclear threats, lending a nuclear dimension to a conflict that, as far as its opponent was concerned, had been purely conventional. The observing state could also launch nonnuclear operations designed to prevent its adversary from using the ambiguous weapons that had sparked the false positive. If such operations included attacks on dual-use command-and-control capabilities, they would risk being interpreted as preparations for nuclear use rather than as an attempt to prevent the target from using nuclear weapons.17

Historically, nuclear operations and threats have often induced reciprocal escalation, even if the resulting spirals were terminated short—often far short—of nuclear use. The catalysts of most of these abortive spirals were true positives. However, they are relevant to understanding the consequences of false positives because a state that had mischaracterized an opponent’s weapons would not be aware of its mistake and so would respond similarly, whether the positive was true or false. When the United States alerted its nuclear forces in 1960, 1962, and 1973, the Soviet Union probably either responded in kind or made preparations to do so.18 Similarly, in October 1969, Chinese nuclear forces were placed on alert in response to Soviet nuclear threats that led Beijing to believe a nuclear attack was imminent. Meanwhile, in August 1978, the alert level of forces at some U.S. Strategic Air Command bases was raised after two Soviet SSBNs approached the U.S. coast.19

In fact, the 1973 incident was part of an escalation spiral that was initially catalyzed by a false positive.20 The U.S. alert took place on October 24, 1973, in the final days of the Yom Kippur War between Israel and a coalition of Arab states. Today, this alert is usually explained as a warning to Moscow against sending troops to Egypt.21 While that interpretation is unquestionably correct, it is only part of the story. U.S. secretary of defense James Schlesinger clearly indicated that the United States had a second objective when he stated, at a press conference on October 26, that the alert was triggered by “other indicators [apart from apparent preparations for troop movements] of military intelligence nature into which I shan’t go.”22

These “other indicators” were almost certainly evidence of Soviet nuclear warheads being shipped to Egypt. In 2016, historian Tim Naftali rediscovered the suspected shipment in newly declassified documents; in the years immediately following the 1973 alert, however, it had been an integral part of the narrative.23 For example, it was highlighted in contemporary news reports, including a November 1973 article on the front page of the New York Times.24 It was also discussed in early scholarly analyses of the alert—most significantly, in a 1977 article by William Quandt, a National Security Council staffer during the Yom Kippur War.25

But compelling evidence that the warhead shipment never took place continues to be overlooked. The United States concluded that the Soviet Union was transferring nuclear warheads to Egypt after detecting radiation, apparently emanating from a Soviet freighter, the Mezhdurechensk.26 Initially, at least, the CIA found this evidence to be persuasive and, on October 26, reported to Nixon that the ship was “probably” transporting nuclear warheads.27 But the agency started to walk this conclusion back almost immediately. By October 30, it could only assess that “there is . . . at least the possibility that the Soviets have introduced nuclear weapons into the Middle East.”28 In fact, this later assessment contains no unredacted evidence that the ship was carrying warheads but does include “strong arguments against the Soviets shipping nuclear weapons to Egypt.”29 In the ensuing months, indications that the shipment was actually a false positive became even stronger. According to historian Jeffrey Richelson, testing showed that the radiation detector involved in the incident “was less than completely reliable” and would “often ‘detect’ such radiation when it was not present.”30

One final piece of evidence against the warhead transfer is that neither the Soviet Union nor Russia ever acknowledged that it occurred. After the end of the Cold War, the United States learned numerous details about Soviet nuclear activities. For example, within months of the Soviet Union’s collapse, former Soviet officials had informed their U.S. counterparts about the previously unknown shipment of nuclear-armed cruise missiles to Cuba in 1962. By contrast, in the decades since the 1973 alert, no evidence from the Soviet Union or Russia about a warhead shipment to Egypt has emerged. On the contrary, the one English-language account of the crisis that was written by a former Soviet official mentions media reports about the “transport of nuclear material” but then explicitly denies that the Politburo even discussed “the deployment . . . of weapons of mass destruction.”31

Thus, in the final analysis, it seems likely that a false positive contributed to what is sometimes seen as the most dangerous moment in the second half of the Cold War. Readings from an unreliable radiation detector were interpreted as a shipment of nuclear weapons to Egypt. This assessment contributed to a U.S. nuclear alert, which, in turn, may have led the Soviet Union to issue “a preliminary command . . . to the portion of the rocket forces that needed the most time to prepare for combat.”32 Today, a false positive created by operations involving ambiguous delivery vehicles could also catalyze an escalation spiral.

Assessing Capability

In any crisis or conflict, each state would collect intelligence about the other’s military capabilities to help inform strategy and tactics. By degrading the quality of intelligence information, warhead ambiguity—especially if it led to a false negative—could increase the likelihood of a state’s initiating potentially escalatory military operations whose dangers it had underestimated.

Unintentional attacks on an opponent’s nuclear weapons are one potential danger. In a U.S.-China conflict, for example, the United States might launch attacks against China’s conventionally armed ballistic missiles, which are intended to undermine U.S. power projection capabilities. If, however, the United States misidentified nuclear-armed missiles as conventionally armed ones, it might end up inadvertently targeting China’s nuclear forces.33 If limited in their extent, such strikes could not undermine China’s nuclear deterrent by themselves. However, Beijing might worry that the strikes were the first wave of a wider campaign.34 To try to coerce the United States into desisting, China might issue nuclear threats or, in the worst case, engage in limited nuclear use (its no-first-use policy notwithstanding).35 That said, even less dramatic responses, such as mating warheads with missiles or initiating a launch-under-attack alert, could increase the risk of escalation and nuclear use later on.36

Unintended threats to nuclear forces are not the only risk associated with conventional operations that a state might underestimate because of false negatives. Another is that one state might fail to discover that its opponent had deployed tactical nuclear weapons and, as a result, launch an operation that precipitated their use. During the Cuban Missile Crisis, for instance, the United States planned and prepared for an invasion of Cuba. In fact, had the crisis not ended when it did, it is entirely possible that those plans would have been put into action—either as a deliberate choice or because large-scale air strikes would, in U.S. secretary of defense Robert McNamara’s assessment, have been “almost certain to lead to an invasion.”37 Throughout the crisis, however, the United States was entirely unaware of the eighty or so nuclear-armed SSC-2A coastal defense cruise missiles that were deployed on Cuba. Soviet forces might have used these weapons against invading U.S. forces.

One complication of assessing the effects of imperfect knowledge in this particular case is that president John F. Kennedy and the other U.S. principals were likely aware that Soviet forces had fielded very short-range, nuclear-armed Luna rockets (contrary to many recent descriptions of the crisis).38 They presumably anticipated the possibility of an invasion being met with a nuclear response. Moreover, if any nuclear use, more or less inevitably, would have precipitated a general nuclear war, it would have been irrelevant whether the Soviet Union responded to an invasion with nuclear-armed cruise missiles instead of (or in addition to) the anticipated nuclear-armed rockets. But the inevitability of escalation is debatable. If Soviet nuclear use had been limited to Lunas on Cuban soil, it is conceivable that any nuclear exchange would have been confined entirely to the island. By contrast, because SSC-2A missiles could have attacked U.S. ships at sea and the U.S. naval base at Guantánamo Bay—both of which lay beyond Cuban territory—their use could have made escalation management even more difficult.39 For this reason, the United States’ lack of awareness of the nuclear-armed SSC-2A missiles may have increased the escalation risks.

Though probably less significant, false positives could also have escalation consequences if they led one state to underestimate an adversary’s conventional capabilities. For instance, if NATO were losing a conflict against Russia and wrongly assessed that deployed SSC-8 missiles were nuclear-armed, it might ignore them because it judged Russian nuclear use to be extremely unlikely. If those missiles were actually conventionally armed, however, and were used to hinder NATO operations significantly, pressure on the alliance to escalate the conflict could grow. Precisely how NATO might do so would depend on the circumstances, but even if nuclear threats or nuclear use did not come into play immediately, an expansion of the geographic scope or intensity of the conflict could make them more likely later on.

Notes

1 Daniel Ellsberg, “Risk, Ambiguity, and the Savage Axioms,” Quarterly Journal of Economics 75, no. 4 (November 1961): 643–669. For a review of the experimental evidence, see Stefan T. Trautmann and Gijs van de Kuilen, “Ambiguity Attitudes,” in The Wiley Blackwell Handbook of Judgment and Decision Making, eds. Gideon Keren and George Wu (Chichester: John Wiley & Sons, 2015).

2 U.S. Department of Defense, Nuclear Posture Review, February 21, 2018, 44, https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF.

3 This translation is from Michael S. Chase and Andrew S. Erickson, “The Conventional Missile Capabilities of China’s Second Artillery Force: Cornerstone of Deterrence and Warfighting,” Asian Security 8, no. 2 (2012): 123–124. See, more generally, Second Artillery Corps, The Science of Second Artillery Campaigns, ch. 10.

4 Laura Smith-Spark, Alla Eshchenko, and Emma Burrows, “Russia Was Ready to Put Nuclear Forces on Alert Over Crimea, Putin Says,” CNN, March 16, 2015, https://www.cnn.com/2015/03/16/europe/russia-putin-crimea-nuclear/index.html.

5 Todd S. Sechser and Matthew Fuhrmann, Nuclear Weapons and Coercive Diplomacy (Cambridge: Cambridge University Press, 2017), 134–136.

6 Bruce G. Blair, The Logic of Accidental Nuclear War (Washington, DC: The Brookings Institution, 1993), 23–26. Blair’s focus is on alerts, which may be conducted for reasons other than signaling. However, assuming the description relayed to Blair is correct, the most likely explanation for the 1968 alert is signaling.

7 Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1960), ch. 8.

8 Tong Zhao and Li Bin, “The Underappreciated Risks of Entanglement: A Chinese Perspective” in Acton, ed., Entanglement.

9 Joint Chiefs of Staff, Nuclear Operations, Joint Publication 3-72, June 11, 2019, II-3, https://fas.org/irp/doddir/dod/jp3_72.pdf.

10 U.S. heavy bombers are based in Louisiana, Missouri, and North Dakota.

11 For a review of the history of nuclear signals, see Sechser and Fuhrmann, Nuclear Weapons and Coercive Diplomacy, chs. 5–6.

12 Scott D. Sagan and Jeremi Suri, “The Madman Nuclear Alert: Secrecy, Signaling, and Safety in October 1969,” International Security 27, no. 4 (Spring 2003): 176–179.

13 Sechser and Fuhrmann, Nuclear Weapons and Coercive Diplomacy, 135.

14 The desire to maintain flexibility runs contrary, of course, to the very purpose of signaling, which is to lock decisionmakers into a future course of action, albeit conditionally. However, politicians frequently want to have it both ways, partly because they may not decide in advance how they will react if a threat is ignored. A recipe for escalation would be a decisionmaker’s deciding that a vague threat was, in fact, meant seriously only after the other side appeared to have ignored it.

15 International relations theorists generally accept that the outbreak of war can be caused by miscalculations about an opponent’s willingness to fight. See, for example, James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (Summer 1995): 393–395. The argument here is an analogous one, in which the threshold in question is the use of nuclear weapons rather than the use of force. While Fearon stresses the rational incentives for states to underplay or exaggerate their resolve, misperceptions about resolve increase the risk of escalation regardless of their cause.

16 For a related discussion about a somewhat different type of misinterpreted warning, see Acton, “Escalation Through Entanglement,” 72–73.

17 Because the ambiguous weapons were not nuclear-armed, nonnuclear attacks directed solely against them would probably not represent a significant escalation. For more on the widespread use of dual-use command-and-control capabilities, see Acton, “Escalation Through Entanglement,” 63–66 and 78–82.

18 Blair, The Logic of Accidental Nuclear War, 23–26; and Scott D. Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons (Princeton, NJ: Princeton University Press, 1993), 143–144.

19 Michael S. Gerson, The Sino-Soviet Border Conflict: Deterrence, Escalation, and the Threat of Nuclear War in 1969 (Alexandria, VA: Center for Naval Analyses, 2010), 28–52, https://www.cna.org/CNA_files/PDF/D0022974.A2.pdf; and David M. Alpern with David C. Martin, “A Soviet War of Nerves,” Newsweek (January 5, 1981), 21.

20 This discussion was originally published in James M. Acton and Nick Blanchette, “The United States’ Nuclear and Non-Nuclear Weapons Are Dangerously Entangled,” Foreign Policy, November 12, 2019, https://foreignpolicy.com/2019/11/12/the-united-states-nuclear-and-non-nuclear-weapons-are-dangerously-entangled/.

21 For a summary of the evidence, see Sechser and Fuhrmann, Nuclear Weapons and Coercive Diplomacy, 222–223.

22 James Schlesinger, transcript of news conference, October 26, 1973, available from U.S. Department of State, Bulletin LXIX, vol. 1795 (Washington, DC: Government Printing Office, November 19, 1973), 621, https://babel.hathitrust.org/cgi/pt?id=uiuo.ark:/13960/t8jd6fc91&view=1up&seq=705.

23 Tim Naftali, “CIA Reveals Its Secret Briefings to Presidents Nixon and Ford,” CNN, August 26, 2016, https://www.cnn.com/2016/08/26/opinions/secret-briefings-to-presidents-from-cia-naftali/.

24 John W. Finney, “Officials Suspect Russians Sent Atom Arms to Egypt,” New York Times, November 22, 1973, 1.

25 William B. Quandt, “Soviet Policy in the October Middle East War—II,” International Affairs 53, no. 4 (October 1977): 596–597. See also, for example, Barry M. Blechman and Douglas M. Hart, “The Political Utility of Nuclear Weapons: The 1973 Middle East Crisis,” International Security 7, no. 1 (Summer 1982): 137.

26 Finney, “Officials Suspect Russians Sent Atom Arms to Egypt”; and Jeffrey T. Richelson, “Task Force 157: The U.S. Navy’s Secret Intelligence Service, 1966–77,” Intelligence and National Security 11, no. 1 (January 1996): 117–119. For other background, including the name of the Soviet freighter, see CIA, “Soviet Nuclear Weapons in Egypt?,” memorandum, October 30, 1973, 1, https://www.cia.gov/library/readingroom/docs/1973-10-30D.pdf.

27 CIA, “The President’s Daily Brief,” October 26, 1973, RDP79T00936A0118600200103, CREST System, https://www.cia.gov/library/readingroom/docs/DOC_0005993967.pdf.

28 CIA, “Soviet Nuclear Weapons in Egypt?” 1–2.

29 CIA, “Soviet Nuclear Weapons in Egypt?” 5.

30 Richelson, “Task Force 157,” 119.

31 Victor Israelyan, Inside the Kremlin During the Yom Kippur War (University Park, PA: Pennsylvania State University Press, 1995), 191–192.

32 Blair, The Logic of Accidental Nuclear War, 25. On balance, the account by former Soviet official Victor Israelyan neither supports nor contradicts Blair. Israelyan states that the Soviet Union “abstain[ed] from a military demonstration in response to” the U.S. alert, but the “preliminary command” described by Blair does not really constitute such a demonstration. Israelyan, Inside the Kremlin During the Yom Kippur War, 193. Moreover, it is unclear from Israelyan’s account whether defense minister Andrei Grechko or a subordinate needed (or, indeed, sought) Politburo permission to issue a preliminary command. If not, Israelyan would likely not have known about it.

33 Talmadge, “Would China Go Nuclear?” 73–75; Wu, “Sino-U.S. Inadvertent Escalation,” 5–6 and 7–8.

34 Cunningham and Fravel, “Assuring Assured Retaliation,” 22. See also Talmadge, “Would China Go Nuclear?” 84–90.

35 For the U.S. Department of Defense’s concerns about “ambiguity surrounding the circumstances under which China’s no-first-use policy would apply,” see Office of the Secretary of Defense, Military and Security Developments Involving the People’s Republic of China 2019, 66.

36 For more on a launch-under-attack alert, see Kulacki, The Chinese Military Updates China’s Nuclear Strategy.

37 May and Zelikow, eds., The Kennedy Tapes, 364. On October 27, the Joint Chiefs called for such airstrikes no later than October 29 (see page 351). See page 443 for May and Zelikow’s assessment of the likelihood of military action.

38 Coleman, “The Missiles of November, December, January, February,” 8–12.

39 Coleman, “The Missiles of November, December, January, February,” 15.