The 7th Annual Strategic Multi-layer Assessment (SMA) Conference was held at Joint Base Andrews from 13-14 November 2013.

The theme of the conference was "Over a Decade into the 21st Century... What Now? What Next?" The conference was focused on global megatrends and their implications in all spheres of national security.

It is no exaggeration to state that the world today is a very different place than it was barely 12 years ago when the war against al Qaida and its affiliates began. As we move forward, continuing advances in various spheres such as the sociotechnical world will present both challenges and opportunities. The conference examined these and related themes and highlighted new insights from the social and neurosciences.

The BRAIN Initiative, Neuroscience and Implications for National Security Agenda

To conduct deterrence operations, or to manage crises and escalation, it is necessary to predict how an adversary will decide to respond to our actions. Effective deterrence and escalation management thus crucially depends on an understanding of psychology.

This talk described three insights from neuroscience, which help us to predict how an adversary will decide to respond to our actions, and then four simple rules for using neuroscience to address such issues.

The first insight is that an action's impact on one’s decision-making is crucially modulated by a specific quantity associated with that action: this quantity is the difference between what happened and what was expected. This quantity is known as the “prediction error” associated with an action. It has been a core finding in neuroscience over the past 15 years that “prediction errors” are central to the mechanisms by which humans and other animals understand, learn, and make decisions about the world. The prediction error associated with an event modulates the impact that the event has on decision-making; the bigger the prediction error, the bigger the impact on subsequent decision-making.
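The prediction-error idea above can be captured in a few lines. This is a minimal sketch in the spirit of standard reinforcement-learning formulations (a Rescorla-Wagner-style update); the function, learning rate, and numbers are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of prediction-error-driven learning.
# All names and numbers are illustrative, not from the talk.

def update_expectation(expected, observed, learning_rate=0.5):
    """Shift an expectation toward what was observed,
    in proportion to the prediction error."""
    prediction_error = observed - expected  # what happened minus what was expected
    new_expectation = expected + learning_rate * prediction_error
    return new_expectation, prediction_error

# An unexpected event carries a large prediction error and moves beliefs a lot...
belief_after_surprise, pe_surprise = update_expectation(expected=0.0, observed=10.0)

# ...while a well-anticipated event of the same size barely moves them.
belief_after_expected, pe_expected = update_expectation(expected=9.0, observed=10.0)

print(pe_surprise, pe_expected)  # 10.0 1.0
```

The same observed event (10.0) produces very different belief changes depending on what was expected, which is the mechanism behind the zeppelin and Blitz contrast discussed below.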

This provides a simple framework that explains a wide variety of historical cases. Consider the case where an event occurred and was not expected, and so was associated with a large prediction error. German air raids on London in the First World War using zeppelins were small-scale, but because they were so unexpected, they had a large impact and caused panic. There were demands for factories to be closed down if they risked attracting further raids, and members of the public assaulted officers of the Royal Flying Corps in the street for allowing these terrifying zeppelins through. Extrapolating from this, highly influential airpower theorists in the inter-war period suggested that more powerful and recurrent bombing would, largely through psychological impact, have a paralyzing effect on enemy societies and rapidly cause their collapse.

So what actually happened? In the Second World War, recurrent bombing clearly exerted much greater destructive power—for example, during the “Blitz” on London—but given its expected nature, it exerted much more limited psychological impact than had been anticipated.

This prediction error framework also simplifies across a wide variety of important strategic phenomena. For example, the psychological impact of surprise is just one instance of prediction error: something happens that was not well expected.

This framework can be used in a China-US contingency—for example, over Taiwan—to calibrate the impact one's actions will have on the adversary. It predicts domain specific effects, where actions in less well understood domains (e.g., cyber or space) will have an inherently larger psychological impact. It predicts cross-domain responses will also have a larger psychological impact than anticipated, as responses are more likely expected in the same domain as the original action.
The take-home message here is to understand prediction errors and use them as a tool to implement and interpret signals. This speaks directly to the challenge General Fay raised today at the end of his talk: to better understand communication.

The second insight is that decisions are the product of multiple, describable decision systems in the brain. The idea that multiple decision systems contribute to choice is not new: Plato, Freud, and more recently Daniel Kahneman suggested it. The point is that now we are able to specify how these systems work. We think there are essentially three decision systems—none of them are rational in the economic sense, and only one decides based on the potential consequences of actions. The point, however, is that these systems are well described, not just an endless variety of heuristics and biases. Again, this insight explains a variety of historical cases, and makes specific predictions about an adversary's behavior.

The third insight is that the “social brain” can exert powerful influences on decision-making. An important example is social motivations, such as the motivation to reject unfair treatment. In a classic example known as the ultimatum game, one individual gets an amount (e.g., $10) and proposes a split (e.g., $9 for her, $1 for the other). The other individual then decides to either accept the offer, in which case both get the split as proposed, or reject the offer, in which case both get nothing. Humans tend to reject low, unfair offers, paying a cost to do so. Individuals pay to punish unfairness even when the stakes are many months’ salary.
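The payoff structure of the ultimatum game described above is simple enough to write out directly. This is a toy sketch for illustration; the function names and the example offer are assumptions, and the rejection behavior it highlights is the empirical finding the talk cites, not something the code derives.

```python
# Toy sketch of the ultimatum game payoffs described in the text.

def ultimatum(total, offer_to_responder, accept):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if accept:
        # Both parties receive the split as proposed.
        return total - offer_to_responder, offer_to_responder
    # Rejection: both parties get nothing.
    return 0, 0

# A payoff-maximizing responder would accept any positive offer...
print(ultimatum(10, 1, accept=True))   # (9, 1)

# ...but people frequently reject low offers, paying to punish unfairness.
print(ultimatum(10, 1, accept=False))  # (0, 0)
```

The point of the sketch is that rejection is costly to the responder as well as the proposer, which is why rejecting unfair offers cannot be explained by narrow economic rationality.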

An earlier panel discussed the importance of inequality, and this explains why inequality matters. Historically, in China, the "unequal treaties" during the "century of humiliation" play a powerful role in current narratives and motivations, and we see a similar influence in Iran. Now, in planning, when trying to predict an adversary's motivations and decision-making, we can ask a specific question: will this be seen as fair to leaders, key interest groups or the public?
And so I have given you a flavor of three insights from neuroscience, which help us to predict how an adversary will decide to respond to our actions.

The talk then described four general rules for using neuroscience to address such issues.

First, are we sure enough of the neuroscience? There is a plethora of ideas and findings in a field like neuroscience. Here, I used only core findings from the neuroscience of decision-making.

Second, does it matter in the real world? Such findings may be very convincing in individuals making particular decisions, for example in a lab – but in the real world, with all its complexities and existing structures, and unintended or unpredictable consequences, we may not see such an effect. Here I have provided a wide variety of historical cases across many different contexts.

Third, even if it is true in the real world, is it worth adding to the policy process? Given all the many important considerations when developing or using policy, adding yet another consideration can carry a big opportunity cost. Here, for example, the idea of prediction errors replaces and simplifies across a wide range of important phenomena.

Finally, what does the neuroscience add that psychology does not already give us? There is the general concept of “consilience”—an idea may be more robust if it is supported by both psychology and neuroscience, and neuroscience can help choose between otherwise similarly plausible behavioral explanations. There are also specific arguments, for example, about the importance of universalism. If we know prediction errors play an important role in decision-making across a wide variety of different species, including humans, then it is much more likely that they play an important role in, for example, both the U.S. and China. This also matters for generalizability within countries or cultures: key policy-makers, for example, have usually undergone an involved selection process and so may differ from the general population.

This gives a flavor of how insights from neuroscience help us understand an adversary's decision-making, and do so in a way that can be simply and usefully operationalized.

This speech can be found on page 97 of the full workshop text.