
How AI Might Affect Decisionmaking in a National Security Crisis

In a time-sensitive U.S. national security crisis, AI would affect the speed of decisionmaking, perception and misperception, groupthink, and bureaucratic politics.

Published on June 17, 2024

Imagine a meeting of the U.S. president’s National Security Council where a new military adviser sits in one of the chairs—virtually, at least, because this adviser is an advanced AI system. This may seem like the stuff of fantasy, but the United States could at some point in the not-too-distant future have the capability to generate and deploy this type of technology. An AI adviser is unlikely to replace traditional members of the National Security Council—currently made up of officials such as the secretaries of defense and state and the chairman of the Joint Chiefs of Staff. But an AI presence at the table could have some fascinating—and challenging—implications for how decisions are made. The effects might be even more significant if the United States knew that its adversaries had similar technology at their disposal.

To get a grip on how the proliferation of artificial intelligence might affect national security decisionmaking at the highest levels of government, we designed a hypothetical crisis in which China imposed a blockade on Taiwan and then convened a group of technology and regional experts to think through the opportunities and challenges that the addition of AI would bring in such a scenario. We looked in particular at how the proliferation of advanced AI capabilities around the world could affect the speed of decisionmaking, perception and misperception, groupthink, and bureaucratic politics. Our conclusions were not always what we expected.

AI Could Slow Down Decisionmaking

Because AI systems may be able to accumulate and synthesize information more quickly than humans and identify trends in large datasets that humans might miss, they could save valuable analytic time while offering human decisionmakers better-informed grounds for their judgments. As Deputy Secretary of Defense Kathleen Hicks argued in November 2023, “AI-enabled systems can help accelerate the speed of commanders’ decisions and improve the quality and accuracy of those decisions.”

But discussions in our workshop highlighted several ways AI might also work in the opposite direction.

First, while AI systems can help organize and sift through data, they also produce more of it. This means they can raise as many questions as they answer. Decisionmakers in a crisis would need to spend precious time weighing, integrating, and authenticating these additional data sources and other outputs from the AI. In fact, during our unfolding Taiwan crisis, when we offered our experts a hypothetical AI assistant that could provide possible courses of military action and explain their likely consequences, the experts immediately wanted to know more about the underlying AI system so that they could interpret its recommendations. They needed to understand why the system was making the recommendations it was before they could have a degree of confidence in its prescribed courses of action. They also wanted to weigh the AI’s recommendations with more traditional sources of information—specifically the actual human experts around the table. This meant that AI became just another voice in the process, one that also had to gain the confidence of the decisionmakers.

AI proliferation might also slow decisionmaking by creating uncertainty about adversary intentions and forcing policymakers to ponder whether and how AI might be shaping adversary actions. Deepfake videos, for example, could factor into a crisis in various ways, such as through false news reports aimed at pressuring domestic public opinion or obscuring adversary operations or motivations. Even if policymakers discounted low-quality or suspected deepfake videos altogether—a reasonable possibility given the huge volume of false information that already exists across social media—a public reaction to advanced deepfakes could still intensify public pressure on the White House for a more aggressive course of action. Weighing this public outcry against the uncertainty inherent in an AI-proliferated environment would probably impede quick policymaker decisions.

In our scenario, for example, we considered a public reaction to a false video of Taiwan’s president being arrested by China’s security forces. It is easy to see how such a video, even if intended to indicate that China had achieved a coup de grâce in the crisis, could create immediate pressure on congressional leaders for a tougher U.S. response. Uncertainty about the veracity of the video, given China’s advanced AI capabilities, could further complicate and slow policymakers’ deliberations. And even if inflammatory information were later proven false, it might be too late to stop the avalanche of public demands, leaving national security decisionmakers with little room to maneuver.

AI Might Combat Groupthink . . . or Make It Worse

AI’s capacity to slow decisionmaking could have negative consequences if the result is that the United States loses the initiative, defaulting to a reactive posture that is one step behind the adversary. But it might also have benefits.

One of these benefits is that, when used effectively, AI-enabled systems can challenge leaders’ underlying assumptions. Decisionmakers in a crisis would probably be reluctant to delegate their final decision to an AI assistant, but the assistant might still be beneficial if it were able to offer out-of-the-box ideas, to “red team” preferred strategies, or to ensure that decisionmakers had considered all major alternatives or key variables. When designed and used in these ways, AI systems could strengthen decisionmaking by breaking up the groupthink that often affects human beings in time-crunched settings. AI might also help to avoid the influence of decisionmaking pitfalls like anchoring bias (which leads individuals to fixate on options presented first) or recency bias (which leads individuals to privilege information heard most recently) by widening the range of options considered. Incorporating and evaluating additional courses of action would be time-consuming and could still slow the deliberative process, but it might be worth it if it produced better decisions.

Unfortunately, AI can also have the opposite effect of encouraging groupthink, especially in situations where the decisionmakers have high confidence—or too much confidence—in the capability of the AI system. In this situation, overconfidence in the technology relative to the fallible human mind could lead decisionmakers to converge on a single viewpoint—that of the AI. It is not hard to imagine that an intelligence analyst, for example, might hesitate to challenge the AI if it is thought to be all-knowing—one of our experts compared having an AI at the national security meeting to having Henry Kissinger sitting at the table. In other words, pressure to go along with the AI system’s recommendations could be strong in group settings, especially under time constraints, and this could sideline even the most experienced experts with the most innovative ideas. Clearly this is a situation to be avoided, but just keeping a human in the decisionmaking loop may not be enough to prevent the AI from effectively running the show.

AI Could Reinforce Existing Bureaucratic Advantages

AI decisionmaking aids introduced into the national security process would not be neutral with regard to the roles of the major agencies that currently shape policy decisions—the State Department, the Defense Department, and the Intelligence Community. To the contrary, these systems might increase the weight and influence of some of these agencies at the expense of others.

Because AI systems are heavily shaped by the algorithms and assumptions built into them, they are likely to reflect the biases of their developers, even if unintentionally. As a result, the bureaucracy that develops and owns the system could end up with even greater power within the decisionmaking process. An AI assistant developed by the Department of Defense or the Intelligence Community, for example, might recommend different courses of action than one developed by an agency outside the defense and intelligence communities.

This might not be a concern if all government agencies were equally well placed to develop AI systems, but they are not. The best-funded bureaucracies are most likely to develop the most advanced and capable systems. If this means that the Department of Defense develops and owns the most powerful AI platforms, the result could be a heavier defense influence on decisionmaking during a crisis.

AI Could Compound Misperception

Because AI tools offer systematic ways to sift through larger volumes of information, they are often thought to reduce misperceptions. But our discussions highlighted several reasons this might not be the case and pointed to ways that some sources of misperception could be compounded by the proliferation of AI, ultimately escalating a crisis.

In the political science classic Perception and Misperception in International Politics, Robert Jervis argues that human beings tend to view the actions of others as more centralized, coordinated, and intentional than they actually are. When it comes to AI, this tendency to assume intentions that are not actually present could be magnified by widespread overconfidence in the accuracy of AI tools.

We posited a scenario in which decisionmakers have intelligence indicating that the adversary (in this case China) has put an AI system deep into its decisionmaking loop and taken the human out of that loop. An adversary might do this and make it known as a means of demonstrating commitment to a certain course of action, such as a willingness to escalate. In this case, the AI system might serve as a hands-tying mechanism of sorts that precommits the adversary to kinetic action.

In this scenario, uncertainty about the presence and role of the AI system made interpreting the adversary’s intentions very difficult for our experts. Specifically, it became unclear whether adversary moves were determined by an AI or by a human being. In an actual crisis, U.S. policymakers would probably be similarly unsure whether a human or a machine was on the other side of the physical or virtual battlefield. Uncertainty about the role and presence of AI would also make signaling more difficult, increasing the risk of misperception and miscalculation and creating a perfect storm for unintended escalation even when both sides prefer to avoid conflict.

In a Taiwan scenario, for example, U.S. decisionmakers’ judgments about whether an AI was making decisions for Beijing could affect their interpretations of China’s actions and shape their responses. If decisionmakers knew that the adversary was using AI systems, the overriding tendency would probably be to view risky or aggressive behavior as an intentional feature of those systems’ design. But that behavior could just as well be driven by a malfunction or mistake in the algorithm. If moves by adversary AI systems are automatically interpreted as intentional, without full consideration of alternative explanations, the chances of escalation may increase. In fact, a separate experiment run by Michael Horowitz and Erik Lin-Greenberg found just this: participants were more willing to retaliate when an adversary’s AI-enabled weapon accidentally killed Americans than when a human operator was responsible, demonstrating greater forgiveness of human error than of machine error.

Training and Prior Experience Would Be Key

In reality, of course, AI systems are only as good as the data they are trained on, and even the best AI systems have biases, make errors, and malfunction in unexpected ways. In the end, they may be no more accurate than human experts, especially where context matters a great deal. How much experience a group of policymakers has with the AI beforehand, and how well trained they are in its capabilities, could end up being a decisive factor in whether the effects of AI are beneficial or not. This creates a pressing need for training in the use of these tools, a fact increasingly recognized, for example, in the draft policy guidance on government use of AI issued by President Joe Biden’s administration in November 2023.

Hands-on experience with AI-enabled decisionmaking tools and capabilities can educate users about the limits of such systems and increase their familiarity with the tools and their confidence in leveraging them quickly and advantageously when time is short. Such training can also inform potential users about the contexts in which a given AI tool works well, those where it may fall short, and why, preventing misuse that might have negative outcomes. Finally, training that includes information on adversary AI systems could give decisionmakers an understanding of adversary capabilities and intent, including how both might create opportunities and challenges.

Clear AI Norms Will Strengthen Crisis Stability

This initial survey of the impact of proliferated AI capabilities on crisis decisionmaking identifies some opportunities but also several risks. Training might mitigate some of these risks, but AI will always have a complicated relationship with decisionmaking and decisionmakers—it will never be an easy plug-in to a decisionmaking process, nor will it replace the need for humans to make high-level strategic choices or to adjudicate between different sources of information.

All of this adds to existing arguments in favor of efforts to establish some form of AI governance akin to an arms control regime—a set of norms and acceptable uses that would govern the development and deployment of AI between the United States and its adversaries, especially China. Given the risks and challenges that emerge as AI systems become more common on and off the battlefield, such a regime could be stabilizing if it reduced some of the uncertainty that causes misperceptions.

The challenge, of course, will be adopting a set of principles that all relevant parties can agree to, as well as a mechanism for ascertaining compliance. This challenge is greatly magnified by the fact that the leaders in AI innovation are commercial firms, not governments, and by the rapid pace at which AI systems are evolving and advancing. The Biden administration has sketched out a policy to guide military uses of AI and set up an AI Safety Institute to anticipate and mitigate dangerous uses of AI technology. While there is some alignment between the United States and key allies on these issues, any AI arms control regime would have to include China to have a real impact. The two competitors held preliminary discussions about AI safety and governance in May 2024, but given strained ties and limited dialogue between Washington and Beijing, progress in the near term may remain slow. Still, U.S. policymakers should continue to push forward with willing partners where possible.