This photograph shows the first batch of Ukrainian-made drone missiles delivered to the Defence Forces of Ukraine in Kyiv on December 6, 2024, amid the Russian invasion of Ukraine.
Source: Getty

A Digitized, Efficient Model of War

The conduct of war has increasingly become a fight with a one-dimensional, digital representation of the enemy.

by Rupert Barrett-Taylor and Gavin Wilde
Published on June 3, 2025

This is part of a series on “The Digital in War: From Innovation to Participation,” co-produced by Carnegie’s Democracy, Conflict, and Governance Program and Swedish Defense University.

Battlefields from Ukraine to Gaza have recently been marked, as have many conflicts over the last two decades, by the extensive use of airborne assets, surveillance, and computing power in pursuit of victory. Both precision-guided weapons and unmanned vehicles create new and heavy demands on training and logistics, as well as whole organizational structures devoted to finding targets. In this regard, the datafication of the battlefield and the automation of targeting have reached a modern-day zenith, on the heels of decades of theorizing about “information dominance” in warfare.

However, this digital-age collection and targeting process is founded on a premise of fierce optimization and brutal efficiency. The resulting model of warfare is a product of both physical observation and digital construction. It is process-driven, techno-centric, and ultimately premised on being entirely calculable. A model of warfare that demands efficiency above all else not only risks fostering a disregard for pragmatism and efficacy but is also, arguably, a subtle cover for the exercise of institutional power and control. This article critiques this overly reductive model of war in the context of increasing demands for greater automation and applications of artificial intelligence (AI), which are widely presumed to be fixtures in future conflict.

(Efficient) War, What Is It Good For?

Since at least the 1950s, the availability of large-scale computing has enabled a culture in which organizations, including militaries, manage complexity by seeking underlying, generalizable laws governing their disciplines, relying on quantification and data processing to do so.1 Each organization selects what it considers to be the most appropriate data and processing methods in the hope that computation yields competitive advantage. The underlying assumption is that the adoption of these forms of technology improves efficiency and optimizes the tasks of any organization.2 However, this idea rests on a series of contestable beliefs about technology. While the stakes of these assumptions are marginal in many arenas, relying on them too casually carries significant danger in others. For example, large language models (LLMs) and their derivative products have been known to produce unreliable responses to queries.3 At the other end of the spectrum lies the tragic wrestle between the automated flight system and very human pilots on Lion Air Flight 610 that ultimately cost almost 200 people their lives. Such cases may be exceptional, but they share a common thread: “the system was responding to faulty data.”4

In the military domain, there is a long history of belief in the power of digital representation, matched by similarly cautionary evidence of the dangers inherent to this belief. Technological advances in the digital era promised to deliver a toolkit for efficiency to military actors, economizing time and labor, and ultimately saving blood and treasure, while still accomplishing strategic goals.5 Using digital representations in a repeatable, process-based framework allows optimization and the minimization of friction. In turn, this allows waste to be eliminated and the speed of military operations to increase. This apes private-sector responses to an economic model based on fierce market competition. Computation has thus been situated at the heart of warfare, but the consequences of a drive to efficiency are underestimated. Moreover, what is often unappreciated is that the foundations for this model of warfare were laid even before the development of the digital computer; arguably, the current computing-reliant model has only accelerated existing preferences.

Intra-military competition for a dominant warfighting paradigm echoes the tug-of-war between an empirical perspective and an imperial one, as characterized by political scientist James C. Scott: “The relation between scientific knowledge and practical knowledge is . . . part of a political struggle for institutional hegemony by experts and their institutions.” In this regard, efforts to digitize the battlefield are “not just strategies of production, but also strategies of control and appropriation.”6 Thus, by excluding the unmeasurable and nondigitized qualities of the environment, the reification of targeting not only protects the pursuit of efficiency but also forecloses insights that might prompt institutional introspection about accomplishing predefined goals. This risks what former UK signals intelligence chief David Omand warned about: an institutionalized tendency to become “better at counting examples of [digital] human behavior than critically explaining why they are and what it might mean.”7 If, as Elting Morison observed, military organizations are “societies built around prevailing weapons systems,” then the targeting process may become primarily a means to exert influence within these societies, which may or may not intersect with victory over an enemy in battle.8

As far back as the 1920s, enthusiastic supporters of airpower sought to show how it could defeat a future enemy through systematic bombardment of industrial infrastructure. However, assessments of Second World War strategic bombing against Germany indicate that it would not have succeeded on its own, absent the ground campaign in Europe.9 During the 1960s and 1970s, the U.S. Air Force linked the same ideas to the power of digital computing. U.S. forces applied computing against the North Vietnamese supply system during the Vietnam War by processing data gathered from sensors in the jungles of Laos.10 This expensive and highly technical project was, at best, mildly successful, and easily spoofed by the enemy on the ground. Nonetheless, and despite a lack of conclusive evidence that systematic targeting resulted in concrete effects against enemies, by the end of the Cold War NATO had built a suite of collection and processing capabilities to survey and digitally represent the battlefield.11 It remained anchored in a doctrine that sought to systematically target Soviet military forces, their logistics, and their strategic infrastructure beyond the immediate battlefield. However the Soviets may have perceived these capabilities, amending their own planning and technological development in response, this datafied, mechanistic view of warfare remained largely speculative, yet was ceaselessly pursued by Western forces.

Over time, the resulting AirLand Battle doctrine evolved into the later and better-known Revolution in Military Affairs, and remained the foundation of modern doctrine right up to its application against insurgents and terrorists throughout the Global War on Terror.12 Operation Desert Storm, in which U.S. precision targeting against key decisionmakers and nodes ostensibly proved decisive, served as a testbed for the theory. The promise of quicker victories, fewer fires, and fewer casualties thanks to superior technology has since proven broadly alluring: precision munitions comprised 8 percent of U.S. fires in the Gulf War and 29 percent of NATO fires in Kosovo; by the Iraq War in 2003, the figure had climbed to 68 percent.13

However, the era of uncontested U.S. military hegemony was arguably a false dawn: application of the same methods and ideas over the next twenty years would highlight the problems and flaws in this set of doctrinal ideas. The question of whether targeting a digital representation of the enemy results in strategic victory over any opponent remains open, while the contribution of so-called precision fires to wartime objectives is disputed. For instance, precision strikes in the Iraq War failed to achieve their objective of eliminating senior Iraqi leaders, killing over 100 civilian bystanders in the process;14 industrial-scale targeting of the digital representation of the insurgency in Afghanistan did not significantly degrade its ultimate ability to retake the country.15 Despite the AI-enabled targeting used by Israeli forces in Gaza, the destruction by late 2023 was nevertheless comparable to twice that of the 1945 Hiroshima bombing.16 The attending civilian casualties, collateral damage, and propaganda aspects of such “precision” at scale are all too often dismissed.17

In Ukraine, targeting plays an adjunct role: it is arguably more performative and supportive than likely, by itself, to defeat Russian forces that rely heavily on mass.18 However, targeted precision strike fits Western expectations of what factors determine success in war. These expectations reflect cultures of engineering-led innovation and economic competition rather than territorial defense or conquest. From digital representations of the battlefield emerge targets created from mountains of information, processed through socio-technical systems exploiting multiplicities of software and algorithms. Conventional wisdom, driven by economics, expectations of technology, and the promise of computing, places an emphasis on narratives of speed and efficiency. Faster computers and better-optimized systems should lead to a prioritized array of targets, which should, in theory, represent a direct path to the destruction of the enemy’s ability to wage war. The question remains, however, why this model has not already led to the collapse of Russian defenses or a capitulation by Hamas, as these conflicts drag on with little end in sight. The net outcome of heavy investment in techno-surveillance still appears to be measured in terms of body count and attrition of materiel on both sides. Contrary to such aspirations, precision has, on its own, yet to identify the Schwerpunkt promised by data-led military concepts.

Absolutely Nothing (Except Control)

As noted by historian Simon Winchester, a preference for precision over accuracy means the “outcome may not necessarily reflect the true value of the desired end . . . accuracy is true to intention; precision is true to itself.”19 Take, for example, a Swiss-engineered wristwatch: it may keep time down to fractions of a second, but such precision is of no use if the watch is set three hours behind. The relationship and distinction between precision and accuracy, efficiency and efficacy, only become blurrier and more complex with the introduction of new technologies and organizational imperatives.

By extension, Western attitudes toward technology and warfare are largely founded on the positivist nature of technological enquiry, which leads to digital construction of the battlefield primarily from data that can be measured and observed—which represent entities presumably behaving according to some underlying fixed, universal laws.20 Data is then transformed through linear processes to constitute a discernable, digital world rather than attempting to translate the complex, physical phenomenon of conflict.21 Underlying this is a cultural presumption that human subjectivity can be hedged against by relying instead upon more “raw” and thus ostensibly more objective data.22 Yet, warfare is unavoidably both material and social. It is subject to forces beyond those measured through sensors, no matter how sophisticated, and these forces play roles which are instinctively known but digitally ignored.23 The target produced from such a system is stripped of its context and is instead constructed from moment to moment from data available to the system. How it arrived in the gaze of the targeting process is unknown and irrelevant. The reason for excluding the unmeasurable and unobservable is twofold, and best understood by conceptualizing the targeting process as a sociotechnical system. When understood through this lens, targeting assemblages do more than just generate knowledge about the battlefield; they are in fact mechanisms of control.

As also noted by Scott, “Any large social process or event will inevitably be far more complex than the schemata we can devise, prospectively or retrospectively, to map it.”24 In this regard, the brief flirtation by the United States and its allies in Iraq and Afghanistan with so-called Human Terrain Teams is instructive. These teams were deployed to gather cultural knowledge: complex, qualitative information meant to inform operational choices. In practice, their work was arguably set aside as the conflict came to be led by remote surveillance and targeting rather than by counterinsurgency practices. The introduction of these unquantifiable insights proved challenging to the established forms of knowledge aggregation prized by the institution, which ultimately reverted to more familiar and relatively streamlined processes.25 Whatever knowledge this ultimately excluded may or may not have been crucial to victory. But the primarily datafied representation of an adversary acts as a mirror for our own military organizations, driving decisions about how to fight and with what, and revealing the degree of control these preferences exert upon our practices.

Efficiency as an End unto Itself

As discussed above, technology can serve wider, sometimes unseen objectives, acting as a conduit for institutional control every bit as much as a neutral, rational instrument of military victory. This is often obfuscated behind the logic of efficiency, which is positioned as the driving force of technological development. The phenomenon has been explored by philosophers and economists for decades. Frederick Taylor’s Principles of Scientific Management introduced industrial-age managers to new ways of maximizing productivity through the identification and sequencing of discrete tasks.26 Martin Heidegger warned that modern technology, rather than serving as an instrument, instead instrumentalizes its users, rendering humans into a “standing-reserve in waiting.”27 Jacques Ellul’s notion of technique—the optimization of everything to make life uniform, calculable, and ultimately predictable—imparts more value to the artificial than the physical.28 Michel Foucault examined how technology objectifies its human subjects, stripping them of agency and turning them into “a function, ceaselessly modified.”29 These themes found resonance in the later work of military theorists interrogating the role of technologically enabled efficiency in war.

For instance, Martin van Creveld juxtaposes efficiency (the optimization of time and labor) with efficacy (the ability to achieve a discrete wartime objective).30 Far from being mutually complementary, these two dynamics in fact work at cross-purposes. Critiquing the Pentagon’s rush to introduce electronic mail to every staffer, he highlighted:

“The original reason for introducing computers into the military, and for linking them to each other, was the amount of information needed to manage modern armed forces and modern warfare. Once computers and the networks linking them were available, however, their very existence led to further huge increases in the volume of information to be processed. . . . So it went, in an escalating spiral to which there was no clear logical end. Theoretically the object of the exercise was to attain that kind of perfection of which only technology is capable. In practice, it became increasingly clear that this goal would only be achieved when there was nothing left to perfect at all.”31

Like “the market,” or the thicket of self-reinforcing rules generally referred to as “bureaucracy,” the danger of automation and artificial intelligence is less that they become lethal through sentience or deliberate malice than that their users become irreversibly beholden to them through a less-than-deliberative path dependency.32 In an economic context, this might be illustrated by pointing out that the logical extension of maximum efficiency in theory would likely lead to mass unemployment in practice.33

For instance, the retail sector’s push to save on labor costs and streamline inventory management led it to introduce high-tech self-checkout kiosks. The years-long experiment, however, appears to have failed: what stores saved on hiring cashiers and baggers, they ended up losing to costly IT maintenance, physical upgrades and upkeep, theft, and diminished customer satisfaction. “At the bottom of all the supposed convenience” the new tech may have promised, “you do actually just need a lot of people to operate a store.”34 Thus, the goal of efficiency becomes efficiency itself: no longer in service of pragmatic organizational objectives, it becomes another dogma, served because more efficiency is seen as inherently better and is too often conflated with efficacy. As an objective without end, it becomes the perfect shield for other, less obvious organizational goals—including what economist Dan Davies calls “accountability sinks,” which offload human responsibility for outcomes onto technologies or processes.35 The final section of this piece brings together these different themes.

Conclusion: Technology and the Limits of Observable Data

The logic of targeting that underpins modern conflict obscures myriad contradictions and dissonant objectives. It is driven by a rational but ultimately unrealistic desire to control chaotic wartime environments, as well as to exert institutional control. These instincts tend to rely on technology to flatten complexity—including human agency and causality among friend and foe, alike—in potentially detrimental ways. Moreover, the technologies employed to manage information economies threaten to propagate entirely new ones, demanding ever more resources in a recursive cycle of innovation.36 The insatiable demands for optimization lead militaries to conflate efficiency with proficiency, and to adopt a scientistic view of warfare. Consequently, militaries risk neglecting the physical demands of war in service of ever more granular digital representations. A more critical approach to technology is needed.

Recognizing that mainstream institutional and practitioner understandings of the role played by technology in war are shaped by positivism highlights the limits of targeting and the practical dangers of further application of algorithms. A Western, engineering-led philosophical approach to targeting does not allow interrogation of why the enemy fights, only that they do fight and that they must be constructed as a system of targeting possibilities. As these preferences are built into the technology used to target the enemy, and software becomes increasingly “black-boxed,” its origins become increasingly difficult to understand. The military-defense domain is investing in and integrating growing numbers of AI-supported tools that aid targeting by increasing the volume of data processed, which increases precision, but at what cost to the intent of a campaign? Discussion of what data enters the algorithm and generates the target is closed off. Historic preferences for targeting limit further exploration of its consequences, and presenting targeting as the natural evolution of warfare within its current institutional philosophical context forecloses dissent. Debate is thus limited to the practical limits of precision weapons rather than systemic interrogation of targeting itself, despite its role in exerting control over entire military organizations and practices. The gulf between the software representation of the enemy and the enemy themselves in their battlefield reality is thus ignored in an endless pursuit of both control and optimization.

The conduct of war has increasingly become a fight with a one-dimensional, digital representation of the enemy, extracted from what it is observed to be doing, primarily through sensor input. As targeting continues to abstract war from battle to bureaucracy, what we understand as the enemy is mediated by technology.37 This engineering-derived algorithmic representation of our enemy is destined to be ever-present, thus never defeated, so how can we alter the philosophical lens through which the battlefield is understood? This article has highlighted that not only is the targeting process a means to find the enemy, but it also seeks control over both its own organization and the battlefield. Efficiency and control have been conflated with efficacy and become means to their own ends. Is it possible for militaries to recognize and temper their tendency “to fit men into the machinery rather than to fit the machinery into the contours of a human situation”?38 

Notes

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.