Podcast Episode

Inside the Pentagon’s AI War Machine

In this episode of The World Unpacked, Katrina tells host Jon Bateman about the creation of America’s AI war machine, the rise of Palantir, and the fully autonomous weapons already being tested.

By Jon Bateman and Katrina Manson
Published on Mar 27, 2026


The U.S. is fighting its first full-scale AI war in Iran — but key details remain largely hidden from Americans and the world. Which military decisions are being automated? How well does AI really perform on the battlefield? Can guardrails prevent fatal errors?

Katrina Manson’s timely and deeply reported new book, Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare, lifts the lid on this hidden world.

Transcript

Note: this is an AI-generated transcript and may contain errors

 

Jon Bateman: We are living through a historic moment in the automation of war. Right now, in Iran, AI is enabling U.S. forces to strike at unprecedented speed and scale. There's also another major battle being fought in the courts over who will control military AI: officials like Pete Hegseth, or private companies like Anthropic. But the debates over AI warfare have been missing something essential: basic facts. What are the main AI systems being used in war right now? What precisely do they do? How do they really work, or do they work at all? The government won't tell you. So I turned to Katrina Manson, a reporter for Bloomberg who has literally just written the book on America's most important military AI program, Project Maven. She told me never-before-heard stories about a secret program to turn jet skis into autonomous weapons, why Palantir is both loved and hated in the Pentagon, and how top U.S. military planners envision AI changing the course of a future war with China. I'm Jon Bateman, and this is The World Unpacked. Katrina Manson, welcome to The World Unpacked.

 

Katrina Manson: Thanks for having me.

 

Jon Bateman: So you are the author of a spellbinding new book called Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare. So let's just start with the basics. What is Project Maven, and why is it essential to understanding the dawn of AI warfare?

 

Katrina Manson: Project Maven is this effort that the Pentagon launches into in 2017, really the year before, looking for some kind of AI project so that the Pentagon can catch up with what it's most worried about, which is the potential for ever having to face China in a conflict. At this point, the US has been fighting in Afghanistan and Iraq against enemies that it considers really unsophisticated. And this is the future of a war that could involve attacks on satellites, no air dominance, all these sorts of very daunting elements for a military as big and as well-funded as the US to consider when going up against a near-peer adversary, and I think today people might call them a peer adversary. AI is the tool that people like the Deputy Defense Secretary at the time, Bob Work, want to develop, because really what he's looking for is autonomy and automation, to make these processes so fast that the US can get ahead of the way the adversary is thinking. They come up with Project Maven, which is a simple and narrow idea, in public at least: to bring AI to rifle through drone video footage to look for objects. And at the time you had, in one example alone, five people staring at a screen all day long, looking at drone video feeds but actually missing a lot, and simply not being able to keep up with the amount that the US was filming, because it was depending so heavily on...

 

Jon Bateman: ...and drone wars. Nowadays, when people hear the phrase AI, immediately their mind goes to large language models, systems like Claude or ChatGPT, where you type something in and text comes back out. They're highly general systems that can be used to think, research, and reason. But as I understand it, Project Maven started out with a much narrower version of AI: computer vision, looking through cameras to identify what objects are appearing on the screen. Explain that. Why did the military start with this narrow AI? What was it trying to achieve, and what happened with that effort?

 

Katrina Manson: It really comes down to the idea of the colonel who was the chief of Project Maven at the start, the Marine colonel, Drew Cukor. They were trying to come up with a pilot project that could demonstrate the future of war, to really bring on those future technologies. And he came up with computer vision. Computer vision is a really simple idea. It wasn't very easy to pull off at the time and still isn't, but it is essentially just picking out objects from a digital screen. And at the time, there were commercial tools doing this open source. There's even a famous one that they used, I found out, in a presentation: if you imagine a scene from James Bond, Daniel Craig on his motorbike going through the marketplace, I think it's in Istanbul, and everything on screen is popping up, popping up, saying motorbike, vehicle, basket, marketplace. It is simply just identifying what is in front of you.
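To make that technique concrete, here is a minimal sketch of this kind of off-the-shelf object detection, using a COCO-pretrained detector from the open-source torchvision library. It is purely illustrative: Project Maven's actual models, training data, and object classes are not public, and the input file name here is hypothetical.

```python
# Minimal object-detection sketch: label objects in one video frame,
# in the spirit of the James Bond demo described above. Illustrative
# only; not Maven's actual software.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]   # COCO class names
preprocess = weights.transforms()

frame = read_image("frame.jpg")           # hypothetical video frame
with torch.no_grad():
    pred = model([preprocess(frame)])[0]  # dict of boxes, labels, scores

for label, score in zip(pred["labels"], pred["scores"]):
    if score > 0.5:                       # confidence threshold
        print(f"{categories[label.item()]}: {score.item():.2f}")
        # e.g. "motorcycle: 0.87"
```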

 

Jon Bateman: As you know, Katrina, I was working in the Pentagon during this period. I was working for the Chairman of the Joint Chiefs, General Dunford, at the time. And I got a briefing from the Project Maven team and was generally aware of what they were doing. And I think your book does a remarkable job of capturing a couple of key themes. One is the broad atmosphere: there were far-seeing people within the Defense Department who could see that AI was the future of warfare, wanted to push the U.S. military into this future as fast as possible, and felt a real sense of urgency about it. On the other hand, an awareness that there were so many technical challenges to overcome, and so many bureaucratic and institutional challenges to overcome, that it often wasn't clear where to start. And so just the idea of this pilot project: let's start with something very tangible, something that AI was at least, you know, well suited for at the time. They settled on computer vision. They were often looking at drone footage, trying to classify objects. And as you report in the book, this ultimately took them all the way up to being able to dynamically target in the field: being able to look at a live video feed and identify a potential hostile actor or a potential civilian, to strike or not strike. Tell us about that shift.

 

Katrina Manson: This comes out of a decision that within the Maven team was controversial, which is that Colonel Cukor wanted to push for, I suppose I would call it almost an everything app, but essentially a common operating picture where you could look at the battlefield on a digital map. These things, of course, already existed in numerous different platforms throughout different services across the US military, but he wanted one that included AI, and he had this idea of a white dot that you can click on to take the coordinate and target against, this sort of everything app that Google was meant to make, a kind of Google Earth for war. Now, when Google, they didn't exactly drop out, but they didn't renew, it was clear that he was never going to get his Google Earth from Google. And he rang up Palantir, and he already had connections with Palantir. He believed in their data analytics platform and said, I've got this idea, can I come talk to you about it? He essentially pitched them, as I'm able to show in the book, what has become Maven Smart System, which now really is, I think it's fair to describe it as almost the operating system of war today, the Windows of war. Every single combatant command has it. It's being used in support of US operations against Iran. CENTCOM is a really big user of it, and EUCOM in support of Ukraine; I think really that's the moment that it gets used operationally. There were different examples in Afghanistan, trying to see what they could identify and then target, and it starts to pick up people that human screeners don't see on the video footage. And eventually Maven Smart System becomes a platform that can take in multiple feeds, not just computer vision, but also signals intelligence feeds, even open source feeds if they've gone through the right kind of authorization to go in. And I think the most recent public figures are something like 179 data feeds going into Maven Smart System.

 

Jon Bateman: Maven Smart System is the evolution of this original Project Maven. It has become something close to the prophecy foretold by Colonel Cukor and others, of a kind of all-seeing eye that brings together everything that the military is aware of. At least that's the theory. And Katrina, tell me if I have this right: it's actually performing maybe two distinct functions. One is just as an aggregator of data from multiple sources, so that you can have a single picture that combines radar from over here, signals from over there, drone footage from over there. And so if you're looking at a single physical location, it can integrate in one view all of those signals and sources. But then also it's integrating AI to discern and make sense of those signals, by, for example, identifying an object, identifying a belligerent or a friendly force. And then increasingly, the AI is being upgraded through large language models to maybe do things even beyond that. What's your sense of how, if at all, this has actually changed the prosecution of war? If we were to be inside a command center right now in the Iran war, what would we be seeing and noticing that is different from how that same command center might have operated prior to the advent of Maven Smart System, in, for example, a previous iteration of the war on terror?
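As a way to picture those two functions, here is a toy sketch of a common operating picture: heterogeneous feeds are bucketed by location into one integrated view, which an AI model could then classify. Every name here and the grid-cell fusion logic are assumptions for illustration, not Maven Smart System's actual design.

```python
# Toy common operating picture: aggregate multi-source observations
# by location so one physical spot can be viewed in a single picture.
# Illustrative only; not how Maven Smart System actually works.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Observation:
    source: str    # e.g. "radar", "sigint", "drone_video"
    lat: float
    lon: float
    payload: dict  # raw feed content

class CommonOperatingPicture:
    def __init__(self, cell_deg: float = 0.01):
        self.cell_deg = cell_deg        # grid resolution (~1 km)
        self.cells = defaultdict(list)  # (row, col) -> observations

    def _cell(self, lat: float, lon: float) -> tuple:
        # bucket nearby observations into the same grid cell
        return (round(lat / self.cell_deg), round(lon / self.cell_deg))

    def ingest(self, obs: Observation) -> None:
        self.cells[self._cell(obs.lat, obs.lon)].append(obs)

    def view(self, lat: float, lon: float) -> list:
        # one integrated view of a single physical location,
        # ready to be handed to an AI model for classification
        return self.cells[self._cell(lat, lon)]

cop = CommonOperatingPicture()
cop.ingest(Observation("radar", 35.700, 51.400, {"track": "fast mover"}))
cop.ingest(Observation("drone_video", 35.701, 51.402, {"object": "truck"}))
print(cop.view(35.700, 51.400))  # both feeds appear in one picture
```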

 

Katrina Manson: In the early days of the Russian invasion of Ukraine, in February 2022, European Command, buttressed by the 18th Airborne Corps, which sent people over, set up a targeting cell. Now, in those early days, the AI wasn't very good. It wasn't adapted to Ukraine or to snow or to tanks, but they made a tremendous effort to gather data, to get more satellite footage, to retrain the algorithms overnight, calling up Microsoft and AWS and all those folks and saying, get me a better algorithm, improve this. And they trained it on that long line of tanks that was trying to get into Kyiv, and started to find things, and were able to start sharing what they were soon calling points of interest, anything just short of the legal definition of a target so they didn't get in trouble, but they still got in trouble. Sharing it with the Ukrainians and integrating that process as much as possible as well, sometimes down to seconds between the Americans finding something and the Ukrainians acting against it to send a weapon to it. And to your point about what we are seeing today in Iran, the number of targets that the U.S. has prosecuted: I think the day we're recording, Central Command has just put out its latest figures, 7,800 targets struck. In the first 24 hours of Central Command's operations against Iran, they hit 1,000 targets. Within the first 10 days, I think they were up to 6,000. And of course, they're specifically attempting to target only military targets. So they have to find them, and a lot will be on the move. They may not already have been among the already chosen targets; they may be dynamic targets, which means targets that are moving or hadn't been predetermined as a target. And Maven is certainly part of that, I've been able to report, Maven and Claude, the Anthropic LLM. The LLMs aren't finding the targets through computer vision, but they are speeding up those processes that allow a quick decision.

 

Jon Bateman: The ability of AI to bring speed and scale to decision-making is clear, and the numbers that you're offering are remarkable. Of course, in many wars in the past, the Pentagon itself has become seduced by numbers, body counts, in a belief that that equates to winning the war, right? When, as we know, winning a war is a much more multi-dimensional military and political problem that you don't achieve just by making sure that you can prosecute more targets more quickly, although it could help under certain circumstances. So do you have a sense of whether the use of these systems in places like Ukraine or Iran, by the U.S., the Ukrainians, other allies, has really led to tangible battlefield differences? Is Ukraine in the fight for longer, or taking territory in places that it couldn't, because of these systems?

 

Katrina Manson: I think you draw a really important distinction between tactics and strategy. And the claim for AI is not only that it will help tactically, but also that it will help strategically. And at that point, we open a much bigger discussion: the questions you ask of AI become the difficult work now. And that has to be a human decision, not just whether you take AI's word, but what job you want AI to do. Because if you just want AI to speed up scale and scope, yes, you will have more, potentially more casualties; you will hit targets very, very quickly. But if you haven't assessed the things that Russia hadn't assessed sufficiently, like the will to fight... That is where I think people like Cukor really understood that AI could be helpful. And I think if you look at the case of Indo-Pacific Command, which has leaned very heavily into AI only in recent years, the commander there now, Admiral Samuel Paparo, is very much a proponent of AI. He is calling AI summits. He's calling on industry to do more. He's publicly talked about this idea of hellscape, the idea that he could defend Taiwan from any potential invasion from China with the help of multiple autonomous and other platforms, to buy him what he calls a month. So if ever China did decide to try and invade, they could hold them off for a month, long enough to get other US platforms into the area and also, of course, let diplomacy do its work. There, you have to understand how reasoning models behave when you ask AI, will this work? That is something that the US is beginning to trial. And it is also very complicated, because, as we've all known, there are hallucinations, bias, algorithmic drift, this tendency of algorithms to become less accurate with usage over time. Algorithms tend to agree with the questioner. They tend, several research studies have shown, towards escalation. So asking an algorithm to help with deterrence is a very complicated task, but they're trying, and I think they're not putting so many eggs in that basket that they will rely on those outputs. And there are ways of trying to introduce guardrails. So if you say, should I attack tomorrow, if it were such a basic question, you can ask the AI to check itself before feeding out the answer. But the extent to which those guardrails are in there, and how the U.S. military starts turning to it, I think, becomes key. The fear, the fragility in this system, is if you just think, well, AI can get me so many targets struck, that's all I need to do. And as we know, anyone who starts a short war often ends up with a medium-term war or a long war.
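For what such a guardrail might look like in code, here is a minimal sketch of the "ask the AI to check itself" pattern Manson describes: a second model pass critiques a draft answer before anything is released. The `llm` function is a hypothetical stand-in for any chat-completion API, and the whole flow is an illustration, not a description of any deployed military system.

```python
# Self-check guardrail sketch: a second LLM pass reviews the first
# answer for the failure modes discussed above (sycophancy toward the
# questioner, escalation bias, unsupported claims) before release.
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call.
    raise NotImplementedError

def guarded_answer(question: str) -> str:
    draft = llm(question)
    critique = llm(
        "You are an independent reviewer. Does the draft below show "
        "sycophancy toward the questioner, escalation bias, or claims "
        "unsupported by the question? Reply PASS or FAIL, with reasons.\n\n"
        f"Question: {question}\nDraft: {draft}"
    )
    if critique.strip().upper().startswith("FAIL"):
        # withhold rather than feed out a flagged answer
        return "Withheld pending human review: " + critique
    return draft
```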

 

Jon Bateman: That seems to be what has happened with Israel and Gaza, for example. AI has enabled Israel to prosecute targets at a speed and scale that is probably beyond what can be responsibly processed by its military, and so instead is helping to create a kind of hellscape in Gaza. That's what you want to avoid. On the other hand, let me make a qualified case for the utility of some of these AI tools and get you to tell me whether it holds up based on what you've seen in your reporting. There is often an ick factor in any new military technology. Drones would be an example. Many people feel disturbed or concerned by drones for some of the same reasons as AI: it seems to take the human out of the equation, make it easier to strike, reduce some of the risks, increase civilian casualties. And yet drones have been an important part of keeping Ukraine sovereign and giving it territorial integrity despite this aggressive invasion from Russia. I could imagine a situation in which AI performs the same role with Taiwan, if Taiwan is invaded by, you know, an aggressive authoritarian China, and AI is what gives Taiwan, or Taiwan and its friends and allies, you know, that month or two months to shore itself up. Would that actually not be a great gift to the free world? Because China certainly will be using these tools in its assault.

 

Katrina Manson: So you're asking me if drones are a great gift to the free world?

 

Jon Bateman: I guess I'm asking... How do we balance our understandable concern about the humanitarian implications of new military technology with the fact that these same technologies can be used for just causes, such as defending a beleaguered democratic country under assault by a larger imperialist authoritarian regime, whether that be Russia or China? That there is a kind of balancing act there. Does that make sense?

 

Katrina Manson: Sure. I mean, the Ukraine example is the one you have to start with, because much like trench warfare in World War One, people don't plan for the way wars develop and then have to respond very quickly. And so the effort that the US is now making on developing drones, from a very late start, from behind, of course, is triggered by watching what has happened in Ukraine. Ukraine and Russia, I think, will pump out four million drones each this year. And so for people who are really championing the cause of drones in a China conflict scenario, of course, that is partly about a deterrence message. There could be a time where a Chinese exercise around Taiwan is just a fig leaf for an actual invasion, and it will be very difficult for the US to tell that moment. And that is actually where AI could come in helpful, in those predictive patterns. But in terms of the drones, those drones could be jammed. I mean, that's one of the concerns that the US has. And a cable over sea from a ship isn't going to cut it. And so in a very different scenario, the US is thinking about autonomous drones. It's also thinking about how to overcome jamming. And the effort to develop autonomous drones, drones that can fly on their own, go to a target on their own, identify that target, and release a weapon, that is much harder tech than the kind of drones that are being used in Ukraine. And that gets into the really kind of ethical, existential questions that we're now seeing as the fault line between the Pentagon and Anthropic, this buzz phrase of fully autonomous weapon systems, which the UN Secretary-General, for example, has been arguing against since Project Maven, saying these are politically unacceptable and morally repugnant systems. That's exactly what the US is now trying to innovate using AI. And it's really hard. In the book, I'm able to show that AI targeting, automatic target recognition, has not been successful on these drones. Even the Maven algorithms were repurposed, from looking for things like tanks on, let's say, satellite imagery, to trying to develop computer vision that could sit on the drone. And it was fed with a huge amount of data collected from the Indo-Pacific region, essentially looking at Chinese vessels, Chinese destroyers, and they showed that sometimes these algorithms could recognize the Chinese destroyer. And that was shown even to the chairman of the Joint Chiefs of Staff, Dan Caine, last summer. So they were hoping that that's what would work, but it fell flat to some extent, because the integration to get AI onto multiple different drone platforms is very difficult. The algorithms are never delivered on time. You've got bureaucratic infighting; you've got the commercial companies saying, well, our algorithms are fine, thanks, we don't need to work with Maven. And what that comes down to is whose data you want driving these algorithms, and how much, and how good it should be. Because in a fight in the Indo-Pacific, it seems that people expect the risk appetite to be much higher than in a fight on land, and certainly a fight in a city where we know civilians, to your point on Gaza, would be harmed.

 

Jon Bateman: That's interesting. If you could just elaborate on that. So is the idea that if we're fighting at sea and in the air over the Western Pacific, the fact that an algorithm makes some errors or could miscategorize something is less of a concern, because the objects that it is coming across are more likely to be a hostile enemy to begin with, and there are fewer civilians in sight? Is that right?

 

Katrina Manson: That is the argument that's been put to me. And if it's ever as convenient as: we're now at war, everything in this box, this section of the sea... Everyone's had time, if you're a commercial ship or a little civilian boat, to get out of the way. Everything here is essentially enemy territory or enemy waters. Our algorithms are good enough to see there's something there, even if they can't detect which specific ship, or they might lose track of it. We will, in a wartime situation, be prepared to use those. That's the kind of point the US is at, developing and trying to get better, but that is the kind of risk margin that they may be dealing with. At that point, you want to be really careful of your own vessels and your own forces, because you want to make sure that they're not at risk of being detected and targeted themselves.
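A quick back-of-the-envelope Bayes calculation shows why that geographic-box logic matters: with the same imperfect classifier, the trustworthiness of a detection is dominated by the prior odds that anything in the area is hostile. The detection rates and priors below are invented purely for illustration.

```python
# Why the same flawed algorithm is more tolerable in a declared
# exclusion zone than in a crowded environment: Bayes' rule with
# invented numbers. tpr = true positive rate, fpr = false positive rate.
def p_hostile_given_alert(prior_hostile: float,
                          tpr: float = 0.9,
                          fpr: float = 0.1) -> float:
    p_alert = tpr * prior_hostile + fpr * (1 - prior_hostile)
    return tpr * prior_hostile / p_alert

print(p_hostile_given_alert(0.8))   # exclusion zone: ~0.97
print(p_hostile_given_alert(0.01))  # crowded waters: ~0.08
```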

 

Jon Bateman: So maybe, if we're talking about the final frontier of algorithmic warfare, which would be fully autonomous targeting on the weapon itself, we're at a level where the US might choose to deploy something in this more defined geographic box that it's confident is a war zone, but not in a more complex urban environment. Maybe you could tell the audience a little bit about the autonomous weapons systems under development that you reported on in this book, which I had never heard of before and I don't think were public before. You discovered two separate systems that are being developed right now.

 

Katrina Manson: One is called Goalkeeper and one is called Whiplash. Goalkeeper is an aerial drone that can be launched from the side of a ship. It was meant originally to defend a ship, but they innovated to try and see if it could become offensive and go after enemy targets. That has been developed over years; there is some previous reporting on the existence of Goalkeeper, but I think not on the way that it's being used and developed. The Office of Naval Research and others tried to develop these essentially in-house, using some commercial tech, but on a kind of government-in-house basis. They tried out a rudimentary version of this in Ukraine. The CIA, I report, got these into the region to try them out and see how good that software really can be. The second one, which was also tried out on the Black Sea, is Whiplash. And Whiplash is this quite extraordinary idea. The US faces a crisis in shipbuilding from a military point of view. China has many, many more ships and far more capacity to build ships. So even if there's a crisis and US industrial capacity has shifted to building ships, it will not be able to catch up or compete with China. The Trump administration is trying to do something about that at the moment, but it can't catch up. Where the US does have an advantage globally is in the production of jet skis. This is a kind of pastime that Americans enjoy.

 

Jon Bateman: Have you ever jet-skied before?

 

Katrina Manson: There you go, there you go. I think I've jet-skied in the Congo River before and got stuck on a sandbank, so I know they're precarious; you can get upturned. Somebody highly ingenious and unusual tried to develop these jet skis into autonomous, weaponized drone boats. And for the past few years, they have been adding autonomous software plus weapons into jet skis. And I spoke to someone familiar with this effort who said, you know, it's kind of neat. We've got a lot of jet skis. It's neat that we can weaponize them. Which, again, is a kind of extraordinary insight into the way the US is thinking about developing future weapons of war. A rudimentary version of this was tried out in Ukraine, in support of Ukraine, I report. And then there was a jet ski that did wash up on the shores of Turkey, and I report in the book that some in the Pentagon were very concerned that this would out the effort. Those two programs continue, but they're no longer named publicly as Goalkeeper and Whiplash.

 

Jon Bateman: So the jet ski program, I guess the artist formerly known as Whiplash, is kind of fascinating for a bunch of reasons, one of which is that it seems to flip the script of how people often imagine a US versus China conflict playing out in the future. One military trope that you'll hear about this competition is that the US is stuck with small numbers of highly expensive, exquisite platforms like aircraft carriers and large naval vessels, while China, like other adversaries, is moving toward a larger number of lower-cost vessels and assets that are more attritable. And so they could kind of swarm us and win a battle of quantity over quality, perhaps augmented by AI decision-making. What is intriguing about this jet ski concept is that it seems to almost want to up the ante and say, well, actually, it's the U.S. that will be the producer of a mass new type of cheap, attritable drone that we will then use to overwhelm China in a naval conflict. And it brings me back to the original vision that Drew Cukor and Deputy Secretary of Defense Bob Work had, that the ultimate end goal here is to find some kind of edge or offset, to get one over on China, a near-peer or peer military, in the most challenging circumstances imaginable. Do you think the Pentagon and its contractors are succeeding in that effort? Do you think we're moving toward a world where AI systems, whether it's Maven Smart System or some of these autonomous weapons under development, will be able to tangibly change the course of a potential conflict like this, and actually give the Bob Works of the world that edge they're looking for?

 

Katrina Manson: I think there are a couple of things. One is this fear that if you don't have AI, you lose. So as hard as it is to develop and integrate, to work out the workflows and figure out the policy issues around it, however fraught that is, the fear that China will have it was partly, I think, invented a little bit to encourage and allow the US pursuit of it, because the US documents certainly start talking about it before China's military documents do. But the US of course knows China can develop; its innovation cycles are weirdly much faster than the US's, given the US is the kind of innovation capital of the world.

 

Jon Bateman: And just to say, a very common pattern in my experience is that our innovation is driven by fears of what the adversary is up to, but ultimately we're the ones to field some of these new icky platforms sooner than they are.

 

Katrina Manson: And you can go back to the bomb with that, both development and usage. And so it's worth very heavily tire-kicking the extent to which it's chicken or egg, and whether the US is really at the forefront. Bob Work certainly wanted the U.S. always to be first in developing and innovating new weapons of war, because of this fear that ultimately China and Russia together have the numbers of people, have the scale of industrial output; that in order to dominate and provide deterrence, the US always needed to be ahead on these weapons. The other thing that speaks to your point about those fears, and what China may or may not have, is that back in 2018, when the first Trump administration brought out its new national defense strategy, when you were in, the fear was that China had been studying the US war machine for 10 years, looking for weak points. And a weak point was the aircraft carrier. I mean, this thing that was meant to be the item the US had more of than anyone else in the world, that could deliver the fleet and forces anywhere, was actually a sitting duck under some scenarios. And likewise, the dependence on satellites, if attacked, becomes very dangerous. And that 2007 anti-satellite test that the Chinese did, that was a wake-up moment for the U.S. as far back as then. They wouldn't necessarily be able to rely on space dominance, and wars in space may happen. I think the US probably did try to really hold back that idea of weaponizing space, but it's certainly happening now. And Starlink and Starshield are answers to that in low Earth orbit. Multiples of things, so that if some are downed the system as a whole can keep going, is exactly what something like a jet ski or multiple satellites is about. The jet ski idea, if it works... I think there's a very big question about whether any of these technologies can be reined in to such an extent that you'd be comfortable loading one up with a weapon and sending it off and making sure it didn't turn on you. I mean, I do report in the book one time where an autonomous boat, in a testing circumstance, so we give a bit of grace for tests, but the autonomous boat upturns the captain of another boat that's towing it out to sea, because it accidentally switches into autonomous mode, loses control, turns, and actually starts coming for the captain, who's then flailing in the water with a rope potentially going to...

Jon Bateman: Very dangerous situation.

Katrina Manson: Very dangerous, yeah. And of course, the testing is stopped, they come up with safety procedures, but you really don't want to create autonomy that is going to hurt your own people or go rogue against any kind of enemy.

 

Jon Bateman: This gets at one of the key ethical dimensions that you bring out in the book, and one of the big ideas that you conclude with, which is: who should be trusted with this technology? Who is a responsible custodian of it, or is anyone? In the American context, one of the ways that battle has been fought is over whether the government and the Defense Department should be the ones in the driver's seat, so to speak, deciding what products are being developed and how they will be used, versus companies in the private sector, Google, Palantir, Anthropic, that are developing some of this technology and at times have had divergent views on what and whether and how it should be used in war. There's of course a celebrated case, which you narrate in the book in great detail, when many Google employees became uncomfortable with getting involved in the business of war and effectively pressured their company to drop out of Project Maven. Eventually, Google and others kind of came back in. But now, if you're following the news, people will be aware that a very similar episode has recurred with Anthropic. Now a leading LLM developer is going through the same ructions, with a new Pentagon, a new Secretary of Defense, a different war. What do you make of this current controversy about Anthropic, in the context of everything that you reported with Google? And who is it that can be trusted with these technologies?

 

Katrina Manson: The interesting thing about this one is that Anthropic was leaning very heavily into classified work and wasn't concealing this from its workforce. It had a very explicit kind of moral contextualisation for the way it wanted to do it. And the chief executive, Dario Amodei, has leaned further into that, producing extremely long essays talking about ideas like 50 million geniuses living on an island, what would they want AI to do, coming up with these sorts of quite extraordinary scenarios to try and explore exactly what these moral conundrums are. And as we know, his red lines were fully autonomous weapons systems, LLMs not being ready to be used in this, there needed to be more testing, and mass domestic surveillance. Now, I think we're still trying to understand what the precise concern around mass domestic surveillance is, because the US isn't supposed to do that, but from prior experience with the Snowden revelations, we know how much can be learned about the American population, and that public data can be layered together in order to form a very composite, intimate picture of anyone. When it comes to the fully autonomous weapon systems, I think you have to consider the US position for what they've said. One is this castigation of the company as kind of radical left or left-wing nut jobs. Clearly there's a political dimension to this. Dario Amodei, a Kamala Harris supporter; lots of the national security team have come from the Biden administration. There is a political divide there. There's a completely different way in which both institutions, if we think of them that way, are presenting themselves. That's one part. Then there's the actual existential argument they've made, the red lines the Pentagon simply cannot work around: putting LLMs into fully autonomous tech. It seems that they were close to negotiating something, and to some extent the negotiations came down to the word appropriate: what is the appropriate level of oversight, responsibility, judgment? The Department of Defense policy on autonomy is that there should be appropriate levels of human judgment over the use of force, which is vague, and implies supervision rather than human decision-making. So if you are the company that is providing that and you're not comfortable with that deliberately vague definition, there is probably scope to question: are we going to contribute to something that is anathema to our own beliefs? And that hasn't been worked out yet, but nor is the technology fully there. You know, Whiplash and Goalkeeper aren't quite there. The Pentagon is pursuing at the moment a hundred-million-dollar prize challenge that is pitting frontier AI labs against each other to produce something even more astounding, which is voice-controlled autonomous drone swarming. The idea is that if you imagine an expeditionary force having a bunch of drones in the air and on the sea that can talk to each other and say, go left, that's exactly what they want them to be able to do. The number of things that could go wrong is so high.

 

Jon Bateman: It's so interesting when you think about the Google episode versus the Anthropic episode. On the one hand, you could say, well, nothing's changed. We're still fighting the same fight now that we were fighting back then. We still haven't worked out the line between government and private-sector control. On the other hand, as you point out, Katrina, the red line that is being protected in each case has really narrowed. I mean, Google's initial red line was, we don't want to be involved in anything that directly supports the taking of life. They gave up on that. And Anthropic is not trying to hold that red line. They're trying to hold a much, much narrower red line: they are comfortable directly supporting the taking of a human life. What they're not comfortable with is this very specific way of doing it, involving fully autonomous weapon systems. So in a way, you could say, actually, a lot has changed between then and now. The government is gradually winning more of this terrain, and the red lines are getting narrower and narrower. The other thing I learned from your book, which I'd really not thought about before, is that there's the version of this problem that a government official would have, of how to rein in a kind of moralistic defense contractor like Anthropic that maybe has more red lines than you do, right? But there's also a completely different type of problem having to do with classic vendor lock-in with a company like Palantir, which is very gung-ho on all of this work. I mean, Alex Karp, you tell some remarkable stories in the book about how almost gleefully bloodthirsty he is to support the Pentagon in all of these endeavors. And yet with him and his company, there's a much more mundane concern that maybe they have become too big to fail, so to speak. In other words, too central to too much of what the Pentagon is doing, such that it might actually be difficult to continue to hold them accountable, or have fair competition, and be able to get best-in-class capabilities from maybe another innovator. What do you make of that problem?

 

Katrina Manson: It's fascinating that Palantir still divides the US military establishment, the US defense establishment. Obviously, back in 2016, they sued to get contracts with the Army. They sued the Army, and that created a lot of bad blood within the Army, but also got Palantir what it wanted, which was these contracts. And people like Cukor heavily believed in the Palantir product, because they experimented with it and saw that it worked. And people on the front lines in Afghanistan told me stories about Palantir information, just that production of data, getting it to them, just the memory of where incidents had happened, saving their lives. That divide from back then, interestingly, carries on, I think now a little bit more over style. They are still seen by some quarters as extremely arrogant, too arrogant for the liking of some. Others see Palantir as the solution. And I think one person said to me, eventually you've just got to pick a horse and ride it. And that is a sense of, if you are going to have a joined-up system, you've got to work out who to work with. The way they've tackled it on cloud, of course, which is the underpinning for all this, it's not just about Palantir. I mean, cloud is actually the thing that you've got to deliver to knit up those systems and networks. Obviously the US tried it once with one provider, a big fight over contracting there, a lot of influence games, and it has ended up pursuing four different companies as part of a cloud contract, at different stages, levels, different classifications. Now, whether you can work with a Palantir and a rival and knit together that kind of platform, or whether you just have to choose Palantir, or, you know, 10 years later recompete it and see if someone else has come up and they get it, those are the kinds of questions for the Pentagon to think through. But certainly there are people who are concerned that Palantir is so dominant within the Department of War, as it's calling itself now. But also, they need to develop the tech. It's almost like, if you imagine a Tower of Babel for warfighting: you try and put all that effort into one system and eventually, question mark, it comes tumbling down, or can they build the tower? Can they create the single language for the Department of War? There's also, adding to the public comments, even Drew Cukor said he thinks some of Alex Karp's public comments about killing and the ease of killing are unfortunate. And Cukor is someone who thinks Alex Karp is wonderful. So there's no one better placed to try and rein him in, if that's the right language, than someone who admires Palantir, who has fought for Palantir, to really bring that debate to what is the right way to talk about how easy it is to kill other people, especially if you're a tech company: you're not directly working in service, you're not directly risking your neck, as some other former service members raised to me. So he's still a polarizing figure, not only for the wider public and for investors, but also within the US defense environment.

 

Jon Bateman: Yeah, again, putting on my Pentagon hat for a moment: if I've got an Anthropic problem, the ethical contractor that I would like to rein in, it seems like history shows that eventually the Pentagon probably will win many of those battles. It has the tools to compel companies to come on board. But the Palantir problem is a much more long-standing problem, the problem of the vendor that becomes too essential and then either jacks up the prices, stops innovating, or doesn't give you what you want. That could become the real systemic vulnerability over time, because it's one that the Pentagon has tried and failed to tackle in many other arenas. I want to ask you a big-picture question here about the pace at which the U.S. is incorporating AI, because I sometimes hear two very conflicting narratives about this. On the one hand, people who are concerned about the ethical implications of automating war will often say that the US military is heedlessly racing to implement these tools before they've been tested, before they've been debated by society, before we really know the implications, and that there's a fear of doing too much too fast. On the other hand, people in and around the military often have the opposite fear: that we are so bureaucratic and clunky, that we so struggle to change institutional and cultural patterns, that the military is the last to incorporate new technology, and that it's likely that in the next war we'll be way behind the curve on implementing AI. Which one of these two dueling narratives matches up with what you've seen in your reporting?

 

Katrina Manson: I think both can be true. So I've certainly spoken to service members who are furious that they have to lock their iPhone away before they go into work, and then they end up using tech far less sophisticated than the iPhone. I've spoken to people who claim to me that they left the CIA because they could get better spy tech from Amazon than could be delivered by their own agency. So that fury, and that sense of, we don't have the stuff that's going to keep us alive or get us to what we want to achieve, continues. And this administration, I think more than any, has publicly said, we will go faster with AI, we're going to accelerate, every single commander is going to have an AI person. But they said that one year after coming in, and they've got a lot to do, and now it's politicized. So I think any time it becomes politicized, you probably slow down the bureaucracy, because you have to deal with that rather than, well, let's test the AI and get it out. And I know there are a lot of people from Project Maven who have gone back into the department to try and realize this vision for accelerating AI adoption, which comes with a requirement for testing. What that testing looks like on the inside, we just need examples of it. And that's why I wrote the book. Because to have the public debate, the other part of your question, is this hurtling forward far too fast, we don't know unless we have concrete examples of where an algorithm is reliable or not reliable, and the decisions that they've made about when not to rely on AI. Not just, oh, it's okay, a human will always make the decision. That is not sufficient to understand the way in which AI is changing workflows and culture, and a propensity to allow a machine's decision or a machine output to take over what a human might do as those decision-making cycles become shorter. Just how short? Was there enough time to make a decision? Would you second-guess the machine first or yourself first? That kind of stuff needs to be made public, because that's where the debate happens, and where members of the military themselves have concerns, never mind human rights campaigners.

 

Jon Bateman: Yeah, so there's the fog of war that the military itself faces, where during a conflict it's hard for them to tell what's going on, what's up, what's down. And then there's a kind of fog of democratic oversight over a war, the way in which the public itself has a difficult time discerning what the military is doing. I'll say for myself, you know, I study AI for a living, I was a civilian advisor in the Department of Defense, and I am really not sure exactly how AI is being used in the war in Iran right now. I get glimpses through reporting from people like you and from the public statements of officials, but your book reveals new details of the use of AI in operations that took place years ago. Do you think it will be years in the future before we develop much of a picture of how AI is being used right now? Because this moment right now is what many people think is the kind of hinge moment, where the use is being dramatically scaled up in a major U.S. conflict.

 

Katrina Manson: That focus on accountability and transparency is key. And people like me will keep asking questions, and people who actually get access to classified information, through the official ways in Congress, will, I expect, ask questions. Whether the answers are forthcoming or not is presumably a function of process, of correct process, and also of pressure. Understanding the way in which AI has been used in this war matters, given the Central Command commander has said publicly that they are using AI tools, the Central Command spokesperson has told me that they're using a variety of AI tools to help generate points of interest, and the commander says that AI is helping reduce decision-making processes from days and hours down to seconds. Those questions will keep being asked. That's the bit that is my job. And I really look forward to the answers.

 

Jon Bateman: Well, Katrina, I look forward to you continuing to provide people the answers. And I couldn't recommend your book more highly. It's dramatic, it's vivid, it is novelistic, and it poses profound questions about the present and future of war that we will be wrestling with and watching unfold for the rest of our lives. So thanks for taking the time with us today, and I can't wait to see what you report next.

 

Hosted by

Jon Bateman
Senior Fellow and Co-Director, Technology and International Affairs Program

Featuring

Katrina Manson

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

