Podcast

Peril and Promise in the U.S.-China AI Race

by Christopher S. Chivvis and Colin H. Kahl
Published on November 21, 2025

The contest to stay on the leading edge of AI is rapidly taking center stage in America’s strategic competition with China. But what does it actually mean to beat China in AI? Does the United States have the right strategy for navigating this contest? Are policymakers overstating the threats posed by China, or by artificial intelligence itself? And can Washington and Beijing cooperate on areas of high risk even as they compete intensely elsewhere?

In this episode of Pivotal States, Christopher S. Chivvis speaks with Colin H. Kahl, Director of the Freeman Spogli Institute for International Studies and former Under Secretary of Defense for Policy in the Biden administration, to unpack the peril and the promise of U.S.–China competition in AI.

Transcript

Note: this is a rush transcript and may contain errors.

Colin Kahl:

I think a major concern about China dominating the global AI stack is that it will embed Chinese AI models, with CCP characteristics built into them, in a way that actually will not be conducive to our way of life.

Chris Chivvis:

And I'm not saying that there isn't a competition. You make a convincing case that these are all areas where it would be beneficial to the United States to stay ahead. But how far should we go in order to ensure that?

Colin Kahl:

People sometimes talk about this as turning bits into atoms. And this is a place where I think China is most poised to completely eat our lunch.

Chris Chivvis:

I'm Chris Chivvis. Welcome to Pivotal States, where we examine the strategic challenges and opportunities facing the future of American foreign policy.

When historians look back at the 2020s, there are two trends that are going to stand out. One is the changing nature of the international system and the emergence of a more multipolar order. China is, of course, at the center of that trend. The other trend is the emergence of AI, which stands to reshape our societies, our economies, and our politics. What we're going to do today is talk about the intersection between AI and competition with China.

This is a big issue right now. We've got business leaders on one side who are arguing that if the United States doesn't rapidly and urgently lift regulation on our own development of AI, we're going to fall behind and lose a race with China. On the other side, there are lots of people warning that an AI free-for-all could create severe political, social, and economic risks for America itself. This is a real dilemma for our time.

With me to discuss this today is Colin Kahl. Colin is a professor at Stanford University and importantly served as the policy chief in the Pentagon during the Biden administration. He's also one of the sharpest minds in foreign policy. I've had the pleasure of having many conversations and learning an enormous amount from you, Colin, over the course of the last, gosh, I think maybe at least 10-

Colin Kahl:

A long time. Don't tell people, right? Tell them how old we are.

Chris Chivvis:

A couple of years.

Colin Kahl:

Yeah, a few years.

Chris Chivvis:

So really delighted to have you here on Pivotal States.

Colin Kahl:

It's great to be with you, Chris. I also learned a lot from you, so I'm looking forward to the conversation.

Chris Chivvis:

Fantastic. So I was thinking one way of looking at our field right now, the field of foreign policy analysis, is: where do you line up on China? Do you think that China is an important issue that the United States needs to deal with? Is it a very important issue, or is it the only thing that's going to matter in our foreign policy and grand strategy? So where do you come down on this?

Colin Kahl:

It's a good way to frame it, because it's not actually a continuum between "they don't matter at all" and "they're everything." There's pretty much universal agreement that China is one of the two most consequential countries in the world. The U.S.-China relationship, whether it's competitive, cooperative, managed, or something else, is the most important bilateral relationship in the world.

I guess I fall pretty far on the line of it's not the only thing because there are other catastrophic and existential challenges I worry about. I worry about climate change. I worry about the externalities from emerging technologies. I worry about trends in global democracy and democracy here at home. So it's not the only thing, but to the degree, as you said at the outset, our world is increasingly defined by resurgent geopolitical competition. Ground zero for that competition is the United States and China, and we have to get that competition right.

Chris Chivvis:

So it's great that you use the term competition because that's one of the things that I wanted to ask you about. This is a term that was used pretty widely to frame the Biden administration's strategy towards China. But when you think about it, it's kind of an unusual term for international relations. We talk about conflict, war, adversity, but with China, we've been talking about strategic competition or competing with China.

What is it that we mean when we say we're competing with China? How is that different from the other forms of international interaction that are out there?

Colin Kahl:

Yeah. I think it's important. Look, I think the pivot actually started in the first Trump administration.

Chris Chivvis:

Okay. Towards competition, you mean?

Colin Kahl:

Yeah. If you go back to the 2017 National Security Strategy and the 2018 National Defense Strategy, I'm not completely convinced Trump himself as president was bought into those strategies, but certainly the people around him were. And it really did put great power competition back on the map. Until those documents, U.S. strategy had largely been defined since 9/11 around the global war on terrorism. And even Barack Obama, who wanted to pivot away from that, found his administration focusing very much on the campaign to defeat the Islamic State and the wars in Iraq and Afghanistan, and everything else.

So I actually think the pivot starts in Trump One. And I think the recognition is that China is the most relevant strategic actor in the sense that it is the only country in the world that is capable of marshaling the military, economic, diplomatic, and technological wherewithal to displace the United States as the world's most influential country. And that would have profound implications for our interests, for our alliances, for our values. And so, that really started under Trump One. And I see the Biden administration as more of a continuity in terms of that intellectual frame.

I mentioned the 2017 and 2018 strategy documents under Trump. If you look at the 2022 National Security Strategy and National Defense Strategy put out by the Biden administration, it labels China as our most consequential strategic competitor. The National Defense Strategy calls them the pacing challenge, that is, the military challenge against which the Pentagon has to pace its progress and activities.

I think what that competitive frame means is just that there is a jockeying for relative advantage. Like when we compete in a football game or a basketball game or soccer, there's a winner and there's a loser. And there may not be an absolute winner and an absolute loser, but it is about a relative comparison. And I think it evokes a more realpolitik view of the world as being not completely positive-sum, but full of zero-sum moments and jockeying for relative advantage. And a view that I think is common in Washington that the country that achieves that relative advantage over the long term will be the country that's most secure, most prosperous, and most capable of defending its way of life, and that we'd rather that be the United States.

Chris Chivvis:

That makes a lot of sense to me. I guess would you agree that when we use the term competition, we're talking about a constant back-and-forth between the United States and China rather than sort of a game where there's going to be a clear winner at the end of it, at least in the near term? Because for me, that's what's important about the term competition.

Certainly since the end of the Cold War, and actually including the Cold War, we in the United States have lived in a frame of mind that says the United States is going to definitively win a series of different contests. We won the Cold War, we were going to win the Iraq War, we were going to win in Afghanistan, we were going to defeat Al-Qaeda. But when we talk about competing with China, we're talking about something slightly different, that there isn't necessarily an end to it. We may end up better off and we want to be in a more competitive position, but there's not an end game because China's not going anywhere.

Colin Kahl:

Yeah. I think that's a really important distinction. So I think competition is much more about process and approach. You are competing, it's more verb than outcome.

Chris Chivvis:

Yeah, exactly.

Colin Kahl:

It's the process of competing and an approach to competing, but I think the outcomes are not well agreed on. So while there's a widespread bipartisan consensus, maybe too much of one, that we need to compete vigorously with China, there's no agreement about what the end state is.

There are some, like my friend Matt Pottinger, very smart guy, very hawkish on China, who has written that the end state is essentially the same end state that we had during the Cold War, which was Kennan's end state, which is to bottle up the communist adversary and wait for them to collapse. I think that most analysts don't think that is plausible as an outcome.

Chris Chivvis:

Certainly not any time soon.

Colin Kahl:

And actually, in pursuing a maximalist policy like that, we may stumble into a conflict that could be catastrophic. I would put myself more in that category, not because I wouldn't see it as beneficial in a lot of ways if China were freer than it is now. And I'm not a fan of the Chinese Communist Party, but China's one-fifth of humanity. They are a real, dynamic, vibrant society. Their government is authoritarian, but also extraordinarily competent and with a lot of capabilities. They're not going anywhere.

And I think a policy that presumes that competition will drive China into the abyss is infeasible and probably reckless. I think more of what we need to do through competition is not to change China, but to remain in control of our own destiny, to secure our own interests, to defend our own security, to enhance our own prosperity, and to defend our way of life. These are just the bread-and-butter national interests that all countries have, but I think of them from an American perspective. And so, competition is: how do we compete with China in a manner that secures those objectives, recognizing that China's not going anywhere?

Chris Chivvis:

I'm glad that you mentioned interests because that's what I wanted to ask you about next.

When you think about America's interests in this competition with China, what do you think of? What are some of the more specific things that really are important to us? Obviously there's a whole range of things that we would like to have, but what's really vital to the American people in this competition?

Colin Kahl:

I think any national security conversation starts with the defense of the homeland. And we know from public disclosures that China has implanted malware across a lot of our critical infrastructure here at home, probably for contingencies. That is, if there's ever a war with the United States, they want to be able to turn off the lights. So we have to be mindful of that threat. China's going through the most breathtaking nuclear modernization effort in the contemporary era, trying to achieve parity with the United States and Russia. That is a strategic threat to the homeland. So there are direct homeland threats.

I think more directly though, China is asserting contested sovereignty claims in the first island chain in the Indo-Pacific that run headlong into our treaty commitments in Asia. So think about their claims in the South China Sea, their claims in the East China Sea, their claims across the Taiwan Strait. And so, I think there's a perception that China is a revisionist power that at the very least is looking to expand its borders beyond what is currently internationally recognized, in a way that would directly bring them into conflict with U.S. allies. So that's a security risk.

Beyond the hard security questions, I think there is an emerging view, an emerging consensus, that the hope of the 1990s and the aughts, that by integrating China into the liberal international order, in particular the economic order, we would normalize China and make them a responsible stakeholder in that order, and that over time it would liberalize China, maybe politically, but hopefully at the very least economically, has not panned out. I think Washington's largely been mugged by the reality that we didn't have that convergence. I think you had Tom Wright on a previous podcast. Tom wrote a book where he talked about the convergence hypothesis, this belief that integrating countries like China would converge them with essentially the Western model.

And instead, we've had a divergence outcome where China's become more authoritarian and they've become wealthier in a way that a lot of Americans see as predatory, predatory of American jobs, of American manufacturing, of American intellectual property. And so, I think there is a degree to which competition with China is viewed as an issue of economic fairness and leveling the playing field, but also a competition for the commanding heights of technology. And I know we're going to talk about artificial intelligence a lot. But China has put the marker down that they intend to dominate the technologies that will be the most consequential of the rest of the 21st century. AI, but not just AI: space, biotech, clean energy, quantum.

Last point, I'm sorry for going on for so long:

A big part of American interests is defending our way of life, our democracy. Now, we have our own challenges with that, frankly, here at home. China and other authoritarian powers are not the only or primary challenge to that democracy, but we now live in a world where our entire way of life is mediated through digital environments. And how values are embedded in the internet, or in technologies that shape our exposure to information in digital space, like artificial intelligence, will profoundly affect how we understand the world, what we value in the world, and how much we give in to things like surveillance, censorship, and propaganda.

And maybe we can come back to this later on AI, but I think a major concern about China dominating the global AI stack is that it will embed Chinese AI models, with CCP characteristics built into them, in a way that actually will not be conducive to our way of life.

Chris Chivvis:

Help me to understand concretely what it means to compete with China on AI, because there are multiple different dimensions to this.

Let me propose that there are at least three dimensions. One is being as innovative as possible, staying ahead on the latest technology, something that we in the United States do extremely well. Another is integrating AI into your economy and your society in ways that increase productivity and hopefully make you wealthier and happier as a nation. And then the third is what people have been referring to as AI diffusion, which is something you were talking about a minute ago: AI in other countries, the extent to which the United States or China is able to control the AI systems of other nations.

So are those the right three categories to think about it, or are there more?

Colin Kahl:

Yeah. I largely agree, and we should walk through all of those.

I'd probably think of it in terms of five categories. A fourth category would be the race to adopt AI into our governments, in particular our national security departments and agencies, which I think is different from economic integration. So that might be a fourth race to your three. And then there's a fifth race, which is probably a race we should avoid, which is a race to the bottom. And we should come back to that.

Chris Chivvis:

Okay.

Colin Kahl:

But if it makes sense, maybe we can start on the first race.

Chris Chivvis:

Please, yeah. What does that mean concretely, and why should people out here in California who don't work in the AI industry care?

Colin Kahl:

Yeah. First of all, the Trump administration released its AI Action Plan back in July, and there's a lot of goodness in that document, but the title of the document is actually winning the AI race. And so, there's a lot of obsession here in Silicon Valley about winning that race. And that's largely defined in terms of which company is going to win the race. Is it going to be OpenAI? Is it going to be Google DeepMind? Is it going to be Anthropic? Is it going to be xAI or Meta?

Chris Chivvis:

There's a real competition.

Colin Kahl:

There's a real competition. But I think in Washington, the race is framed more geopolitically about who wins the race between the United States and China. I think mostly though, when people talk about that race, they talk about the first dimension you mentioned, which is innovation, and that's really the race to the AI frontier.

Artificial intelligence refers to machines and software with the intelligence of humans or beyond. And so, think about the ability to sense the world, understand and make sense of the world, learn, make decisions, and take actions that shape the world. All the ways in which we use intelligence. We're trying to replicate this through algorithms and software. So that's what AI is.

I think the race to the frontier is largely a race to what some in Silicon Valley call artificial general intelligence, or AGI.

Chris Chivvis:

Right. This is superintelligence.

Colin Kahl:

Well, we will get nerdy here for a second. So first of all, the terms artificial general intelligence and artificial superintelligence, there's no agreement on what those terms mean, and sometimes they're used interchangeably. But I think in the valley here, people do think about them in different ways. So I think the most common way to think about AGI, artificial general intelligence, is AI models that are as smart as the smartest humans across a wide range of cognitive tasks, especially cognitive tasks that are relevant to the economy.

Whereas artificial superintelligence is AI models that are smarter than the smartest humans. So, AGI would be extraordinary because it's what Dario Amodei, the CEO of Anthropic, calls "a country of geniuses in a data center." And by that he means, what if you could have models that were as smart as the smartest humans, but in every field, and they were capable of solving problems at the height of human intelligence, collaboratively, at machine speed? Think of how amazing that would be as an oracle, as a problem solver, as something to create change in the world.

Artificial superintelligence is more what we think about in science fiction, like HAL 9000, or Samantha from the movie Her, or Skynet from the Terminator movies: basically a new race, a new species. An entity that is smarter than the smartest humans, and in fact would think in ways that human beings might find quite alien.

Chris Chivvis:

Okay. So that's a useful distinction and I think sometimes those terms are used interchangeably, but conceptually it's a valuable distinction. But then let's bring it back to China. So what are we talking about here in this area?

Colin Kahl:

Sure. We're talking about, really... So the first race, and the race that people are most obsessed with, is the race to the frontier. People are kind of measuring it.

Chris Chivvis:

Is there a frontier or is the frontier always...

Colin Kahl:

Well, this is why I said a race toward the frontier. And maybe a race towards the frontier is better than a race to...

Chris Chivvis:

Yeah. The frontier is always moving out, right?

Colin Kahl:

It's a moving target, but a race to some version of AGI or ASI. The scorecard of that is measured largely in two ways. One is which companies and countries have the most capable AI models. The other is which countries have the most high-end computing resources, shorthanded as compute, to throw at training those models, but also, increasingly, at inference, that is, running those models to solve problems. And measured by that, the United States has a meaningful lead. It's a narrower lead than it was a couple of years ago, but it's meaningful.

The best models put out by OpenAI and Anthropic and Google DeepMind are still better than the best Chinese models, but that difference is probably measured in months, not years. If you had asked me this question a year or two ago, I would have said that the gap is one or two years, and now it's months. So I think what that shows is that China has enormous amounts of talent, enormous amounts of data, and they're highly motivated. And they're probably six months behind the leading U.S. AI labs, but they're nipping at our heels. They're fast followers.

Where China has a bigger problem is on compute. And I'll just give you one statistic. About 75% of the world's high-end AI compute is either in the United States or run by American hyperscalers. China's at about 15%. All right. So that's a huge difference, 5 to 6X. The best indigenously made Chinese chips are 60 to 70% as good as the chips NVIDIA was making back in 2022. So the gap in models is measured in months. The gap in chips and computing power is probably measured in years, but it does mean for the moment, the United States is ahead at the frontier.

And we should transition to this. In almost every other race that we mentioned, the race is either a lot, lot closer or China's actually eating our lunch.
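A quick back-of-the-envelope sketch of the compute arithmetic quoted above; the only inputs are the shares Kahl cites, so this just makes the "5 to 6X" figure explicit:

```python
# Shares of the world's high-end AI compute, as quoted in the conversation.
us_share = 0.75     # in the United States or run by American hyperscalers
china_share = 0.15  # China's approximate share

# The "5 to 6X" gap is just the ratio of those shares.
ratio = us_share / china_share
print(f"U.S. compute advantage: roughly {ratio:.0f}x")  # -> roughly 5x
```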

Chris Chivvis:

Okay. I want to go to the other races. But I want to try to make this as concrete as possible because ultimately what we care about are the vital interests that you discussed at the beginning of our chat here. So, what is the concern other than just maintaining some type of advantage here? Is there something bigger than that or are we just trying to maintain the advantages that accrue from having better AI, from having more technologically advanced AI? Is there something beyond that?

Colin Kahl:

I think there is a sense, and I guess people could debate this, but I think there's widespread agreement, that artificial intelligence represents a novel general-purpose technology. That is, it's not one thing, it will be laced into all the things. And so, it's more like electricity or the internet than simply a dual-use technology. And the reason why I think there's a lot of focus on the race from a geopolitical perspective is that the countries that do the best at harnessing AI as a tool, but also as a driver of other forms of innovation and productivity, will have the strongest economies and will be the ones developing other technologies. It could be breakthrough drugs, it could be breakthrough energy technologies that are inspired by human collaboration with advanced AI. It could be new missiles, it could be new drones, it could be new ships. And so, there are real hard-power resources that come from being at the lead of AI.

I think there's also a sense, though, that it'll matter for global soft power, and we can come back to this when we talk about diffusion. But to the degree that AI will be like electricity, and the largest AI labs and their hyperscaler partners will be essentially like utility companies providing this thing as a service all over the world, it matters very much whether people around the world are experiencing the digital environment through American models running on American-designed, but probably Taiwan-built, semiconductors in data centers that are built by American hyperscalers. So think AWS or Microsoft or Google Cloud. As opposed to an AI stack that is running Chinese models on Chinese chips, with all the data going back to Beijing and a bunch of strings attached from the AI stack that China helped countries build. That matters for soft power, for influence.

Chris Chivvis:

So here, you're already talking about one of the other categories, which is what often is referred to as AI diffusion. I'm tempted to say it's the most diffuse of the different categories for me to understand. But you've just given a very good explanation of why we might care about it. Let's try to make it concrete and pick a country, Nigeria, for example.

Why does it matter to the United States or how much does it matter to the United States that Nigeria builds its infrastructure around US AI as opposed to AI from China?

Colin Kahl:

I think there's a bunch of ways to answer that question. It's interesting that you picked Nigeria. One of my favorite fun facts: now, this will probably not turn out to be true, because it presumes straight trend lines between now and the year 2100. But on straight demographic trend lines, there will be more people in Nigeria in 2100 than in China. There will be 800-

Chris Chivvis:

Wow, I believe you, but I hadn’t thought of that.

Colin Kahl:

There will be 800 million Nigerians and 700 million people in China, because China's population is aging and imploding, and Nigeria's population is young and exploding.

Chris Chivvis:

Keep growing. Yeah.

Colin Kahl:

Who knows? 2100 is a long time away. Nigeria's population will probably peak before then. But it's interesting. And it's part of a general trend, which is that there will be as many people in Africa at the end of this century as there are in Asia. So there will be 4 billion people in Africa, again on current course and speed. So Africa's going to emerge as an extraordinarily important place. A lot of people, a lot of energy, a lot of innovation, a lot of trade opportunities, a lot of geopolitical importance around infrastructure and critical minerals, sea lanes, all the things. So Africa's going to matter a lot. And China has recognized this for a long time and has made significant investments in Africa through the Belt and Road Initiative and the Digital Silk Road initiative.

The United States has historically under-invested in Africa. And as Africa becomes one of the more consequential and dynamic regions in the world, I think it actually matters quite a lot whether the AI stack that the majority of people on the continent tap into is a more democratic one in which the United States has an important role, versus a more authoritarian one that China dominates. And I think it matters for the hard and soft power reasons that I mentioned before, but I also think it matters for the people in Nigeria and the rest of Africa. In the first instance, what matters though is that they have access to the technology, period.

And the challenge that I have with the U.S. approach to diffusion is that the Trump administration's approach is essentially to unleash the market, to put no barriers in front of the ability to export U.S. AI, but especially semiconductors, to build out an AI stack all around the world. I think there's a lot of goodness to that approach, but it will under-serve a lot of countries that don't have enough money, don't have enough customers, don't have enough infrastructure. You get the point.

I actually think we need a more activist approach to promoting that, one that would probably require integrating diffusion with development tools. And we know that the Trump administration is pretty hostile to those tools. But at the very least, they could lean into some of the things they're not hostile to, like the EXIM Bank and the Development Finance Corporation. And the administration does have a good executive order about bundling together the offerings of American companies to offer a vertically integrated AI stack to countries around the world. I think they should really lean into that and fully resource it to make investments in places like Nigeria and the rest of Africa.

Chris Chivvis:

Okay. So, one of the things that you have referred to a couple of times is that the AI that the United States is able to offer is going to be inherently more democratic than the AI that Beijing is able to offer.

On a superficial level, okay, sure, the United States is basically a democracy. China's obviously not.

So you would imagine that there would be some kind of correlation between the technologies, but can you say a little bit more about what that actually means? To what extent does having U.S. AI guarantee that a country becomes... What's the actual amount of difference that we're talking about?

Colin Kahl:

Yeah. I see this not as a democracy-promotion tool, but as a defense-of-democracy tool. So it's more about what anti-democratic AI means than democratic AI. So let's start with anti-democratic AI. The question is, is AI a vehicle for censorship? If you ask leading Chinese models about the Uyghurs in Xinjiang, or the status of Taiwan, or territorial claims in the South China Sea, or Tiananmen Square, or even Winnie the Pooh, because there are memes making fun of Xi Jinping because people think he looks like Winnie the Pooh, the models won't answer the question, or they'll fudge it, or they'll give you an answer that is consistent with the CCP party line.

And so, that is both a form of censorship and a form of propaganda in the sense that if you're an American high school student doing a research paper on the South China Sea and you ask about territorial claims there, there's a decent chance that a Chinese model will not tell you about the 2016 tribunal finding that the nine-dash line is not supported by international law. But that's both true and a really important thing for that student to know. So that's just an example.

I know this for a fact because I've run a simulation in my master's student class on foreign policy where we do a South China Sea scenario. I have them write memos about what the president should do in a South China Sea crisis and run their memos through one American AI model and one Chinese AI model to ask how China would respond to our recommendations, and they're very different.
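The kind of side-by-side comparison described here is easy to sketch in code. A minimal illustration, not the setup used in the class: it assumes the `openai` Python package, an OpenAI model standing in for the American side, and DeepSeek's OpenAI-compatible endpoint standing in for the Chinese side; the model names, endpoint, environment variables, and prompt are all illustrative.

```python
# Run the same memo through one American and one Chinese AI model and
# compare the answers. Assumes API keys in OPENAI_API_KEY / DEEPSEEK_API_KEY.
import os
from openai import OpenAI

MEMO = "Memo to the President: recommended U.S. responses to a South China Sea crisis..."
PROMPT = f"How would China likely respond to these recommendations?\n\n{MEMO}"

endpoints = {
    # label: (client, model name) -- both are stand-ins chosen for illustration
    "American model": (OpenAI(), "gpt-4o"),
    "Chinese model": (
        OpenAI(base_url="https://api.deepseek.com",
               api_key=os.environ["DEEPSEEK_API_KEY"]),
        "deepseek-chat",
    ),
}

for label, (client, model) in endpoints.items():
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```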

Chris Chivvis:

Interesting.

Colin Kahl:

Okay. So, part of it is censorship and propaganda.

Part of it is also surveillance. We know from the way in which China has integrated AI into smart-city and safe-city programs in many parts of the world, predominantly in the global south, that this is in many cases meant to enhance state surveillance. So think facial recognition and other biometrics, but also integrating data in a way that can keep tabs on individuals in a very police-state fashion. That's problematic in those countries where it empowers authoritarian tools, but it's also problematic because a lot of that data can be vacuumed back to Beijing to use for their own authoritarian purposes. So, I think that's a model we want to avoid.

I think the United States is a highly imperfect democracy. We always have been. I think at the moment, we are an increasingly imperfect one. But I do think it matters if the models bake in certain values like freedom of speech, freedom of religion, freedom of the press; if they try to give objective, balanced, and factual answers; whether the models are privacy-protecting. And those kinds of democratic principles are being baked in not because the U.S. government is telling the companies to do this, but because the companies are attempting to do it, and they do it in different ways.

Chris Chivvis:

Wouldn't it be true though that if we are providing our AI to an authoritarian state, that we are strengthening that state, just by virtue of the fact that we are giving it the best technology, assuming that we stay ahead in the innovation category?

Colin Kahl:

I think it could, certainly from a state-capacity perspective. If authoritarian countries are more capable of service delivery because they have AI tools to make government more efficient, maybe those authoritarian governments last longer. Because we know historically, from a political science perspective, that the legitimacy of authoritarian systems is oftentimes based on performance.

It's also possible that if the companies providing these tools do not have strong and enforced terms of service, their tools could be used for propaganda, for information operations to manipulate and create synthetic media for elections or other government purposes, or for surveillance. So is it possible that AI as a general-purpose technology could be generally performance-enhancing for authoritarian countries? Sure. But the things that I'm most concerned about are the specific ways in which AI could be used as a tool of censorship, propaganda, and surveillance.

Chris Chivvis:

Okay. Just staying with diffusion then for another couple of minutes here, one thing that we haven't talked about is just the commercial interest that the United States has in ensuring that many countries around the world adopt U.S. AI as opposed to Beijing's. Would you put that on your list?

Colin Kahl:

100%.

So, I think the easiest way to conceptualize the difference in the U.S. versus Chinese approaches to diffusion is that the U.S. approach is a market-based approach. It was under Biden, and it's even more so under Trump. The assumption is that if you have the best technology, meaning the best model running on the best chips in the best data centers, the market will diffuse the technology widely across the globe in search of customers. And I think there's a lot to that.

But what you're tending to see is the investment being highest in either countries that have extraordinary amounts of money and energy, so think the UAE or Saudi Arabia. Or they have very large populations, so think of India, Brazil, Indonesia, Nigeria, Kenya, other places like that, or that are otherwise important to the AI value chain. So it could be wealthy countries that are really very important to the supply chain, so Japan, South Korea, parts of Europe, Taiwan, etc. I have no doubt there will be massive investments in those places, massive investments.

I am less convinced that the market will serve a big swath of the global majority, especially in the lowest-income countries, because you may not have enough people. Even if you have enough people, they may not have enough wealth to be viable customers. And it may be difficult to build the required infrastructure in those countries, because it may require not just data centers, but roads and fiber-optic cable and basic telecommunications infrastructure and the energy to power data centers, and things like that.

Chris Chivvis:

A huge part of it.

Colin Kahl:

So that's big. So the U.S. approach, in my view, is necessary but not sufficient, because you're likely to have market failures or market gaps. China doesn't approach this from a market perspective. Yes, their companies want to make money, but through the Belt and Road Initiative and the Digital Silk Road Initiative and their global AI initiative, I think they have a much more strategic, top-down view of providing a kind of AI in a box:

So look, we may not have the best models. We may not have the best chips, but we have good enough models for your needs and we have good enough chips, and boy can we build the data centers really fast. And oh, by the way, if you don't have the roads to get to the data centers, we'll make those too. And if you don't have the ports, we'll make those too. And if you don't have the telecommunications, Huawei's got that. And if you need energy, oh, we can do that in a snap. And we're going to provide AI and the related infrastructure in a box, and we're going to provide it in places that the companies wouldn't invest in, but the Chinese government will subsidize those investments for geopolitical reasons.

So, they just have a very different model. And I'm not saying we should embrace their model, but we need a model that combines the market incentives with a smart set of strategic investments in places the market won't invest, if we want to win the diffusion race.

Chris Chivvis:

So ultimately when we're competing with China over diffusion, it's really about influence all over the world.

Okay. Let's go to the category of integration or adoption, I mean integration into the economy, and then adoption into the government in particular, which is what I think you were thinking of. This is an area where there's been some discussion recently. Again, you've written about it. Why is this part of the competition with China on AI? Why is how quickly we integrate AI into the Pentagon or into the Ford Motor Company important to strategic competition with China?

Colin Kahl:

Yeah. Let's separate those two things because I think they are meaningfully different.

So I think first, diffusion throughout the economy. If one assumes or believes that AI has the potential for dramatic increases in productivity, that it allows companies to be more efficient, that it allows companies to be more innovative, that it generates breakthroughs that allow completely new sectors of the economy, then you want the technology to be widely diffused, so that as many enterprises in the United States as possible can reap those productivity gains.

Chris Chivvis:

Widely diffused within the US itself?

Colin Kahl:

Right. Yeah. Economic diffusion. So the first question is, who is able to reap the productivity gains from being able to access significantly greater amounts of advanced intelligence? And so, diffusion matters in that sense. But I also think we have to think about diffusion in another way, which is that productivity is not just about big ideas and coming up with breakthroughs and having efficiencies. It's also about making things.

And so, it's great to come up with a new wonder drug. You still have to produce the drug. It's great to come up with a new manufacturing process, you still have to build the factory. It's great to come up with a new missile, you still have to build the missile. And so, part of diffusion across the economy, or integration into the economy, is also about the translation of the gains from intelligence into actual things.

Chris Chivvis:

Really interesting.

Colin Kahl:

People sometimes talk about this as turning bits into atoms. And this is a place where I think China is most poised to completely eat our lunch.

So in 2024, China installed 300,000 industrial robots. The United States installed something like 35,000, and China installed more than the rest of the world put together. China's already running dark factories. These are factories that don't have to have the lights on because there are basically no humans there. They're pumping out cell phones and electric vehicles. And I think most experts would agree China has a substantial lead on the embodiment of AI, that is, the integration of AI into robots, industrial robots, and potentially humanoid robots. And we already know that China essentially has more industrial capacity than the free world put together. And this is going to dial that to 11.

But I think we could see a future in which the United States leads at the frontier and, for the most part, dominates AI as a service to the globe, but where China dominates the translation of AI into industrial capacity. And so, really what you see is a kind of accelerated version of what we already see now, where the United States dominates on global services (think about the role that the United States plays in the global financial system) and China dominates on industrial capacity. AI could supercharge both of those trends. So I'm very worried about that.

But this is the other part of the economic diffusion story, which is, which economy is most resilient to the disruptions in the labor force that are going to happen from this? And there, oddly, one thinks of China, a country with 1.3 billion people, as the last place that would want to replace humans with robots. But as I mentioned before, their population is aging and imploding. So actually, at least from Beijing's perspective, I think they're more comfortable with the notion that over time, robots are going to do more and more things and AI is going to replace more and more human labor. I think that's a much more uncomfortable proposition in the United States.

Chris Chivvis:

Certainly cuts across the trends in both the Republican and Democratic parties right now.

Colin Kahl:

Yeah. I'm not a political person in the sense that I'm not an expert on domestic politics, but I would expect that in the midterms, and especially in the '28 election, this will be a bigger issue. So I think we have not yet seen an election where AI and populism get fused together, and I think we're likely to see it play out among some of the candidates in 2028.

But the thing that I'm worried about is, we may already be seeing the leading edge of some of these job losses, and they're not in places that I think most of your viewers may think of.

I think when we think of automation, we just talked about robots, we think about people on factory floors losing their jobs. Or if you're here in the Bay Area and you see a Waymo autonomous car, you think, "Okay. Well, there goes the Uber driver or the taxi cab driver." There will be some of that. But actually, the jobs that are most at risk in the near term are entry-level white-collar jobs. The AI models are getting extraordinarily good at coding. They're already winning coding and hacking competitions. They're getting extraordinarily good at math. And it's becoming increasingly challenging for even computer science graduates from the best universities to get entry-level software engineering jobs.

Chris Chivvis:

This exposes us to an extraordinary amount of societal and political risk.

Colin Kahl:

It does.

Chris Chivvis:

Which is to me an argument to take it slowly. Definitely to do it, but we've got to be really careful, right?

Colin Kahl:

Yeah. Here's the thing. This whole question of whether we should tap on the brakes, and what tapping on the brakes means. I think at the very least, it's a yes-and. Which is, I'm of the view that even if one doesn't like the notion of a race, even if one believes that a race leads to a self-fulfilling prophecy of disaster, it's very hard to turn off the race because it's so baked into structures: the structure of geopolitical competition at the state level, and the structure of economic competition among the most powerful and well-capitalized firms in the world. I think it's a pretty hard thing to slow down. I think the prospect of it slowing down in the Trump administration is zero.

So, the yes-and or the yes-but would be: okay, if we're going to compete on the innovation layer, we need to be thinking at least as seriously about how we make our economy more resilient to the shocks to labor that may happen, but also how we not only retrain our workforce but completely change our approach to education, so that we are producing humans that are augmented by machines, not replaced by them. Because I'll put it this way. I have a 14-year-old daughter and a 10-year-old son, and I worry very much that for the rest of their lives, they will never be smarter than AI. But they don't have to be smarter than AI to succeed. All that matters is that my daughter plus AI is smarter than an AI by itself.

And so, we have it in our power to ensure that as many humans as possible are in the human-plus-AI-is-greater-than-AI-alone category, but we're not even thinking about it. There's no legislation; we can't even pass budgets. There's no national conversation. And so, what I worry very much is that essentially, we won't actually come to terms with this until the wave hits us, in which case it'll either be too late or we'll go into crisis response and not necessarily do the right things.

It's not exactly parallel, but we did see that we basically spent more than a decade completely ignoring what social media was going to do to an entire generation of Americans. And we are now reaping the consequences of not having an adult conversation about that in 2010, and we're still not having it in 2025. And I don't think we should make the same mistake with AI and labor.

Chris Chivvis:

So okay, I want to continue on this particular topic.

We've talked about the importance of staying ahead technologically, simply because it's going to be to our advantage. And it seems to me that you agree that this is going to be an ongoing competition, that there's not some moment where the clock ticks down to zero, the game is over, and one party wins or loses. We're really just going to have to continue to compete technologically in this field.

When it comes to the diffusion of AI, we care about it because we care about the defense of democracy in the world. There's a commercial incentive to do it, which also feeds back into our capacity to produce high-end technologically advanced AI. And also because we believe that if the world adopts U.S. AI, it will give us more influence worldwide.

And then finally, if I understand what you're saying about integration, basically this is about trying to ensure that we can actually reap the concrete economic and perhaps also state capacity benefits that are generated at the high end of AI, so that Americans become wealthier, more prosperous, and that the United States is an attractive center of the world, one that has a broad-based economy which it can use to compete with China perhaps in other areas. Is that more or less the gist of it?

Colin Kahl:

I think that is a great summary. In fact, a better one. Even if ChatGPT or Claude were listening to us, I don't think they would've...

Chris Chivvis:

I was going to say, AI can't do this, but then I wasn't sure if that's actually true.

Colin Kahl:

No. They'd actually probably do pretty good. No, I think that's great.

We haven't really done a deep dive on the national security adoption piece. I might spend two or three minutes on that.

Chris Chivvis:

Please, yeah.

Colin Kahl:

So in the military, there are all sorts of just generic enterprise applications. I sometimes joke, we think of the Pentagon as something out of these whiz-bang techno-thriller movies, with computer screens everywhere and everything else. If you really want a movie that describes life at the Pentagon, it's probably a movie like Office Space. It's just a bunch of people in cubicles with old computer equipment and Xerox machines that don't work very well. So there are lots of enterprise applications for making everything more efficient: workflows, healthcare, budgeting, contracting, all sorts of that.

But I think from a warfighting perspective, what the Pentagon's particularly focused on is the so-called OODA loop, which is a Pentagon acronym: OODA stands for Observe, Orient, Decide, and Act. So in other words, you observe by sensing the environment, you orient by understanding or making sense of the environment, you decide what to do, and then you act. And then the cycle starts all over again. And I think there is a view that artificial intelligence is already changing, and is certainly poised to radically change, the OODA loop. The observe part, because AI is already being integrated either into new sensors or as a sensor in and of itself. Think of computer vision on drones or satellites that can process imagery much more efficiently.

But the most important is probably that second O, orient, understanding the environment. We have oceans of data. The military is vacuuming up enormous amounts of data from every military platform, from intelligence, things like signals intelligence or satellite imagery, you name it. Making sense of that ocean of data, and I'm not even talking about the most recent generation of AI, though the most recent generation of AI is very good at this too, is all about connecting dots, finding needles in haystacks, recognizing patterns in the data that human beings would not be capable of finding, or wouldn't be capable of finding at machine speed.

And we've already seen AI decision-support tools like Project Maven, a Palantir tool that's been used by the United States to help Ukraine understand the battlefield there, and that's also been used by the U.S. military in the Middle East in various contexts, moving us closer to what our commanding generals might call a single pane of glass: the ability to fuse all sources of data to see the adversary and to see ourselves in real time, and to make decisions based on that.

Which brings us to the D, the decide, in the OODA loop. And there, I think the question is whether AI essentially becomes a recommendation engine. So it recommends potential targets, it recommends weaponeering solutions for servicing those targets, which forces to deploy, how to maintain and logistically support those forces. Recommendation engines like that, or actually even more agentic forms of artificial intelligence in the form of advisors. So a commanding officer will have their intelligence person, their legal person, their operations advisor, but they'll also have the AI as an advisor that's offering up courses of action and alternatives, running different simulations, providing recommendations, not just in narrow ways, but in more strategic or operational ways. We are not anywhere close to that, and I think people may have concerns about us going down that road, but that's it.

And then the final is the act, which is actually taking action, carrying out force, either in physical space or in cyberspace. And there, you could have semi-autonomous systems like we see in Ukraine, where drones are operated by a first-person pilot, somebody with VR goggles on their face, piloting something until that drone gets jammed. The moment that it's jammed, the pilot can no longer see through the drone or track it, and the drone has computer vision software on it that guides it to the target. So it's semi-autonomous.

Eventually you may move to full autonomy, where the drones are acting under a bounded set of objectives set by a human being, but they have a lot of autonomy to decide how, when, and what to target within that set of parameters. And there's a lot of pressure to move in that direction, because I think while everybody would prefer humans to be in the loop or on the loop, there will be a lot of situations in which a contested electromagnetic spectrum, jamming basically, makes it impossible for human beings to stay in the loop or even on the loop. And so, I think we're already seeing the pressure of war driving things in this direction in Ukraine. We may see it in other ways as well.

I'll just conclude by saying, the future of warfare is already being informed by these technologies. So, the balance of military power between the United States and China will be informed by that competition. And in fact, the entire theory of the case on how you deter or defeat a PLA, a Chinese military, incursion on Taiwan increasingly relies on being able to field thousands or tens of thousands of cheap, attritable drones with degrees of autonomy. And so, the future is already here, but as William Gibson said, it's just unevenly distributed.
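The OODA loop described above maps naturally onto a software control loop. A conceptual sketch only, with every function a hypothetical placeholder rather than any real military system, showing where AI decision support sits and where the human-authorization gate belongs:

```python
# Conceptual OODA loop (Observe, Orient, Decide, Act) with AI decision support
# and a human-authorization gate. All functions are hypothetical placeholders.
import time

def observe():
    """Collect raw data: drone video, satellite imagery, signals intelligence."""
    return {"sensor_feeds": []}

def orient(raw_data):
    """Fuse the 'ocean of data' into one picture; this is where AI pattern
    recognition (the second O) does the heavy lifting."""
    return {"fused_picture": raw_data}

def decide(picture):
    """AI as recommendation engine: propose and rank courses of action."""
    return [{"action": "course_of_action_1", "confidence": 0.8}]

def human_authorizes(option):
    """The gate: a human, not the machine, decides on the use of force."""
    return input(f"Authorize {option['action']}? [y/N] ").strip().lower() == "y"

def act(option):
    """Carry out the authorized action, possibly via a semi-autonomous system."""
    print(f"Executing {option['action']}")

while True:
    picture = orient(observe())
    for option in decide(picture):
        if human_authorizes(option):  # human in/on the loop
            act(option)
    time.sleep(1)  # and then the cycle starts all over again
```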

Chris Chivvis:

Look, the technology's really cool. I worry that there's a little bit of a risk of getting carried away by how impressive the technology is, because you said that we would probably prefer to ensure that humans remain in the loop. I think that's certainly, intuitively, what most people would want: that if we're going to kill things, if we're going to break things, if we're going to go to war, that decision should be made by a human being and not by an AI, no matter how smart that AI is.

But what you seem to be saying is, well, we prefer that, but we're not going to get that because we don't have a choice. Is that really true? We're the most powerful country in the world. We have the biggest and most advanced military there is, don't we kind of set the terms of this?

Colin Kahl:

So a couple of things. First of all, the DOD directive on autonomous weapons, which my office drafted, says that there have to be, at all times, meaningful degrees of human oversight and judgment over these systems. I'm not saying human beings shouldn't make the decisions. In all cases, human beings should make the decisions to engage in the use of force. They should be confident that the force will be carried out in a way that is consistent with their objectives, and they should be confident that carrying that out will be done in a way that is legal. And I mean that under domestic law, but also under international humanitarian law, the laws of armed conflict. And also, that human beings should be accountable.

So, the current Pentagon framework bakes in those principles. But I think what people sometimes believe is that that means the human being has to be in control of the drone the entire way. So, if you think of a drone strike in Afghanistan 10 years ago, that drone was being piloted by an Air Force pilot in Nevada, who was connected to that drone probably by a satellite. The drone was guided by GPS and other sensors, but there was a human being. And the human being would look through the full-motion video, identify a target, they'd go through a process, and then they'd fire the Hellfire missile and it would blow up the truck. And we know even in those contexts, all sorts of mistakes were made.

And so, when people think of human in the loop, they think of that.

The issue though is there will be all sorts of circumstances in which that is completely impossible, because the GPS environment will be denied and that satellite will be lased or jammed, and you won't have a human being that will be able to pilot it all the way to the endpoint. Now, somebody would say, "Okay, well then we'd have to find another way." But we already do this with missiles. I'll give you an example. We have a missile in the U.S. military called the HARM missile. It's an anti-radiation missile. A pilot in, let's say, an F-16 or an F-35 picks up a radar signature from an enemy radar. They lock the missile, they fire the missile, the missile goes in, and boom. All right, that's human in the loop.

Now, you had to be confident the missile was going to work the way that it was designed. But there's another mode for that missile that's been around for a long time which is, you fire it before the radar is picked up and the missile loiters. And then you do something else to perturb the radar, like send a decoy. The radar turns on and the missile goes, ooh, look. Well, the human being didn't tell it, right?

Chris Chivvis:

I get it. But just because they already exist, doesn't mean it's necessarily something that we should...

Colin Kahl:

But whether we should or shouldn't do it is not... Look, the fact that we've done it before doesn't mean we should do it again. The reason why I don't have a problem with the HARM missile example is because we have a lot of confidence in the sensor: that the missile, once it picks up the radar, will do what it's supposed to do.

Chris Chivvis:

Isn't going to hit a village.

Colin Kahl:

And so, the same thing here: we're talking about a drone with computer vision as a form of terminal guidance. The question is not, is that okay? It's just a sensor, just like the radiation detector is a sensor.

The question is, how much trust do you have that the system will act in the way that you intended and in the way that you are morally, legally, and consequentially comfortable with? And if the answer is, you have a high degree of trust and you believe it's consistent with your moral, legal, and consequential judgments, then it's just a weapon.

Chris Chivvis:

Okay. It's obviously a super complicated issue and one that it's important that we continue to think about. I want to move on to a discussion that we have sort of already started which is, at what point do we tap on the brakes, as you put it?

Jensen Huang recently said that if the United States doesn't lift the regulations that it has on AI, we're going to lose the AI race with China.

That seems like a fairly self-interested point of view from industry. And I'm concerned that there's a certain amount of threat inflation going on here, or fear politics, and that we could be driven to take steps in terms of our own control of AI, steps that come with all kinds of risks and that we will later regret, out of an exaggerated desire to compete with China in these areas. And I'm not saying that there isn't a competition. You make a convincing case that these are all areas where it would be beneficial to the United States to stay ahead.

But how far should we go in order to ensure that? Because there are real risks on the other side as well.

Colin Kahl:

Yeah. So first of all, on the particular claim Jensen has been asserting about how close China is to the United States: look, he is undoubtedly privy to all sorts of information, both about his own company and about Huawei, the competitor he worries about, that I'm not privy to. But by all the information that I've seen that's publicly out there, it's ridiculous to assert that Huawei is nipping at NVIDIA's heels. And this is actually relevant for this conversation. The best chip that Huawei produces is called the Ascend 910. This year, in 2025, they will produce several hundred thousand Ascend 910s.

It is not clear that any of them actually use logic dies that were produced in China. They were probably smuggled in from TSMC in Taiwan, but let's assume all of them were produced indigenously by China. That chip is 60 to 70% as good as the NVIDIA H100 that was put out in 2022. So it's 60 or 70% as good as...

Chris Chivvis:

It's several years behind.

Colin Kahl:

Several years behind. The latest NVIDIA Blackwell chips are two to four times better than the H100.

Chris Chivvis:

Okay. There's a huge difference. Yeah.

Colin Kahl:

That's a huge difference. And Huawei has maybe produced several hundred thousand this year. NVIDIA will ship 7 million chips this year, including millions of those Blackwell chips. And the notion that Huawei is about to catch up, it's hard for me to understand how, for two reasons. One, they don't have access to the extreme ultraviolet lithography equipment that is only produced by one company in the world, ASML...

Chris Chivvis:

Because we got the...

Colin Kahl:

Because of export controls that started under the first Trump administration, were enforced under Biden, and are still in place, which means that they cannot produce chips below seven nanometers. So, they're not going to get to the leading edge. Thing one.

Thing two is, it's one thing to produce logic chips, and the Ascend 910 is a good enough logic chip, but you need high-bandwidth memory to allow those... Basically, you can carry out a lot of calculations, but if the model can't hold a lot of things in its head at the same time, it's not going to be useful, either for training or for inference. And one of the last things the Biden administration did on the way out the door, which the Trump administration has upheld, was to impose restrictions on advanced high-bandwidth memory chips. So to my mind, that actually gives us time.
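As a rough check on the gap being described, here is a back-of-the-envelope sketch in Python using only the figures quoted in this exchange. The midpoint multipliers and the 300,000-unit count are illustrative assumptions standing in for "60 to 70%," "two to four times," and "several hundred thousand"; none of this is benchmark data.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
# Midpoints and the Huawei unit count are assumptions, not measurements.

h100 = 1.0                # normalize the 2022 NVIDIA H100 to 1.0
ascend_910 = 0.65 * h100  # Ascend 910: "60 to 70%" as good as an H100 (midpoint)
blackwell = 3.0 * h100    # Blackwell: "two to four times better" (midpoint)

print(f"Ascend 910 vs. Blackwell, per chip: {ascend_910 / blackwell:.0%}")  # ~22%

huawei_units = 300_000    # "several hundred thousand" this year (assumed figure)
nvidia_units = 7_000_000  # "7 million chips this year"
print(f"Shipment-volume gap: {nvidia_units / huawei_units:.0f}x")           # ~23x
```

On those assumptions, Huawei's best chip delivers roughly a fifth of the per-chip performance of a current NVIDIA part, and NVIDIA ships more than 20 times as many chips, which is the shape of the gap being described.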

Look, I believe the race is baked in. The race is baked in geopolitically, and it's baked in at the corporate level. We don't need to dial it up based on some... the race is going to go plenty fast on its own. I also don't know that we could meaningfully tap on the brakes unless there were substantial regulation that we could agree on domestically here in the United States. And that's not going to happen. Not as long as Trump is president and the Republicans control the Congress. And even if the Democrats flip one house, I think it's highly unlikely we're going to get...

We may get important laws on things like non-consensual intimate imagery or how children interact with AI. We may get bipartisan agreement on things like that. But large regulatory frameworks, I don't think we're likely to see in this administration.

Chris Chivvis:

But this could change. If some of the impacts on society that we've been talking about materialize, large numbers of younger white-collar workers losing their jobs or having no job prospects, the politics of AI could take a very different turn.

Besides, you have the superintelligence that we were talking about before. I don't think you have to be a Republican or a Democrat to be concerned about the creation of a superintelligence. Whether we think that's going to happen in two years or in 25 years or 200 years, it's a concern that is shared, I think, across the political spectrum by most people, outside of a very small number who think that would be cool.

Colin Kahl:

So, 100%. And at the beginning, I said there was a race we needed to avoid, which was the race to the bottom. And in many respects, I was talking about these kinds of catastrophic scenarios.

But let me take half a step back.

A minute ago, we talked about whether there's threat inflation about how close China is to nipping at our heels. The same folks who make those arguments also argue there's threat inflation around these catastrophic scenarios: "AI is just normal technology." It matters a lot for military capabilities and the economy, but, in their telling, it's not what you think of, not you personally, when you think of these science fiction scenarios. That's just a bunch of "doomers," and they hype that threat because they want to throttle innovation.

So, I think that's also a narrative that is very common here in Silicon Valley. It is very dominant within the Trump administration's view of this technology. And so, I think we have to have a rational conversation about the risks.

Here are the risks that worry me the most. And I completely agree, by the way, Chris, that the most likely situation in which you get anything that looks like tapping the brakes is an incident. Okay. The three areas I worry about the most: in the near term, the thing I worry about most is a catastrophic cyber attack. We already know that these programs, Claude Code, or Codex, which is OpenAI's tool, are extraordinarily powerful coding tools.

They allow fairly uneducated folks to engage in "vibe coding." Maybe they can vibe code their way to... Vibe coding is basically where you're just sitting at a laptop, you don't really know how to code, but you just ask ChatGPT or Claude to write a program that, say, brings down your local bank. And a lot of times, it won't do a good job, but every once in a while, it may do a good job.

Chris Chivvis:

And it could get better.

Colin Kahl:

But for others, it will just accelerate the process, giving them a lot more horsepower. For folks who are already good hackers, there will still be meaningful uplift in time spent, efficiency, and raw capability to do things. And so, you could get a catastrophic cyber incident using one of these tools that brings down a bunch of hospitals, and a bunch of newborns die in their incubators. It could bring down the energy grid on the Eastern Seaboard. It could bring down the stock market. I don't know. If you have an incident like that, I do think it would be a galvanizing moment. Thing one.

Thing two is, the tools are getting increasingly good at biology, and there is a lot of concern that this could make it easier for non-state actors, terrorists, to engage in forms of bio warfare. So think of releasing pathogens that are more transmissible than COVID, with longer incubation periods, and more deadly, which could kill millions of people. Now, there are all sorts of reasons why historically terrorists have not been inclined to do that, but all it takes is one. It's also true that you still have to produce those things in a wet lab, but it's also true that a lot of those wet labs are themselves becoming increasingly automated and accessible over the cloud. So, I think that the cyber and bio risks are super important. They are already emerging.

And this is actually one area where the Trump AI Action Plan, which doesn't talk about AI safety, does talk about AI security at the end and addresses some of these risks. So those are the first two areas. And then the third area is rogue AI. And here, I think that when people talk about rogue AGI or ASI, they're really talking about extraordinarily powerful AI tools that are able to create effects in the real world, either through digital space, through cyberspace, or through manipulation in the physical world. And they do so to advance their own goals, the model's own goals and objectives, not our goals and objectives. And that's what's called the alignment problem.

And we're already seeing signs that the most advanced models do demonstrate in testing certain patterns of scheming and deception, hiding their bad behavior from the humans monitoring them. And people do worry that that is a bit of a canary in the coal mine for rogue AI systems in the future, either intentionally or inadvertently creating catastrophic outcomes. I do not think we're there yet, in large part because these systems are not integrated enough, nor are they effective enough as agents, to create effects in the real world. But this could be something that within a five-year timeframe is a serious concern.

Last point. I think we can have a rational conversation around those risks because some of them are already emerging. We don't have to take on the most doomer-istic views of them and be accused of threat inflation on the other side. We can ground this in actual data. So, we should have that conversation in Washington among Republicans and Democrats.

We should also have the conversation with China, because even if we are competing intensely with China, China doesn't have an interest in non-state actors being uplifted to engage in cyber terrorism or bioterrorism, and they don't have an interest in advanced AI running amok. In fact, the CCP is obsessed with control. And so, they may not agree on what values and interests the AI models should be aligned to, but they do agree that advanced AI models should demonstrate alignment, that is, be aligned to something, and that humans should be in control of those models.

So, I think that there is a narrow path, just like the United States and the Soviet Union cooperated to eradicate smallpox or on the Non-Proliferation Treaty at the height of the Cold War because they had a shared interest, a shared fate. We should identify a handful of catastrophic risk scenarios that are foreseeable and start a conversation with China over them.

Chris Chivvis:

It seems like that's really important, even though I understand it's really, really hard to do.

You described a range of different catastrophic outcomes that could occur from a lack of regulation on AI. And it occurs to me that most of the things that we've been talking about in terms of AI competition with China are things which are important, but not catastrophic. Maybe a few of them could go in that category.

So, as you try to put these two things together, on the one hand, you have a risk of some pretty terrible things happening. Terrorism, misalignment, as you called it. AI superintelligence, again, recognizing that's very far in the future. On the other hand, we have the risk of falling behind China in measurable ways in terms of our global influence, our economic capacity, technologically, or potentially in some military spheres, although we also recognize that we are currently pretty far ahead of China in all of those spheres. And as you said, we have some scope to play with here.

So, as you look at these two things, which should we be prioritizing? Isn't that an argument for trying to take it slowly, or at least recognizing that we have some give, where we can try to come to some kind of common ground with China and begin to build the norms that you were talking about?

Colin Kahl:

Certainly on the second. Again, I'm skeptical that either side would slow down, because I do think that they see AI as a general-purpose technology, and reaching the frontier or being ahead as so entangled with their future economic prosperity, their national security, and their way of life, that I find it unlikely that the two sides will slow down, or agree to, I don't know, force their companies to slow down.

Chris Chivvis:

Effectively, that's what arms control is about.

Colin Kahl:

It is. Well, except in this case, it's a pretty hard thing. It's one thing to be able to count bombers on a runway or count missiles in missile fields. It's entirely different to be able to observe, verify, and ensure compliance on a technology that will be laced into everything.

Chris Chivvis:

It's a real problem.

Colin Kahl:

But my view is this. If there are these catastrophic risks, why should we compete? Well, because there are catastrophic risks on the other side too. The thing that worries me the most about the United States and China stumbling into a conflict is that there is a miscalculation or a deterrence failure on one side or the other in the conventional domain, probably over Taiwan. It could happen in the South China Sea. I could argue it round or flat on whether AI makes that problem better or worse.

But actually, I think, for example, that AI decision-support tools and drones, at least on the battlefields of Ukraine, have been overwhelmingly defense dominant. That is, they've made it very difficult for either side to take territory, especially the Russians. And I actually think that the Pentagon is right to emphasize things like the Replicator initiative and the Hellscape operational concept for INDOPACOM, to use AI decision-support tools and autonomous and semi-autonomous munitions and drones to make it impossible for China to land on Taiwan. Not because I ever want that to happen. If it happened, it would be disastrous for everybody. But because I don't want that war to happen, I want China to be deterred. So I could make that argument. People could make arguments in the other direction.

I also believe, for the reasons we talked about earlier, that our ability to have influence in the world and defend our way of life cannot be disentangled from the AI ecosystem, just because of how we will all live our lives. And I think our economic prosperity and what it means are going to be all entangled in this too. So, I think we have a pretty big stake in getting this right in terms of seizing the opportunities that the technology presents, but we have to simultaneously do that and have an adult conversation with China on these other things.

Chris Chivvis:

Do you think we are having that conversation?

Colin Kahl:

I do not think we are having it at the government level.

Chris Chivvis:

Do you think we can have that conversation in Washington right now?

Colin Kahl:

Well, first of all, I think it's not just a Washington problem.

I think that the Biden administration tried repeatedly to initiate this conversation with China, and almost every time it ran into the same problem. It would be worth talking to people who were actually in the room for those conversations, Jake Sullivan and Tarun Chhabra, Ben Buchanan, people like that who are now on the outside.

But my overwhelming impression is that, like in a lot of dialogues between Washington and Beijing, Beijing thinks of the dialogue as in and of itself a favor to the United States and tends to link it to other things that they want. So, "Yeah, we can have a conversation about AI and strategic stability, but only if you lift the export controls." Or, "Yeah, we could talk about AI and nuclear security, but only if you first adopt a nuclear no-first-use policy."

In other words, it's hard to have a conversation that's on the level when the other side says, "Sure, sure, sure, we can talk about that, but first you have to do all these other things that we know you're not going to do, but that we want you to do."

So I think there was a two-way problem, but at least the Biden administration was willing to have that conversation. And it did manifest in one meaningful outcome, which is that in their last meeting at the G20, I think it was in November of 2024, President Biden and Xi Jinping agreed that nuclear launch decisions should stay in the hands of humans, not artificial intelligence. Which sounds like, sure, we should be able to agree on that. That's basic.

Chris Chivvis:

Well, it's a step in the right direction from not agreeing on it. Yeah.

Colin Kahl:

It took years to get China to do that. And my understanding is that, in part, Xi Jinping did that as a favor to Biden on the way out. So, I am not yet convinced that China takes this question seriously. But we should call them out. And I think there are a lot of track-two dialogues going on between the United States and China on a whole range of AI issues, to include military and strategic stability questions. That's good. Those should continue. I've been involved in some of that. That's great.

But we need a track-one conversation on this, and it can be about more than just AI. It can be about other questions on strategic stability, nuclear weapons, space, cyber, especially as these domains get increasingly entangled. But we have to have that conversation. And I hope the Trump administration will get there. Maybe if Trump strikes a big beautiful trade deal with China, at some point it'll open up channels for dialogue on other things. I don't know. But at the moment, I've seen zero appetite for the administration to engage on these issues.

Chris Chivvis:

So you don't think that the Trump administration is doing a very good job when it comes to technology competition with China or AI competition in particular?

Colin Kahl:

I actually think that they are doing a good job in some areas. I think they are doing a good job in trying to diffuse U.S. technology, but also in cutting through some of the red tape that creates bottlenecks for that technology, the biggest one being energy. Now, I wish they were doing it more from a clean-energy, carbon-neutral perspective, leaning into solar and wind, but also geothermal and nuclear, more than just...

Chris Chivvis:

This is another cost that we need to be concerned about.

Colin Kahl:

Correct. And this is an area, actually, where China also has a competitive advantage, because they're bringing on a lot more electricity capacity than the United States, from all of the above. But I think they're moving in the right direction. I wish it was a little different, a little more balanced, and a little more environmentally conscious. But I think that's all to the good.

I think the diffusion framework is also in the right direction, but it needs to be supplemented with a much more aggressive strategy of serving places the market won't serve. We had that conversation earlier. So I would give them a B+ in that area. They don't care what I think, but that's maybe how I'd grade it. I think that they are doing...

Chris Chivvis:

Does that make it a 12 or not?

Colin Kahl:

A 12? I don't know.

Chris Chivvis:

Well, you remember Trump scored his recent meeting with President Xi on a scale of one to 10 as a 12.

Colin Kahl:

I don't know. Grade inflation seems pretty high in that case, so maybe it'd be a 14. I don't know. Maybe when you have Trump on, you should ask him that question.

I do not get the sense that they're having a serious conversation about not just harnessing the economic benefits of this for the American people, but actually having a real conversation about how to put our kids and our workforce in an augmentation mode, as opposed to a replacement mode. I do not think they're having that conversation. So, the grade would not be nearly as high on the economic part.

And I think, frankly, the MAGA base would welcome that conversation. I think maybe it's not happening because they don't want to signal anything that would throttle back the seven companies that are propping up the entire economy.

So, I think Trump is a little bit split between the companies that are holding up his economy and what the MAGA base wants, including their skepticism of those same companies. But maybe they'll get past that. I hope they do. Maybe if there's a Democratic House, for example, this is something where normal politics could reassert itself and we could have some dialogue there.

I do think the Trump administration is moving with more alacrity on the adoption of AI across our national security enterprise, including building out classified compute and stepping on the gas a little bit. I think that is good. I wish it wasn't happening at a time when they were also showing deep skepticism about the laws of war and ethical frameworks. Because I worry that going fast is especially reckless if you don't build ethical and legal guardrails around yourself. And I don't see...

Chris Chivvis:

Well, especially if it's coming at a time where you're firing a lot of the people who actually have the experience to understand these problems.

Colin Kahl:

JAGs, IGs, but also young people who'd been brought in and who've been DOGE-d out of existence. We're hollowing out the capacity of the government to even make these decisions, even if the government wanted to pursue decisions consistent with the rule of law. So, I have a deeper skepticism about that.

So, I would say B+ on the innovation side and poor grades on the other.

And the last point I'll make is, I think the other challenge they have is that they have ideologically presented everyone who raises these catastrophic harm scenarios as a bunch of woke doomers who just want to throttle the technology, a bunch of existential-risk people who are, like, the enemy.

And so, I think we need to move away from that ideological positioning of "everybody who mentions catastrophic risk is a doomer," and actually ground it in the fact that the models are getting really good at a handful of things that should concern us from a national security perspective. And I think there are some in the Trump administration who understand that argument and are willing to work on it. I just hope they can do so in a way where they're not self-deterred by worrying that they'll be accused of being doomers.

Chris Chivvis:

Right. Fascinating. Thanks so much, Colin. This has been absolutely fascinating talking to you. Really, really enjoyable.

Colin Kahl:

Always. That was fun.

Chris Chivvis:

This has been Pivotal States. I'm Chris Chivvis with the Carnegie Endowment for International Peace. I'm out here at Carnegie's California office and have been talking to Colin Kahl about the AI race with China. If you enjoyed this episode, please subscribe on Spotify or check out our website for the American Statecraft Program at the Carnegie Endowment for International Peace.

Thanks very much.

 

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.