The AI boom is the biggest investment mania in decades, channeling trillions of dollars into data center infrastructure. If investors bet right, they may usher in technological breakthroughs that produce vast wealth. If they’re wrong, they could crash the U.S. stock market, trigger a recession, and spread financial contagion globally.
Ed Zitron was among the first to call AI a bubble. His unsparing deep dives into AI finances are must-reads, even for his critics. In a spirited back-and-forth on The World Unpacked, Ed and host Jon Bateman debate Wall Street’s “unhealthy relationship” with Nvidia, whether China has its own AI bubble, and whether ChatGPT should give tax advice.
Transcript
Note: this is an AI-generated transcript and may contain errors
Jon Bateman: Ed Zitron, welcome to the show. I've been excited for this conversation because lately everyone has been asking, are we in an AI bubble? This whole conversation has kind of broken containment. You've got mainstream economists, national pundits, even the CEOs of some of the AI companies are now unable to avoid the question, but you have been pushing this question farther and louder and longer than anyone else I know. You're kind of like the OG AI bubble guy. How did you find yourself in that situation? Why were you one of the first and now one of the most persistent?
Ed Zitron: So, it all started in November 2023 when Sam Altman got fired from OpenAI. Now, there are other people, like Gary Marcus was very early on the diminishing returns of large language models, for example. But I saw the reaction to Altman being fired, and it was like a cult, all of the members of OpenAI doing the Hunger Games thing. It was very strange. And so I thought, what the hell is this? This is weird, because I don't remember a cult like that for the iPhone or cloud computing. I don't remember the Amazon Web Service Thirsters. So I went and looked into it. It's just generating text and images, and I mean, they don't seem that good. Why is everyone excited? And then I went one level further and I said, surely somebody will be talking about money. They were not. All of these public companies, Google, Amazon, Meta, Microsoft, they were all talking up the whole thing around AI. But nobody wanted to say how much they were making. And indeed, no one wanted to discuss how much they were losing. The only thing I could consistently find was that it's very expensive. And then in June 2024, Amir Efrati over at The Information posted this great piece saying OpenAI could lose five billion dollars in 2024. And at that point, I jokerified. I was like, this is ridiculous. Five billion dollars in losses, that's an insane thing. And then the more I dug into everyone, public companies and private companies, the more I saw just loss after loss after loss. I didn't really see any business returns. I didn't see great revenue. I saw everybody losing hundreds of millions or billions of dollars. And then the actual products themselves didn't really seem to do anything that magical. I don't remember having to do calisthenics and a 15-part course just to use an iPhone or a laptop.
Jon Bateman: You mentioned Gary Marcus. There's this whole group of critics of the contemporary AI industry who are really focused on making technological critiques. People like Marcus or even Yann LeCun who are out there saying large language models aren't all they're cracked up to be. This is a technological dead end. It's not gonna get us to AGI. Then there's a group of people, and I see you as one of the leaders there, who are making a financial critique and saying the economics aren't there. This is an unsustainable economy. How intertwined are those? In other words, for me to believe that we're in a bubble, do I have to believe that the tech sucks, or can I just believe that it's an investment mania and the underlying tech is sound?
Ed Zitron: So I think that's a multifaceted answer. First of all, you can think the tech sucks without hating it. Large language models on their face are not a bad idea. They do somewhat interesting things in some cases. If they didn't cost five to ten times their revenue, maybe if they didn't require stealing from everyone, maybe if they didn't require destroying our environment and polluting communities with gas turbine engine pollution. I don't know. Maybe if we removed all of these massive roadblocks, maybe it would be good. I don't know. I think the technological side is something I've talked about a lot as well, because it is fairly simple. These are probabilistic models, even at scale. Every product you use with AI is prompt engineering. They're not particularly precise. You can't rely on them to do the same thing more than once, even if they do it nine hundred times. They're probably gonna do them wrong, I don't know, anywhere from ten to a hundred to three hundred times. You literally don't know, because that's probability.
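The reliability arithmetic Ed gestures at can be made concrete with a quick sketch. The per-task error rates below are illustrative assumptions, not measured figures for any real model:

```python
import random

def expected_failures(runs: int, error_rate: float) -> float:
    """Average number of runs that come back wrong, given a fixed per-run error rate."""
    return runs * error_rate

# At an assumed 1% error rate, 900 runs still average about 9 failures;
# at an assumed 10%, about 90.
low = expected_failures(900, 0.01)
high = expected_failures(900, 0.10)
print(round(low), round(high))  # 9 90

# Any single batch of 900 runs scatters around that mean, which is the
# unpredictability being described: you don't know which runs will fail.
random.seed(0)
one_batch = sum(random.random() < 0.10 for _ in range(900))
print(one_batch)
```

The point of the simulation line is that even a known average failure rate tells you nothing about which specific outputs are wrong on a given day.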
Jon Bateman: Yeah, and just to say, anyone who's used these tools is familiar with the limitations too, right? So at the same time, you've got huge numbers of corporations spending billions of dollars on enterprise licensing. You can always critique that and say, oh, they're just chasing the next great thing, they don't really understand what they're using.
Ed Zitron: Where are you getting that number from?
Jon Bateman: I think it's been reported in outlets like The Information that companies like Anthropic are largely B2B plays, where there are companies, or maybe workers at companies, who are using them to do things like code. Is that not the case?
Ed Zitron: Right, there is a myth. Anthropic does not break out their revenue based on customers. Nobody does. We don't have any idea, because these companies do not break it down. Anthropic, maybe a month ago, said they had three hundred thousand business subscriptions. What the hell does that mean?
Jon Bateman: Okay, so this could be individual users who happen to work at a company, maybe buying it themselves, maybe getting small teams, et cetera. But there is an underlying reality here that hundreds of millions of people are using these tools. Some of them are paying for them. What are we missing? Because I'll say just for myself, I get tremendous value out of ChatGPT and I'm using it more and more. So there is some underlying value there. But are you just saying there's no way to make a profit off of that?
Ed Zitron: What value are you getting? I ask, I'm not actually trying to be argumentative. I'd like to discuss it because it is formative to the point I'm gonna make.
Jon Bateman: So you get different answers from different people. But for me, the most value I get from it is on things that are more complex or involved than a Google search, but maybe not quite at the threshold where it would be worth spending time and money on professional advice. For example, I got married recently. I needed to figure out whether I should do my taxes as joint or separate filing. It's actually really hard to get the major tax software, like TurboTax, to answer that question for you. They're not built that way. But neither did I want to ask a professional to guide me through it. It's too expensive and complex, and I didn't have the time. So I was able to give ChatGPT some of my basic figures, which I already had on hand, and it ran a bunch of numbers that I could kind of sanity check. Now, Google can't do that. TurboTax couldn't do that. And I didn't want to pay H&R Block to do it.
Ed Zitron: What you're describing is search 2.0. Okay? You are describing the growth of search. Had Google search grown into a more sophisticated platform, versus being made worse so that they could get more advertising impressions, Google search would likely do this. I mean this with no harm or insult, though: the idea of turning tax advice over to this is insane to me. That terrifies me. And I would always recommend using a CPA. And I think my accountant would kill me. But the point is, what you were describing is search. It is just the next generation of search. If you'd Googled "should I file single or joint" ten years ago, and I've done this process myself, by the way, I know how convoluted it is, perhaps a page would have popped up from a CPA, with an SEO-optimized post that explained it, and you'd have read that and gone, oh right, I'm fine, or, I will do this. It is just an outgrowth of search. You can say that, but if you'd found a good search result that answered your question, it would effectively be the same thing.
Jon Bateman: I disagree with that, because the Google search result will not have my numbers on hand. I will not be able to supply it with the kind of mathematical assumptions it should be making about my income, my wife's income, student loans, the tax rates in Maryland and in the specific locale where I'm living, all the deductions I plan on taking. What it could do is offer me some generic advice, but then I'd have to wade through a bunch of different pages to decide which advice to take. You know, it's one example, but I do think using ChatGPT and LLMs has actually taught me a little about how limited search is. There are a lot of things that are just in this middle category where it doesn't quite make sense to ask a professional. Or, to give you another example, because I'd love it if you could take pot shots at this: I find I'm often in a situation where there's an information asymmetry, and I'm talking to a professional who has a huge informational advantage over me, like a doctor, right? And I'm walking into that situation fairly blind. If an LLM can up-level the amount of knowledge I have about symptomology and potential courses of action, I can have a bit more of a balanced conversation with a doctor. Otherwise, I'm kind of at their mercy. And so this is just a better version of, you know, of Google, as you're saying.
Ed Zitron: I need to be clear about something. My best friend is a nuclear health and safety physicist and had Q clearance at one point. I must be clear that what you're describing here, in both cases, are situations that could cause astronomical harm depending on how badly you rely on these things. While you may think that the information is correct, how do you know? Sure, you could learn some things that are useful for the doctor on the low end. Sure, you could be told file single or file joint. But if you mess that up, you could be audited, you could be fined.
Jon Bateman: Oh, but that's also true if I'm interpreting Google results.
Ed Zitron: At least you can verify where that information comes from. At least you can point to somewhere and say that an expert wrote this. At least you can say it's the Mayo Clinic.
Jon Bateman: Oh, sure.
Ed Zitron: At least you can say it's a CPA. With this, it is generating the answer each time. The reason I bring up the safety expert thing is that Phil, one of the early tests he did with ChatGPT, was to ask it how you would put out certain chemical fires. Multiple times he was told things that would have exploded. My point is that if we're talking use cases, this is pretty mediocre.
Jon Bateman: No, that's interesting. That's interesting.
Ed Zitron: At least for the massive CapEx, for the hundreds of billions of dollars. And then if we're talking safety-wise, you're talking two different situations. Doctor, maybe not as much. Doctor, you might get in and the doctor will say, that's not true. Perhaps you'll have better questions. That could be useful. But there could be a scenario where, reading this, much like your tax situation, you say, I don't need to go to the doctor, AI Mode on Google or ChatGPT told me this, and then you take a supplement and then you're very sick.
Jon Bateman: To quote someone who I'm not in the habit of quoting, Joe Biden used to say, don't compare me to the Almighty, compare me to the alternative. The reality is, I already have to make health decisions without complete trust in whatever expert I'm dealing with. Doctors make mistakes all the time. So I'm already going to go into a doctor's appointment armed with what I can read online, if I can. A tool like ChatGPT can more efficiently do that search and aggregation function and then provide me with citations that I could look into. It goes back to the central question of whether one's view of the technology necessarily leads to one's view of the financial prospects. And I kinda get the sense from you that because you're so downcast on the technology, that's gonna be core to your predictions about the economics.
Ed Zitron: I've used the technology a great deal. I am very well versed in how these work. I am very well versed in how the hardware works as well. My views on this are multifaceted and built on a great deal of reading. I think what you were describing is a better Google search, an outgrowth of that, and nothing you're describing, however you may feel about it, makes any of the financial side worth it, or indeed makes up for the harms this could cause. You can feel like the technology is bad but have hope for the future. I don't know why you would, but I think the financial side just overrides any hope there is. And on top of that, with the fact that we're running out of high-quality training data and we're hitting the walls of scaling laws in the training paradigm, these models aren't getting better. What we're seeing today is pretty much what they're always gonna be like. And I'm not really sure what we have is worth the squeeze.
Jon Bateman: Okay. Maybe we could dig into the finances a bit. And there's a lot of different ways that we could do this, but maybe one way is you could pick one or two of the AI companies that you think is maybe most overvalued or kind of most on a sustainable trajectory, and just walk the audience through some of the key numbers or realities that point in that direction.
Ed Zitron: So let's start with OpenAI. I just reported a story based on documents I've viewed about OpenAI's inference spend and their revenue share with Microsoft. I'm gonna focus on the inference spend, because inference is the way that models generate outputs. You put something in, inference happens, output comes out. They spent 12.4 billion dollars on inference from the beginning of 2024 through the end of September 2025. Based on what I have seen, OpenAI may have only made, let's see, 2.7 billion through the first half of this year, and something like 4.3 billion through September. Now, I'm gonna say "at least" because these are numbers inferred from the fact that they pay 20 percent of their revenue to Microsoft. So these numbers are worked backwards from there. Nevertheless, we're talking about a company where just the cost of doing business is two or three times what they are making in revenue. And that's just the inference. That's before training, which I do not have the numbers on. Leaks suggest training is 6.7 billion dollars just for the first half of this year. This company is annihilating billions of dollars to make what? Two, four, five billion dollars of bread.
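The back-of-envelope method Ed describes, working revenue backwards from the 20 percent Microsoft revenue share, can be sketched like this. The payment figure below is a hypothetical placeholder chosen so the implied revenue matches the first-half number discussed; it is not a leaked or confirmed value:

```python
# Revenue share rate from the conversation: OpenAI reportedly pays
# Microsoft 20% of its revenue, so revenue can be inferred from the payment.
REV_SHARE = 0.20

def implied_revenue(payment_to_microsoft: float) -> float:
    """Total revenue implied by a given revenue-share payment."""
    return payment_to_microsoft / REV_SHARE

# A hypothetical $0.54B payment would imply roughly $2.7B of revenue,
# matching the first-half figure mentioned above.
print(implied_revenue(0.54e9) / 1e9)

# Comparing the reported $12.4B inference spend (Jan 2024 - Sep 2025)
# against roughly $4.3B of inferred revenue gives the "two or three
# times revenue" cost ratio cited in the conversation.
print(round(12.4e9 / 4.3e9, 2))
```

Since the 20 percent share sets a floor rather than an exact figure, the inferred revenue is a minimum, which is why the transcript hedges with "at least."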
Jon Bateman: You've got the training part that you do with the model that kind of is the initial creation of the model, and that's a huge expense.
Ed Zitron: And updating it as well.
Jon Bateman: Right. And then afterward, you've got inference, which is operating the model on a going-forward basis. And so, to your understanding, OpenAI is currently losing money on just the operations, not even including the startup expenses.
Ed Zitron: It's not including the training, it's not including the staff, the real estate, the data. None of that is involved in this. Just on the inference, this company is annihilating money.
Jon Bateman: Is that your general understanding of the AI industry writ large? Are there companies that, as far as we know, are making money on a going-forward basis, just considering inference spend versus revenue on the operations of the model, setting aside training?
Ed Zitron: I don't think a single one of them is profitable on inference. Okay. I think the whole thing is complete wank. Honestly, after seeing what I've seen with OpenAI's numbers, I don't know who's telling the truth anymore. I'm not saying OpenAI is misleading anyone; OpenAI has made no public statements of any kind about their inference. I'm not saying anything. However, leaks have suggested that inference costs were much lower. So the point is that I think every single one of these companies is unprofitable just on inference. I can't speak to anyone else. But I will say that a few weeks ago I published Anthropic's revenue, sorry, Anthropic's Amazon Web Services spend, and just their AWS bills were like 2.6-something billion dollars through September, on like 2.5-something billion dollars worth of revenue. So, just their AWS bills. Now, with them, I can't tell you if it was inference or not. I don't know what their training spend is. And they spend a bunch of money on Google Cloud as well. So who knows? But the point I'm making is that it's very obvious that the cost of inference is not going down. This has been the classic refrain from everyone. They're saying the cost of inference, so creating these outputs, is going down. And the reason they've said this is that the model companies keep making models that they're charging less for. Now, it's really easy to say, oh, this means the cost of inference is going down. But what this is is just subsidies. This is corporations subsidizing users. And I think there's going to be a kind of subprime AI crisis eventually, when these companies, faced with their gruesome margins, eventually have to start charging what it actually costs to run them. And I don't know how that is going to work, but if it happens, and I think it might, you're going to see startups across the board see their costs explode. And everybody, I think, yeah, basically everybody uses Anthropic, and basically everybody uses OpenAI.
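The subsidy arithmetic here can be illustrated with the Anthropic figures mentioned, roughly $2.5B of revenue against $2.6B of AWS bills. The 30 percent target margin in the repricing step is an arbitrary assumption for illustration, not anything Anthropic has announced:

```python
def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    """Gross margin as a fraction of revenue; negative means selling below cost."""
    return (revenue - cost_of_revenue) / revenue

# ~$2.5B of revenue against ~$2.6B of AWS bills alone:
m = gross_margin(2.5e9, 2.6e9)
print(f"{m:.0%}")  # -4%, before training, staff, or any other costs

# If prices later had to cover that cost plus a hypothetical 30% gross
# margin, customer bills would scale by cost / (0.7 * revenue):
multiplier = 2.6e9 / (0.7 * 2.5e9)
print(round(multiplier, 2))  # roughly a 1.5x price increase
```

This is the "start charging what it actually costs" scenario: even a modest target margin implies a substantial across-the-board price increase for downstream customers.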
Jon Bateman: And it's a tried and true tech business strategy to try to grow as fast as you can while subsidizing the customers to a degree with investor money. And then eventually you reach some pivot point where you say, Okay, we've really got to be profitable now. You start charging people more and you also start looking for other revenue streams.
Ed Zitron: So the reason I crack my knuckles there is I've had this conversation a lot. So let's talk about the ones that everyone's thinking about. Amazon Web Services. That's the one everyone loves to say. Amazon Web Services, Ed. Amazon Web Services, they burned so much money. Burned so much money to make the most profitable thing in cloud computing. They burned about 68 billion dollars in today's money in the space of nine years. Yeah. Whereas, let's see, if OpenAI is maybe 30 percent of Microsoft's CapEx, we're talking over 100 billion dollars just to build the infrastructure for OpenAI. Same deal with Anthropic. So they have already massively outpaced that. Uber's the other one. People love saying Uber. Uber, Ed, Uber's the one. Uber's gonna, nope. Yeah. Uber's worst years, I think, were 2020, when they couldn't operate their business, and then another year where they lost, I think, like 7 billion due to R&D and marketing problems. Uber was a terribly run company, but they had a path to at least some kind of GAAP profitability. They also only burned about 33 billion dollars. That's nothing compared to either OpenAI or Anthropic.
Jon Bateman: The scale of the spend, to be sure, yes, of OpenAI, Anthropic, and other AI companies is extraordinary. Yeah, and investors' patience for that has been very, very striking. And of course, that could run out at any time. But I'd want to come back to this question of whether there are untapped revenue streams that could be turned on in a pinch. One of the things that people have pointed to is advertisements. ChatGPT, Claude, they don't have advertisements in the models or as part of the interface. But this is one of the arguments people will make: okay, if you really need to turn on the regular revenue spigots, there's a way to do that. What do you think? Because, of course, Google and Facebook make gobs of money from ads. And we know ChatGPT does have hundreds of millions of users in some form, whether they're paying or otherwise. That is a substantial asset.
Ed Zitron: Perplexity, the AI search engine, you're familiar with it. They turned on advertising in 2024. Do you want to know how much money they made? $20,000. You wouldn't even get a goddamn Super Bowl ticket for that.
Jon Bateman: Wow. What happened then?
Ed Zitron: They made $20,000. Well, I'll tell you what didn't happen. They didn't manage to make much money, and indeed they ended up shutting down the program this year, and their head of ads left. Now, here's the question: why is the most obvious ad-based AI company walking away from AI ads? In my opinion, it's because they don't work so good. Because advertisers require replicability, which LLMs are terrible at, and they require reliability of placement. You may remember when Elon Musk bought Twitter, he had trouble with advertisers walking because they couldn't stop their stuff being shown near sexual or graphic content.
Jon Bateman: Yeah, brand safety, people call this.
Ed Zitron: Yeah. Exactly. So that is a massive problem. Sure, OpenAI could turn this on, but where would they go? Would they have banner ads? With any kind of new ad platform, there are always experimental budgets. Companies in ads don't tend to just go, wow, big new thing, here's a hundred million dollars. They'll go, here's a million. Here's a couple hundred thousand. Think about it: when you've got things like TikTok and Facebook and Instagram and Google that all have very good return on investment, why would you take an experimental platform and put your money there?
Jon Bateman: I think you're right that there are a lot of hurdles that would have to be overcome. People would have to be willing to test out a completely new type of advertising construct. I guess your argument would be that these are insuperable hurdles, that, you know, people could not figure this out. There isn't a better way.
Ed Zitron: Maybe they could figure it out, but there is a vast gap between figuring it out and turning this into a massive revenue driver. They've got to work it all out, it's got to work, and then it's got to actually have a return on investment. Then they've got to wait for advertisers to come in, and then they need to make sure it keeps working. Google has thousands of sales staff on their ads team. They have tons of physical and digital infrastructure. Google owns the publisher, the place you place the ads, and the platform, Google Search. So there is a massive reason that Google does so well, and it's their huge monopoly. Same thing with Facebook ads: they have the monopoly on ads on Instagram and Facebook. There is a reason these companies are big, and it's not because their products are good. It's because they've created a massive audience and they've worked out how to plumb it. Perhaps if OpenAI had twenty years to work this out, they would be in a better position. But when you're burning billions, I don't think they've got that long.
Jon Bateman: Maybe there is room for another player serving ads in a different space, just like there's room for both Google and Meta competing against each other, but also basically serving ads.
Ed Zitron: Sure, theoretically, I think it's possible. But you'll also notice that Google doesn't appear to have worked out ads in AI Mode. And that's, I'm not even being sarcastic or facetious, right? I'm just saying: if Google can't work it out, how does OpenAI?
Jon Bateman: No, it's true. It's also a threat to their business too, right? The more I use the AI Overview at the top of Google, the less I'm clicking on Google's ads. So Google could experience a kind of cannibalization here, where their own revenue is getting eaten into. I want to come back to Google and Meta and the big public companies in a moment. But just sticking with the OpenAIs and Anthropics, another argument people make about the path to profitability: you mentioned driving costs down through cheaper inference and training. The other option would be driving revenue up through improved value creation. And there the argument is that the models will get better in the future. The companies are even claiming that we're a year or two away from some kind of recursive self-improvement, where the AIs could train and boost themselves, and that the pace of capability growth would not only sustain but radically increase, to the point where maybe we might be willing to pay way more for it. You hear stories of how we could have an AI employee in the future. Now, that's, you know, obviously a very bold and ambitious vision. But just generally, what do you make of the idea that AI could become much more valuable to its users in the future and we'd be willing to pay a lot more for it?
Ed Zitron: If my grandmother had wheels, she'd be a bicycle. I mean, sure, you can say that about anything. This table could become a hawk and fly out of this room. Nothing about what you're saying is reasonable based on the very basics. These people have been saying this for years. They have hit a wall where they're running out of training data. They are running out of the training data that they need to make the models better. The models getting better is also something judged by them, by benchmarks that are rigged in their favor, like SWE-bench, because large language models are not reliable enough to do tasks.
Jon Bateman: I mean, I'll say the advent of reasoning models...
Ed Zitron: But we're talking about better here. You said better. You said improved. You cannot do that based on anecdotal data. You actually have to measure efficacy based on tests. You cannot just say, well, I found it good. Because, sure, fine, the advent of reasoning models, test-time compute, has become the whole new thing that they're claiming will make things better. Why was GPT-5 such a damp squib, then? Why has all of this stuff been so mediocre for so long? Because that's the thing. You're saying, yeah, this could be an AI employee. Well, you can't rely on it. It doesn't seem to be able to do anything. If I needed someone useless, I could find a human to do that. It's just, I'm not even being sarcastic, I'm just saying, let's get tangible here. Let's actually look at the thing and what it can do. Because if you give them the oxygen to just say, in the future it will be recursively self-learning... what?
Jon Bateman: I know, I know. No, I get it. It's a deus ex machina. It's, you know, an invisible-hand kind of thing. And I will say, clearly you and I have different points of view on how valuable current AI is. I feel that it's valuable for me, and so I maybe give it more license when I imagine how it could improve in the future. But I think there's another way we could look at this. I sometimes think about this as the fake-it-till-you-make-it theory. And that is that you could believe, like Gary Marcus or Yann LeCun, that the LLM boom is kind of a bust, that people are driving toward a dead end. And yet so much money is pouring into this industry that it could provide a bridge toward some shift in the technological paradigm that would allow them to get out of this cul-de-sac. Surely the big labs must have other types of experiments that they're running, on non-LLM paradigms, on new forms of deep learning, on, you know, neuro-symbolic reasoning, other games that they're playing, algorithmic improvements. And maybe just so much money is going into this field that, much like the space race, we could kind of manifest technological breakthroughs to some degree. What do you think of that?
Ed Zitron: I don't really know what to say. So, just to break down what you said: if they put more money into it, something might happen. I'm not being sarcastic, I'm just saying that's effectively what you said. They surely are working on stuff? Why haven't we heard about it? OpenAI said last year that when they have breakthroughs, they pretty much talk about them immediately. Q*, which eventually became reasoning models, was leaked maybe a couple months beforehand. Actually, no, I take that back. Q* was leaked like eight months beforehand, and it ended up being way less important than it seemed, and they've not really done the same thing since. The thing is, you can say surely, but we've actually not seen any sign they're working on anything other than large language models. They don't talk about anything other than large language models. They train using GPUs on large language models. It's all large language models.
Jon Bateman: It's absolutely speculation on my part. I guess the space race example, to me, does illustrate that it is possible for humanity to accelerate our progress down a certain tech tree when there is the will and the resources put behind it. Some sort of extraordinary acceleration. We weren't destined to invent all of that moon technology. We just had the will, and we put a huge proportion of GDP and federal spending behind it in a ten-year period. And miracles did come from that. So yeah, I do think that possibility is there for AI.
Ed Zitron: But that's very different. First of all, the space race was a long time ago and government funded, not that there wasn't private enterprise involved in it, but there's all sorts of history with rocket fuel within that that I won't get into. Fundamentally, it's not the same thing, and it was way less money. This is hundreds of billions of dollars. The attention of every person in tech, every single engineer, the best engineers in the world, the best mathematicians in the world, the best, I don't know, I truly am not a scientist, so you can tell I'm running out of words for them. But all the king's horses and all the king's men, all the money, all the GPUs, every single GPU, every bit of attention from the media and investors, private investors, public investors, retail investors, all of it on AI, on LLMs. And we are here.
Jon Bateman: I will say one thing you're right about is that we already had the basic science to confirm that a moon landing was possible. So we embarked on it with that understanding. That is not the case for AGI or superintelligence. We have not scientifically validated that these goals are reachable on any kind of timeline. If we could pivot from OpenAI and Anthropic, because those are the pure AI plays in some ways, and they're the most attention-grabbing, the most famous if you're in this world. Actually, I think when people talk about an AI bubble, they're also talking to a large degree about the big public companies, the so-called Magnificent Seven: Google, Meta, Amazon, Apple to an extent, certainly Nvidia, maybe even Tesla. These are the big multi-trillion-dollar companies that have different business models. They're older, more established, have other revenue sources. What's your sense of these companies? Are they equally overvalued or on an unsustainable path?
Ed Zitron: Yes. I think that they don't burn, well, no, they're burning way more through their CapEx. But Microsoft, The Information has reported repeatedly, is having trouble selling Microsoft 365 with AI. Microsoft 365 is a cash cow. Over 440 million people pay for it. Based on my reporting, in August of this year, only eight million active paid licenses existed. That's doo-doo. That's terrible. Steve Ballmer would throw an Aeron chair at you. You'll also notice something curious: Microsoft stopped reporting their AI revenue in the first quarter of this year. Why? Do you think it's because it's good? Do you think it's because Microsoft is doing great? Why do you think Amazon, Meta, and Google, none of them, talk about their AI revenue? They talk about their CapEx. I think Amazon's gonna do 116 billion dollars in CapEx this year. Why aren't they talking about the revenue? And the answer is because they're not doing so good. If they were doing well, they would share it. If they were doing well, it'd be all they talk about. When has there ever been a time that a public company is making tons of money off of something and doesn't want to share it? And the answer is never.
Jon Bateman: They would have absolutely every incentive to share it. I mean, I guess the question that comes to mind for me is the cloud services aspects of these businesses, Microsoft Azure, Google Cloud, AWS, I think those are very, very high revenue sources.
Ed Zitron: They are, but not for AI.
Jon Bateman: In the tens or hundreds of millions of dollars. Okay. Yeah. Do we have a sense of what proportion of that is for AI related activity?
Ed Zitron: Well, I reported that OpenAI, through September of this year, spent 8.6-something billion dollars on inference just on Microsoft Azure. So there's that. The Information also reported, though I was not able to confirm it, that Microsoft sells access to Azure at cost for OpenAI to run their services. Which means Microsoft is not really making money off OpenAI, they're just making revenue. But outside of that, Microsoft doesn't want to talk for even a second, for some reason, about how much AI revenue they're making. And if they are making money off of this, I think they're probably not cracking, I don't know, more than a billion. I'm guessing here, this is all guesswork. I doubt they're making more than a billion or two a quarter each; I think it's probably less. A few months ago, an analyst estimated that Amazon would make five or six billion dollars off of AI this year. And these numbers seem high. They seem really good. Except.
Jon Bateman: Not in the scale of those businesses.
Ed Zitron: And it's not profitable. When you've got Microsoft making over 70 billion a year on Azure, and Amazon Web Services at, I forget exactly, over 120-something billion dollars annualized, the AI stuff is a drop in the bucket. Yet they're destroying their balance sheets and adding masses of depreciation to their earnings. They're literally burning their income so that they can do this. It's all just a mess, and it's a mess that will eventually land in the laps of retail investors, who will be the ones that suffer.
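The "drop in the bucket" comparison above can be sketched as back-of-the-envelope arithmetic. The figures below are the rough numbers the speakers cite in conversation (a ~$6 billion analyst estimate for Amazon's AI revenue, ~$120 billion of annualized AWS revenue, ~$70 billion of annual Azure revenue), not audited results.

```python
# Rough share-of-cloud arithmetic using the figures cited in the conversation.
# All inputs are the speakers' recollections and estimates, not audited numbers.
aws_annualized = 120e9       # dollars, total AWS revenue (annualized, approx.)
azure_annual = 70e9          # dollars, total Azure revenue (annual, approx.)
amazon_ai_estimate = 6e9     # dollars, analyst estimate of Amazon's AI revenue

ai_share_of_aws = amazon_ai_estimate / aws_annualized
print(f"AI revenue as a share of total AWS revenue: {ai_share_of_aws:.0%}")
# -> 5%
```

On these rough numbers, AI is a single-digit share of the cloud businesses, even as CapEx runs to eleven and twelve figures, which is the point being made.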
Jon Bateman: Yes, Microsoft, Google, and the others are burning through their balance sheets. But as I understand it, they're largely not financing the data center build-out with debt. They're beginning to bring some debt to bear on this, but I think a lot of it is being financed with cash flow. These are profitable businesses, and wouldn't they remain profitable even if the AI bubble bursts? Obviously the stock price would get totally washed out and a lot of investors would lose money. But Microsoft wouldn't go poof. Google wouldn't go poof.
Ed Zitron: That's actually a really important point. Nothing I'm saying suggests any of the hyperscalers are ever going to go under. I've never said that; I've maintained this is not the case. But Google on November third had a twenty-five-billion-dollar bond sale. Meta, I think, did seventeen or twenty billion, forgive me for not knowing off the top of my head. Oracle has massively leveraged their future in debt. And what's crazier is Meta just did, I think, a thirty-billion-dollar off-balance-sheet special purpose vehicle debt deal with a bunch of investors to build the Hyperion data center. It was a cabal of different banks and private equity firms and Blue Owl, who is part of the Crusoe deal for Oracle and OpenAI. But nevertheless, this deal is crazy, because Meta has guaranteed they will pay off the debts if they walk away from leasing this data center. So actually, there is now, and there is going to be, massive debt associated with this. Microsoft has tons of debt as well. It's just that only now is it starting to eat into free cash flow, because, wouldn't you know, they're not making a bunch of money from AI.
Jon Bateman: Yeah, I think we've now seen the first wave of big debt deals, so that does create some risk. Of course, these are huge companies, right? So 25, 30 billion dollars is, to some extent, the cost of doing business. But I think this gets at part of the bubble question, because when people worry about a bubble, they're worried about who's left holding the bag if the whole thing comes crumbling down. You mentioned retail investors, but could you just walk us through how we should think about the losers in a bubble scenario? Who would get screwed? There's all of a sudden a realization that the AI industry is overextended, overbuilt, over-invested, and there's a massive pullback. Who loses in that scenario?
Ed Zitron: Everyone who invested in NVIDIA. That's a great place to start, because NVIDIA makes physical things, actual GPUs. However well their strategy may work, NVIDIA is in a weird position, because last quarterly earnings, and I think they have another one coming up a week from recording this, so forgive me if these numbers change, last quarter saw 55 percent year-over-year growth. This should lead to everybody popping champagne. The street got pissy. The street said, we don't like this. Because NVIDIA used to grow 100 percent year over year. 100-and-something percent, 146 percent one quarter, I think. The market has an unhealthy relationship with NVIDIA, which has been on an incredible growth cycle. It was something like eight percent of the stock market's value, and I think it's higher now. It's astonishing. And you've got so much retail money, regular people buying into this, because everywhere they look, CNBC, Forbes, Bloomberg, everyone's saying AI is the biggest thing ever, number go up. And for now that's correct. But this is all sentiment-driven, because, remember, outside of NVIDIA there really isn't significant revenue in AI. There's not really anything holding this bloody thing up. Everyone's levering into NVIDIA, and NVIDIA is doing really well until it isn't. And when it crashes, it's gonna be because people get out at the hedge fund level, leaving the regular people, who can't panic-sell at that speed, panic-selling on top of panic-selling on top of panic-selling, at which point they're left holding the bag.
Jon Bateman: Yeah, I think you're right about that. I do worry about retail investors, and even with retail investors, I guess you could still say they made a decision to make that bet. But what worries me even more is the possibility of a financial contagion that could begin in the AI industry and ripple elsewhere, just like we saw in the housing market in 2008. The percentage of the stock market that's in the Mag 7 is so extreme; I think it's been hovering between 35 and 40 percent. If you saw, and I think it's conceivable, a 50 percent drop in those stocks, you could then have that translate immediately into almost a 25 percent drop in the overall stock market, and then have all sorts of counterparty risks. Maybe the big banks lose a lot. Maybe the rest of the S&P 500 is somehow exposed. And this could quickly turn into a financial crisis and maybe even a recession.
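The concentration arithmetic here can be made explicit. The 35-40 percent Magnificent Seven weighting and the hypothetical 50 percent drawdown are the hosts' rough figures; the naive pass-through below is simple multiplication and lands a bit under the "almost 25 percent" cited, because it ignores exactly the second-order effects the conversation goes on to discuss (counterparty losses, broad risk-off selling).

```python
# Naive first-order pass-through: if stocks making up a given share of
# total market cap fall by a given amount, the index falls by the product.
# Weighting and drawdown figures are the hosts' rough estimates.
drawdown = 0.50                     # hypothetical drop in Mag 7 stocks
for mag7_weight in (0.35, 0.40):    # Mag 7 share of total market cap
    market_drop = drawdown * mag7_weight
    print(f"Mag 7 weight {mag7_weight:.0%} -> overall market falls {market_drop:.1%}")
```

So the direct hit alone is on the order of 17-20 percent of the whole market; anything beyond that comes from contagion.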
Ed Zitron: Based on everything that's come out, we're already in a recession when you remove the money being plowed into data centers. But I want to be clear about something: I don't think this is going to be as bad as the great financial crisis. Because the great financial crisis was so heavily levered, within mortgages people couldn't pay, but also with banks that had bet on them, and bets on bets on bets, watch The Big Short, it actually does a good job of explaining it. But that was also a case where banks and insurance companies started running out of money. I don't think that's gonna happen here. I really hope it's not the case; I've not seen signs that it would be. My general point, though, is that so much more retail money is in the market than ever. When E-Trade first got popular, you had a lot of money lost by early retail investors trying to bet on the market. And I think the contagion, to your point, and I'm not a stock analyst, just to be clear, could be really harsh just because of the amount of weight even the Magnificent Seven has, like you said. I don't know whether it will translate to 25 percent losses elsewhere, but I do know a lot of people will lose a lot of money, especially the people who are just regular retail investors, users of Robinhood or what have you, who are just like, oh, I've read CNBC and they say AI is the biggest thing ever, I'm gonna put 10,000 dollars in NVIDIA at the top. And what do you think happens to that person?
Jon Bateman: I feel like we've switched roles in the conversation because earlier you were critiquing AI and I was defending it. But I think I'm actually more worried about the possible consequences of a bubble if one exists than you are, just because of the level of market concentration and the fact that we don't really know, or at least I don't know who all the counterparties are. But if it gets into the hedge funds, the big banks, you do wonder what an overall risk retrenchment looks like. And as you said, the AI spend is actually holding up GDP right now. If you subtract that, we could be in a recession in the real economy. And that's quite frightening.
Ed Zitron: It is.
Jon Bateman: Typically, in the past, when people have worried about some kind of financial mania, or irrational exuberance, to use Greenspan's memorable phrase, there's been discussion of whether the Fed should come in and raise interest rates to take some of the wind out of the market's sails. Now, everyone everywhere is asking if AI is in a bubble, but I have not heard anyone propose that the Fed raise interest rates. Instead, there's still this discussion about the Fed being on a path toward lowering interest rates, and the only question is how quickly it will do so. What's your sense of this? Because this is really the main public policy tool that we've used in the past to try to pull back on bubbles before they burst violently.
Ed Zitron: So the problem with this bubble, well, I'm not an expert in this, and I won't speak to interest rates because it's just not my area of expertise. But I will say this: this isn't something that I think interest rate hikes will help with. Sure, hikes might make credit less easily available, but the scale of the spend required to build an AI data center, about 50 billion dollars per gigawatt, taking two and a half years to build, means the time horizon of investment on top of these things from private equity and private credit is crazy bad. The amount of money you would have to make back to make this worthwhile is, actually, kind of insane. I calculated recently that by 2030, big tech needs to make two trillion dollars of extra revenue, on top of what they were already gonna make, just from AI, to make this worthwhile based on the CapEx and depreciation. This is an across-the-board problem. Everybody who has invested in AI data centers needs these things to be at full capacity, full utilization, all the time, paying top rates, forever. Otherwise the whole thing falls apart. Making more credit available only means you're spreading the risk, the risky idea, even. In many ways, data centers are a kind of thought contagion themselves. They get into the heads of private equity people, who go, oh, this is the future, this is like buying the fracking infrastructure of the future, we're just gonna print money. Except it's not the case, but because we live in a high-information, low-processing society, we have tons of people with tons of money who just don't think too hard. So lower interest rates might free up capital. It might mean there's more credit available for some parties. But at some point, even the most lascivious lenders are gonna say, hey, when are we gonna make money on this? How is this gonna make money? And no matter what the interest rates are, that's what people are gonna run into.
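The payback math behind the "$50 billion per gigawatt" figure can be sketched in a few lines. The CapEx number is the speaker's estimate; the five-year depreciation schedule and 30 percent operating margin are purely illustrative assumptions, not reported figures, but they show why utilization has to stay near full, at top rates, for the math to close.

```python
# Minimal payback sketch for an AI data center, under stated assumptions.
# Only the $50B/GW CapEx figure comes from the conversation; the
# depreciation schedule and margin are illustrative assumptions.
capex_per_gw = 50e9           # dollars of CapEx per gigawatt of capacity
depreciation_years = 5        # assumed useful life of the hardware
operating_margin = 0.30       # assumed margin on AI compute revenue

annual_depreciation = capex_per_gw / depreciation_years
# Revenue per gigawatt per year needed just to cover depreciation:
required_revenue = annual_depreciation / operating_margin

print(f"Depreciation per GW: ${annual_depreciation / 1e9:.0f}B/yr")
print(f"Revenue per GW just to cover it: ${required_revenue / 1e9:.1f}B/yr")
```

On these assumptions, every gigawatt has to generate tens of billions of dollars of revenue a year before it even breaks even on depreciation, which is the shape of the two-trillion-dollar claim.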
Jon Bateman: Yeah, I'll just say that low interest rates are the great facilitator of people making dumb decisions, right? So to the extent that there's already some kind of data center mind virus, low interest rates make it so much easier to go forward with that. Now, of course, we don't really have low interest rates today; we have moderately elevated interest rates by the standards of recent history. I just think there are two macroeconomic conversations happening here that really have not collided or integrated yet. Serious economists are starting to worry that we might be in a bubble, and yet there's still this expectation that the Fed will continue to lower interest rates, at least gradually. If those two things are not reconciled, we could make any bubble that exists far worse. Or to put it differently, because there's this expectation and desire to lower interest rates, we've taken off the table the main public policy tool that we have to ameliorate a bubble before it bursts violently. That's a very difficult situation to be in. I want to pivot a little bit to China, if I could, because the question that comes to mind for me is: if there is a bubble, is it just a U.S. bubble? Or is there a bubble everywhere that frontier AI is being developed and data centers are being built? And of course, China would be the other major global center there. Have you looked into the Chinese AI companies? Are they just as over-invested as the American ones?
Ed Zitron: They're actually less invested. It was Joseph Tsai, a Chinese billionaire, I believe, who said we're in a bubble. There was a story a few months ago about how there are a bunch of Chinese data centers that are empty, sorry, that are not being used. And on top of that, we keep getting stories out of China that they're finding ways to train models cheaply. Not just DeepSeek; there's Qwen, there's all sorts of different models. It seems China's approach is to make this thing cheaper. If anything, if they have a bubble, which I imagine they do, though I don't have much knowledge of Chinese finances, there's a good chance they're trying to pivot either away from this, or around this in a different way than America. But I will say it is strange we haven't had a Chinese OpenAI. And by that I mean, DeepSeek, I guess you could call that, but DeepSeek isn't signing 1.2 trillion dollars' worth of compute deals. They're not making 300-billion-dollar deals with Oracle. They're not claiming they'll build 250 gigawatts of data center capacity by 2030. Why is that not happening? Especially if you take the position that China is doing propaganda, why are they not doing the same thing?
Jon Bateman: This would really worry me if I were an American policymaker or national security leader who is invested in the AI narrative and in beating China, because China is, I think, a lot less exposed to bubble risk than we are. There's less stock market concentration in the AI companies in China. I think they're less overbuilt as far as data centers go. And there's more of an emphasis on diffusing AI within the real economy and using leaner and meaner open-source tools to get practical results. So there's a very real scenario in which the AI narrative explodes and the AI bubble bursts in the United States, but China weathers the storm, continues to be more economically sustainable in terms of its AI industry, and is then actually positioned to pull ahead in whatever direction this market takes.
Ed Zitron: Again, we have to trust what is coming out of China, which of course you should look at with questions. But the way to look at it is to just listen to the story here and listen to the story there. China doesn't appear to be trying to prop up a Chinese company; they're propping up China. They might talk about Huawei or what have you, or any number of different manufacturers, but you don't have the same rah-rah, we-must-build-more energy in China. Sure, you might have Chinese propaganda, but you don't have the same bubble around an NVIDIA. America's big problem is that we've built this reliance on one vendor that appears to make something with bad gross margins. The Information had a report about the B200 GPUs used in a lot of data centers in the GB200 racks, so 72 GPUs per rack, I believe. Negative 100 percent gross margins. They don't teach you that in business school, do they?
Jon Bateman: I think part of what's going on here, Ed, is that the U.S. happens to be out in front in the frontier AI race, and so we have built a national narrative around the importance and the glory of that. China is not ahead in the frontier race; they're taking this diffusion strategy, and therefore they've built a narrative and strategy around that. That could be the more stable narrative, if you're right that this is a bubble about to pop. Well, Ed, you've been very generous as I have thrown objection after objection and argument after argument at you. I want to end this conversation with a twofold question. First, for people who don't buy the bubble case, what is the single data point they should be looking at that could change their mind in the future? And the flip side of that: what data point could change your mind, where, if some miracle were to occur, the data would actually reveal that we're not in a bubble?
Ed Zitron: Well, we'll start with this: for the over half a trillion dollars that has been put into AI, we're looking at maybe 60 or 70 billion dollars of revenue. And I don't mean profit. I mean, more than likely, well, I mean 100 percent, the costs behind that 70 billion dollars were higher than 70 billion dollars. That's my data point. What would change my mind? I truly don't know, because I know so much about the hardware and software behind this. It would have to be that we worked out a way to run large language models for 0.0001 cents per query, some teeny tiny amount. And the truth is, I don't think that'll happen, because here's a crazy fact: three years in, we don't have a good handle on how much these things actually cost to run. That's not something that happens when we're confident in something.
Jon Bateman: Okay. So the data point that could conceivably change your mind, as implausible as you might find it, is an orders-of-magnitude increase in the efficiency of these models, the training and the inference, such that it actually becomes profitable to run them. What about for those people in the audience who aren't convinced of the bubble narrative? What's the thing they should watch going forward that could worsen over time, or prove your case definitively?
Ed Zitron: As the revenue increases, so do the costs. That really is it, and I've established that very well; it's been established repeatedly in reporting. And I think that story is only going to mature, or marinate, or, I don't know, get worse.
Jon Bateman: Okay. Well, I hope you're wrong, Ed, if only for the sake of the global economy and all of our financial futures. But I'll leave that for the audience to decide and we'll find out. Thanks a lot. This has been a really riveting conversation.
Ed Zitron: Thanks for having me, this has been awesome.



