Podcast Episode
Carnegie India

Beyond Superintelligence: A Realist's Guide to AI


By Nidhi Singh and Sayash Kapoor
Published on Jul 10, 2025


Project

Technology and Society

This program focuses on five sets of imperatives: data, strategic technologies, emerging technologies, digital public infrastructure, and strategic partnerships.


Episode Summary

In this episode of Interpreting India, host Nidhi Singh is joined by Sayash Kapoor, co-author of AI Snake Oil, to unpack the myths, misconceptions, and exaggerated expectations around artificial intelligence. Kapoor challenges the dominant narratives of both utopian and dystopian AI futures and advocates instead for a more grounded perspective, viewing AI as a “normal technology,” akin to electricity or the internet, whose impact will unfold gradually over decades. Through a wide-ranging conversation, the episode examines the limitations of benchmark-based evaluation, the dangers of speculative AI policy, and the need for domain experts in shaping meaningful governance frameworks.

Episode Notes

The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning.

The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the enduring demand for radiologists as a case in point.

Kapoor introduces the concept of “AI as normal technology,” which rejects both the notion of imminent superintelligence and the dismissal of AI as a passing fad. He argues that, like other general-purpose technologies (electricity, the internet), AI will gradually reshape industries, mediated by social, economic, and organizational factors—not just technical capabilities.

The episode also examines the speculative worldviews put forth by documents like AI 2027, which warn of AGI-induced catastrophe. Kapoor outlines two key disagreements: current AI systems are not technically on track to achieve general intelligence, and even capable systems require human and institutional choices to wield real-world power.

On policy, Kapoor emphasizes the importance of investing in AI complements—such as education, workforce training, and regulatory frameworks—to enable meaningful and equitable AI integration. He advocates for resilience-focused policies, including cybersecurity preparedness, unemployment protection, and broader access to AI tools.

The episode concludes with a discussion on recalibrating expectations. Kapoor urges policymakers to move beyond benchmark scores and collaborate with domain experts to measure AI’s real impact. In a rapid-fire segment, he names the myth of AI predicting the future as the most misleading and humorously imagines a superintelligent AI fixing global cybersecurity first if it ever emerged.

Hosted by

Nidhi Singh
Senior Research Analyst and Program Manager, Technology and Society Program

Featuring

Sayash Kapoor

Co-author, AI Snake Oil

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

More Work from Interpreting India

  • Podcast Episode
    Deciphering the “Mother of All Trade Deals”: The India–EU FTA

    In this episode of Interpreting India, Dinakar Peri is joined by Mohan Kumar, former Indian Ambassador to France and a veteran trade negotiator, to unpack the newly concluded India–EU Free Trade Agreement and why he describes it as the “mother of all trade deals” for India. Kumar explains why the agreement is strategically significant, why the timing matters, and what it signals about India’s trade posture, competitiveness, and broader alignment between trade, technology, and security.

      Dinakar Peri, Mohan Kumar

  • Podcast Episode
    AI Adoption Journey for Population Scale: The UCAF Framework

    In this episode of Interpreting India, Nidhi Singh is joined by Shalini Kapoor, chief strategist for Data and AI at the EkStep Foundation, and Tanvi Lall, director for strategy at People+ai. They unpack why so many AI initiatives get stuck after impressive demos, and what it takes to move from pilots to real, sustained adoption. Drawing on research spanning 1,000+ use cases across 25 countries, the guests introduce the Use Case Adoption Framework (UCAF) and explain how India can translate AI ambition into population-scale impact—especially across public services, agriculture, health, and other high-priority sectors.

      Nidhi Singh, Shalini Kapoor, Tanvi Lall

  • Podcast Episode
    Scarcity, Sovereignty, Strategy: Mapping the Political Geography of AI Compute

    In this episode of Interpreting India, Adarsh Ranjan is joined by Zoe Jay Hawkins, co-founder and deputy executive director of the Tech Policy Design Institute. They explore the evolving idea of AI sovereignty, the geopolitics of compute, and how countries are navigating access to the foundational infrastructure that powers artificial intelligence. Drawing from her research at the Oxford Internet Institute, Zoe unpacks the political geography of AI compute, the rising concentration of AI chips and data centers, and what this means for both developed and developing economies.

      Adarsh Ranjan, Zoe Jay Hawkins

  • Podcast Episode
    Cybersecurity in Outer Space: A Growing Concern

    In this episode of Interpreting India, host Tejas Bharadwaj is joined by P. J. Blount, an assistant professor of space law at Durham University. Together, they delve into the critical topic of cybersecurity in outer space, exploring the challenges and implications of protecting space-based assets amidst rising geopolitical tensions and technological advancements. Blount shares insights from his extensive research in international space law and cyberspace governance, highlighting the complexities of legal attribution and the evolving landscape of space security.

      Tejas Bharadwaj, P. J. Blount

  • Podcast Episode
    Unbundling AI Openness: Beyond the Binary

    In this episode of Interpreting India, host Shruti Mittal speaks with Chinmayi Sharma, associate professor of law at Fordham Law School. Together, they explore the evolving and often misunderstood debate on openness in artificial intelligence. Drawing from her forthcoming paper, Unbundling AI Openness, in the Wisconsin Law Review, Sharma explains why the traditional “open versus closed” framing oversimplifies the reality of modern AI development.

      Shruti Mittal, Chinmayi Sharma

© 2026 Carnegie Endowment for International Peace. All rights reserved.