

In The Media

Should We Fear Artificial Intelligence?

It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of artificial intelligence.

By R. Shashank Reddy
Published on Aug 7, 2017

Source: Livemint

A recent, relatively minor spat between Mark Zuckerberg and Elon Musk erupted online over the dangers of Artificial Intelligence. To briefly recap, in a Facebook Live session a couple of weeks back, Zuckerberg railed against people who talk about Artificial Intelligence-related “doomsday scenarios”, clearly hinting at fellow Silicon Valley leader Musk. Musk replied by stating that Zuckerberg’s “understanding of the subject is pretty limited”.

While the exchange itself did not move beyond this, Zuckerberg and Musk personify broadly the two sides of an ongoing debate on the dangers of Artificial Intelligence, ironically brought back into popular consciousness by recent (mostly incorrect) reports that Facebook shut down an Artificial Intelligence programme after it invented its own language.

But what is the key takeaway of the debate for policymakers and non-billionaires? Should one fear Artificial Intelligence?

As with most things, the answer is both yes and no. Beginning with why one must not “fear” Artificial Intelligence, such systems are actually pretty dumb. The much vaunted AlphaGo, for instance, would find it impossible to pick out a cat from a data set of animal pictures, unless it was reprogrammed completely and made to forget how to play Go.

This is because even the most intelligent systems today have artificial specific intelligence: they can perform one task better than any human can, but only that one task. Such a system would find it impossible to undertake any task it has not been specifically programmed for, however simple that task may seem to us.

This is also not the sort of Artificial Intelligence Musk is talking about. His warnings pertain to a type known as artificial general intelligence: a system with human-level intelligence, i.e., one that can do multiple tasks as easily as a human can and can engage in a “thought” process that closely resembles a human’s. Such artificial general intelligence, however, has so far remained theoretical, and is possibly decades away from being developed in any concrete manner, if at all. Therefore, any fear of a super-intelligent system that can turn on humans in the near future is quite baseless.

This, however, does not mean that there is nothing to fear when it comes to Artificial Intelligence. There are three broad areas where one should fear the effects and consequences, if not the technology itself.

First, and most importantly, jobs. While the possible negative effect of Artificial Intelligence on jobs has been a trending topic recently, there has been no academic or policy consensus on what the exact effect will be. A May 2017 study by Lawrence Mishel of the Economic Policy Institute, for example, argues that in the past, automation did not have any negative effect on the job market, but actually increased the number of available jobs.

However, this study has also come under some valid criticism, not least because it does not account for differences in the nature of automation between the period of its study and now. There can be no doubt that at least some jobs will be negatively affected by Artificial Intelligence, but the nature of these jobs and the nature of the jobs that may replace them, if at all, is hazy at best. It is this lack of clarity that one must be wary of.

Second, the use of Artificial Intelligence in weapons, leading to “autonomous weapons”, raises a number of difficult questions in international law. Whether a machine that has been given the ability to make life-and-death decisions on the battlefield can adequately account for subjective principles of war such as proportionality and precaution is an issue that civil society groups have consistently taken up over the past few years. The underlying concern here is not that weaponized Artificial Intelligence would be smart, but that it would not be smart enough. The consequences of this have been deemed serious enough for the UN to begin deliberating on the issue in an official Group of Governmental Experts forum this November.

Third, privacy and data security. It must be remembered that the entire Artificial Intelligence ecosystem is built on the availability of great amounts of data and enhancing efficiency requires continued availability of such data. Constant inputs and feedback loops are required to make Artificial Intelligence more intelligent.

This raises the question of where the required data comes from, and who owns and controls it. Facebook, Google, Amazon, and others depend on the immense amounts of data generated by their users every day. While the availability of this data may lead to better Artificial Intelligence, it also allows these companies, or anybody else with access to the data, to piece together a very detailed picture of individual users, a picture the users themselves may not have knowingly consented to. The possible authoritarian implications of this, ranging from indiscriminate surveillance to predictive policing, can be glimpsed in the recent plan released by China’s state council to make China an Artificial Intelligence superpower by 2030.

It is necessary to be open-eyed and clear-headed about the practical benefits and risks associated with the increasing prevalence of Artificial Intelligence. It is not going to go “rogue” and turn on humans (at least in the near future), and talk of such a theoretical existential risk must not blind policymakers, analysts, and academics to the very real issues raised by Artificial Intelligence.

This article was originally published in Livemint.

About the Author

R. Shashank Reddy

Former Research Analyst

R. Shashank Reddy was a research analyst at Carnegie India. His research focuses on the implications of emerging technologies and their governance for international and Indian security.


Carnegie India does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
