
In The Media
Carnegie India

AI Rules: From Three to Twenty-Three

Given the pace of progress in AI development, the expanding scope of its application, and the growing intensity of current research, it may not be too soon to revisit and revise Asimov's commands on computing.

By C. Raja Mohan
Published on Feb 5, 2017

Source: Indian Express

Many moons ago, in 1942, when intelligent computers began to figure in science fiction, Isaac Asimov laid down three rules for building robots: they must do no harm to humans, either through action or inaction; must obey human commands, except where these conflict with the first rule; and must protect their own existence so long as this does not violate the first two rules.

As computing power grows rapidly and begins to envelop many aspects of our lives, there has been growing concern that the Asimov rules, sensible as they are, may not be adequate for the 21st century. Scientists like Stephen Hawking and high-tech entrepreneurs like Elon Musk have sounded alarm bells about superintelligent computers ‘going rogue’ and dominating, if not destroying, human civilisation.

Not everyone agrees that we are close to such a catastrophic moment. But given the pace of progress in AI development, the expanding scope of its application, and the growing intensity of current research, it may not be too soon to revisit and revise Asimov's commands on computing.

That is precisely what a large group of researchers, businessmen, lawyers and ethicists did last month, when they produced a list of 23 recommendations for regulating the future of artificial intelligence. Although the principles have been endorsed by nearly 900 AI researchers, many will find them much too idealistic and vague.

Techno-optimists will dismiss the code as a ‘Luddite manifesto’. Some will pick nits over the difficulty of defining such things as ‘human values’. More important, lawyers will say the guidelines are not enforceable. For all its limitations, though, the AI code does constitute a significant reflection on the extraordinary challenges posed by robotics and machine learning.

A critical proposition is that the goal of AI research should be the pursuit of ‘beneficial intelligence’ and not ‘undirected intelligence’. AI ‘should benefit and empower as many people as possible’, and ‘the prosperity generated by AI should be shared broadly’. The idea of shared benefits in the guidelines is matched by the proposition that investments in AI research should be accompanied by funding for research in law, ethics and social studies that will address the broader questions emerging from advances in computing. The code also calls for more cooperation, trust and transparency among AI researchers and developers.

The new code also urges the AI community to avoid cutting corners on safety standards, as they compete with each other to reap the multiple commercial benefits from the emerging technology. The principles insist on verifiable safety of the AI systems throughout their operational life. The question of transparency in the systems is also considered critical. The code insists that if an AI system causes harm, ‘it should be possible to ascertain why.’

Ethics and values form an important dimension of the proposed AI code. It insists that the builders of AI systems are ‘stakeholders in the moral implications of their use and misuse’ and have the responsibility to design them to be ‘compatible with ideals of human dignity, rights, freedoms, and cultural diversity’.

The theme of human control over robots emerges as the biggest long-term challenge addressed by the AI code. In a reference to the threats posed by the new technologies to internal and international security, the code declares that AI should not be allowed to subvert human societies and urges nations to avoid an ‘arms race’ in the development of ‘lethal autonomous weapons’.

The code argues that AI could mark a ‘profound change in the history of life on earth’ and the catastrophic and existential risks that it entails should be managed with ‘commensurate care and resources’. Pointing to the possibility of machines learning on their own, producing their copies and acting autonomously, the code declares that AI systems designed to ‘self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.’

Not everyone is pleased with the idea of a code of conduct for AI development. Some are apprehensive of the possibility of governments stepping in to over-regulate a technology that is poised for take-off. But even they would agree that some kind of oversight is needed, and that AI itself might provide the appropriate solutions for effective human supervision.

At the international level, the major powers, especially the United States, China and Russia, are investing in a big way in AI to gain military and strategic advantages. As an expansive international discourse on AI unfolds, Delhi is uncharacteristically mute. For nearly seven decades, Delhi was always among the first movers in the global debates on the political, economic, social and ethical implications of technological revolutions. On AI, though, Delhi has much catching up to do.

This article was originally published in the Indian Express.

About the Author

C. Raja Mohan

Former Nonresident Senior Fellow, Carnegie India

A leading analyst of India’s foreign policy, Mohan is also an expert on South Asian security, great-power relations in Asia, and arms control.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
