
India’s Advance on AI Regulation

This paper provides a comprehensive analysis of AI regulation in India by examining perspectives across government, industry, and civil society stakeholders. It evaluates the current regulatory state and proposes a policy roadmap forward. Does India need new AI regulations? What should they look like? Who is driving this debate in India and what are their views?

Published on November 25, 2024

This publication was produced under Carnegie India’s Technology and Society Program. For details on the program’s funding, please visit the Carnegie India website. The views expressed in this piece are solely those of the authors.

Introduction

Artificial intelligence (AI) is a general-purpose technology that has existed since the early 1950s.1 Its trajectory is marked by cycles of hype and innovation, followed by periods of stagnation and disillusionment.2 Yet in 2022 alone, across more than a hundred countries, over thirty AI-related laws were passed.3 What explains this sudden rush to regulate AI?

Some say the launch of ChatGPT in late 2022 was a defining moment. It brought generative AI to the forefront, and along with it, concerns about bias, misinformation, copyright violations, and the impact on labor markets.4 One might also point to a confluence of factors—the massive breakthroughs in machine learning, new and powerful capabilities of large language models, and the global reach of social media—which has stoked the fears of policymakers and prompted new regulations in some countries.5

And yet, nobody seems to have a clear collective vision for how AI should be regulated. This has resulted in divergent approaches around the world—from comprehensive legislation in the European Union (EU) to technology-specific rules in China, and voluntary commitments in the United States.6 Despite these differences, global policymakers seem to agree on one thing—we must leverage the power of AI while mitigating its risks.7

Where does this leave India on AI regulation? The existing body of literature concerning India’s approach to regulating AI is disconnected, narrow, or superficial. It includes news coverage of regulatory proposals;8 brief commentaries on national and global policy developments;9 editorials on what India’s approach should be;10 summaries of the legal landscape;11 readouts of roundtable discussions;12 and analyses of specific regulatory issues involving AI.13 What is missing is a clear and comprehensive analysis of India’s overall advance on AI regulation. Who is driving the debate in India? What are the views of different stakeholders? Does India need new AI regulations? What should they look like?

What Does This Paper Do?

Aims and Objectives:

The goal of this paper is to answer two main questions:

  1. What is the state of AI regulation in India?
  2. What should be the way forward?

To that end, this paper will capture the views of government, industry, and civil society in India and suggest a policy roadmap to inform India’s advance on AI regulation.

Structure

The paper is divided into four parts:

Part I provides an overview of the current sentiment in government, industry, and civil society in India on the topic of AI regulation.

Part II explores the scope and objectives of AI regulation, the nature of AI risks, and areas where additional regulations may be required.

Part III examines global approaches to AI regulation and views in India.

Part IV suggests a policy roadmap for India on AI regulation.

Methodology

Our analysis is based on multiple discussions with key stakeholders over several months, both in closed-door and public settings.

We also conducted seventeen interviews with senior government officials, industry executives, lawyers, technologists, and scholars working specifically on AI policy in India. All interviewees have been granted anonymity to protect their privacy given their ongoing engagements on these issues and the sensitivity of these discussions.

Finally, we have referenced several books, policy documents, academic papers, news reports, and articles.

Part I: General Sentiments on AI Regulation

Government

Since 2022, the Indian government has oscillated between a hands-off approach to AI regulation and one that is more direct and interventionist,14 which has led to some confusion.

Broadly, India supports a “pro-innovation” approach to AI regulation. It wants to unlock the full potential of AI while taking into account the anticipated risks. This is reflected in the G20 Ministerial Declaration made during India’s presidency, as well as a statement in Parliament in April 2023 that “[the Indian government] is not considering bringing a law or regulating the growth of AI in the country.”15

However, around the same time, the Ministry of Electronics and Information Technology (MeitY) published a blueprint for a new Digital India Act, which includes a specific reference to the “regulation of high-risk AI systems.”16 Then, after a lull of close to a year, the government issued an advisory in March 2024 that jolted the industry.17 The advisory, which mandated compliance with immediate effect, directed companies to obtain the government’s permission before deploying certain AI models in India, and to take steps to prevent algorithmic discrimination and the distribution of deepfakes.18 Following sharp criticism, the advisory was withdrawn and replaced with a fresh one that continues to remain in force.19

The reason for the government’s fragmented approach is that there are multiple, differing views within the establishment. MeitY, which is the nodal ministry for technology regulation in the country, favors a “light touch approach.”20 An official suggested that the much-criticized AI advisory from March 2024 was the product of another agency’s influence and not the brainchild of MeitY.21 Some factions within the government want more regulation. For example, a member of the Prime Minister’s Economic Advisory Council has published a report that characterizes AI as a “complex adaptive system”22 that requires “proactive regulatory intervention.”23 Another key official involved in framing India’s AI policy said that MeitY was “not doing enough” to address the risks of AI.24

For now, it appears the Indian government is still building consensus while adopting a cautious approach. It has tasked the Office of the Principal Scientific Advisor (PSA), set up to advise the Prime Minister and the cabinet on matters of science and technology, to consult with different ministries and provide “strategic guidance” on AI regulation.25 A sub-committee, convened by MeitY and reporting to the PSA, has prepared a draft report on “AI Regulation,” though it has not yet been published.26

MeitY, for its part, is considering various regulatory options, including amending the Information Technology Act, 2000 (henceforth the IT Act), which would be less time-consuming than adopting new legislation such as the proposed Digital India Act.27

Meanwhile, sectoral regulators such as the Reserve Bank of India (RBI)28 and the Telecom Regulatory Authority of India (TRAI)29 have begun to articulate the risks of AI. Going forward, they are likely to play an important role in shaping policy and regulation. 

Lastly, the Prime Minister’s Office (PMO) and National Security Council Secretariat (NSCS) will be highly influential in AI policymaking, given their cross-agency mandate and strong leadership.

Industry

The technology industry, as a whole, does not have a single view on what India’s approach to AI regulation should be. As one lawyer put it, “This is a fragmented ecosystem consisting of big tech companies, startups, industry bodies, and VCs [venture capital firms] … there is no one position.”30

That said, the predominant view is that any strict regulation would stifle innovation and make it difficult for India to achieve the ambitious goals of the India AI mission, which was launched in March 2024 with an initial budget of Rs. 10,300 crores ($1.3 billion) over five years and spans strategic initiatives across compute, foundational models, datasets, skilling, and safe and trustworthy AI. For example, none of the industry representatives we spoke to expressed support for a new standalone AI law. One cautioned that it was more important for India to “get it right rather than to act swiftly.”31 Another said that “urgency will create problematic regulation.”32 Instead, one startup founder suggested that India adopt an “iterative, light-touch, and collaborative” approach to AI regulation.33

There are also some extreme positions in the industry. One tech policy executive said that there was no need for new regulations at all, arguing that AI presents no novel risks,34 a view few others shared. At the other end of the spectrum, some have explicitly called for regulation. Google president Kent Walker has previously stated, “AI is too important not to regulate, and too important not to regulate well.”35 Similarly, Microsoft president Brad Smith has observed, “there has never been an industry that has successfully regulated itself entirely… we need more laws, more regulation.”36

Some companies have advanced specific proposals for AI regulation in India. Microsoft has advocated for new laws targeted at “highly capable AI foundation models”;37 IBM has called on governments to “recognize co-regulatory mechanisms”;38 and Google has called for “a risk-based and proportionate approach to AI regulation [in India] focused on use cases,”39 a model supported by several companies.40

Overall, industry stakeholders in India favor a two-level approach to regulation:

  • Level 1: Self-regulation that enables firms to proactively address the risks of AI through voluntary commitments, self-certification, and similar models.
  • Level 2: Additional regulations to fill the legal vacuum and deal with high-risk AI use cases through bespoke rules, guidelines, and advisories.

Civil Society

Here, civil society refers to the third sector of society, distinct from government and business. This includes activists, scholars, academics, and lawyers.

Some civil society representatives have called for greater representation of women, gig workers, and other marginalized groups in the debate on AI regulation, because these groups are the most likely to bear the negative effects of AI deployments.

Some activists also expressed distrust of industry lobbying efforts on self-regulation. One scholar called the argument that regulation would stifle innovation a “convenient oversimplification” that benefits incumbent commercial actors. Another scholar argued that voluntary commitments are inadequate because they “merely outlined a set of principles”41 and encourage “experimentation” that could harm individuals and communities.42

Most representatives did, however, agree that India should not adopt a comprehensive AI law, at least for the time being. One scholar argued that an omnibus law might lack the nuance and context required to regulate AI.43 Another academic put it succinctly: “India needs more guidelines, less hard-coded legislation.”44 Academics are also wary of overbearing regulation and worry that new rules could restrict their access to AI systems required for public interest research in areas such as disaster management and cybersecurity.

At the same time, the academics we spoke to believe that AI presents new risks which, according to them, existing laws are ill-equipped to handle. They want regulators to intervene in areas where AI could cause irreversible harm and violate fundamental rights.45

According to one activist, “the government’s use of AI requires immediate intervention since there is a greater likelihood of impact on legal rights.” They called for a review of public procurement guidelines and the use of facial recognition technologies in public services.46

In the legal community, there are broadly two camps. One group believes that there is an unnecessary rush to “create and circulate legislative drafts,” when, in fact, only a narrow range of issues are a matter for rule-making.47 Lawyers in this camp believe that the focus should be on applying existing laws to AI to mitigate risks. The other group believes that a separate law for AI is required to deal with the unintended, downstream risks of AI being deployed across India in potentially harmful ways.48

Overall Sentiment

Across India’s government, industry, and civil society, there is broad agreement that:

  • India does not need a comprehensive AI law, at least for now;
  • many of the risks from AI can be addressed through existing regulations;
  • there are some risks for which new regulations may be required; and
  • self-regulation should be encouraged at this stage. Additional rules are required to protect consumers, especially for specific high-risk use cases.

However, there is disagreement about:

  • the nature and novelty of AI risks;
  • the extent to which current laws can deal with AI risks;
  • whether or not self-regulation can sufficiently address the risks of AI; and
  • the types of binding rules that are required and when they should be introduced.

This raises some important follow-up questions—What are the risks of AI? Are they novel? What are the gaps in existing laws? What aspect of AI should we regulate?

We explore these questions in Part II.

Part II: Regulation From First Principles

To answer the question of how AI should be regulated in India, it may be useful to reason by analogy. In a recent paper for RAND, Michael J. D. Vermeer compares AI with four other general-purpose technologies: nuclear technology, the internet, encryption, and genetic engineering. He lists various factors that inform the governance of each, such as the risks posed, the degree of consensus on those risks, and the role of public-private partnerships in their development.49 Reasoning by a similar analogy, we suggest that for a dual-use, general-purpose technology such as AI, three fundamental aspects of regulation need to be clear upfront:

  1. Objectives: Policymakers should clearly state the purpose of any coercive state action in the form of regulation. Generally, regulation is deemed necessary to address market failures that may be present in the form of negative externalities, information asymmetry, consumer harm, and abuse of market power.50 Another objective may be to enforce principles such as transparency, fairness, privacy, security, and accountability through a system of rights and obligations.51 The Indian government has outlined certain objectives for AI regulation—to mitigate user harm, reduce misinformation, increase accountability, and balance commercial interests—but has not explained why regulation is required in each case. For example, it is not clear why the government should intervene in contracts between creators and publishers, as proposed by the IT Minister under a “new AI law.”52 
  2. Scope: Regulating AI presents multiple challenges, including, for example, the “pacing problem,”53 definitional issues, and what is called the “black box” problem.54 As one expert put it, “regulating AI is like regulating operating systems.”55 Therefore, it is important to clarify upfront the aspects of AI that should be regulated. Experts suggest that AI regulation should cover three aspects—inputs, outputs, and processes. Inputs may include training data and copyright material; outputs include automated decisions and AI-generated content; and processes may include models and algorithms.56 Another expert suggested that India look at the regulation of “data, models, and applications separately.”57 This appears to be a common view, with a senior MeitY official publicly stating that “Instead of trying to regulate [artificial intelligence] technology, [the Indian government] is looking at regulating its applications.”58
  3. Liability: On the issue of whom the regulation should apply to, the de facto approach is to impose different obligations on the developer and the deployer, as has been done in the EU AI Act.59 The question of liability is also an important strategic issue. As one startup founder put it, “India’s competitive advantage lies in the application layer, so any regulation [on deployers] should account for this.”60 Multiple experts conceded that India’s legal regime requires a significant update to clarify issues of liability. Under the current IT Act, digital service providers are classified as an “intermediary” or “publisher” (or their sub-categories).61 However, as one lawyer commented, these existing definitions are “being stretched too thin” when applied to AI systems. This has caused confusion about which market actors need to comply with the rules, since “generative AI systems may not fall neatly within the purview of either publisher or intermediary.”62

Recommendations

Based on the above analysis, we recommend the following:

  1. The government should clarify the objectives of regulation, mapping any proposed intervention to specific market failures.
  2. Instead of regulating the underlying technology itself, policymakers should focus on standalone issues (for example, deepfakes) and individual aspects of the AI value chain (for example, data inputs).
  3. Laws must be clear about whom the regulations apply to and when those actors can be held liable. For example, the current platform classification and intermediary liability framework under the IT Act must be updated to reflect the AI value chain.

Risk-Based Approach to AI Regulation

A risk-based approach is the most popular model for AI regulation. Though implementation varies across jurisdictions, its core objective is to mitigate harm to individuals and society.63 A risk-based approach is also supported by senior officials in the Indian government, including the IT minister and the IT secretary,64 as well as several big tech companies.65

In the current discourse, there is a general tendency to conflate the notions of risk and harm, but there is a key difference between the two. As explained in a report prepared by a group of scientists led by professor Yoshua Bengio, considered one of the “godfathers” of AI, risk is derived from the “probability of an occurrence of harm and the severity of that harm.”66 In other words, risk has a “future-orientation” and “looks at the aggregate impacts of the system on groups of people and tries to (often controversially) quantify these harms.”67 For that reason, as one legal expert put it, “AI regulation should be risk-based, not harm-based because the harm has already occurred.”68 Therefore, an important exercise in a risk-based approach to regulation is to gather evidence of harm in order to measure and anticipate the associated level of risk.

Next is the question of how to classify these risks. A report commissioned by the UK government classifies AI risks into three types: malicious use risks, risks from malfunctions, and systemic risks.69 Risks can also be categorized as safety risks and fundamental rights risks, with overlaps between the two.70 Risks may also vary based on stages (for example, during design, development, or deployment); scope (systemic risks); time-scale (short, medium, or long-term); and the source of risk (inputs vs. outputs). Even as our understanding of AI risk continues to evolve, scholars at the Massachusetts Institute of Technology (MIT) have developed a repository of more than seven hundred AI risks as of 2024.71 Some governments have also incorporated these risk taxonomies into their legislative frameworks. For example, the EU’s Artificial Intelligence Act (AI Act) classifies risk into four levels: “unacceptable, high, limited, and minimal.”72

In the table below, we synthesize the available literature and present our own risk classification framework to inform future AI policy debates.

Table 1: AI risk classification

The table below summarizes key AI risks with examples of harm to inform future policy debates. For the purpose of this analysis, “AI” includes general-purpose AI, generative AI, and artificial general intelligence.

  Potential risk | Examples of harm
1. Harmful content | Unauthorized impersonation of an individual using AI-generated deepfakes, leading to violation of personality rights and financial loss.i
2. Privacy violations | Use of personal data without the consent of the individual for the purpose of training AI models, resulting in a violation of privacy rights.ii
3. Cybersecurity threats | Sophisticated attacks using AI systems, leading to breach of critical infrastructure through AI-powered cyber attacks.iii
4. Discrimination | AI systems discriminating against certain groups in hiring decisions, leading to violation of the fundamental right to equality.iv
5. Loss of control | AI systems making autonomous decisions without human oversight, leading to unintended consequences or failure to stop harmful actions, which could compromise individual safety.v
6. National security | Public safety compromised by AI-powered cyber attacks or the misuse of AI in CBRN (chemical, biological, radiological, and nuclear) weapons, leading to national security threats.vi
7. Product safety | Misleading advertisements about the performance of AI systems that exploit consumers and cause physical or financial harm.vii
8. Intellectual property rights violations | AI models being trained on copyrighted data without proper consent or compensation, undermining the rights of creators and owners.viii
9. Market concentration | Increased ownership and control of AI technologies by a small number of powerful corporations, limiting access to compute resources.ix
10. Global inequality | Widening digital divide between the Global North and South, with wealthier nations gaining access to AI technologies faster, deepening economic and technological disparities.x
11. Job displacement | Automation of jobs through AI systems, leading to large-scale unemployment in industries such as manufacturing, customer service, and transportation.xi
12. Environmental degradation | Increased energy consumption from training and deploying large AI models, contributing to environmental harm such as carbon emissions and resource depletion.xii
13. Superintelligence | Risk of AI systems surpassing human intelligence, leading to existential risks if AI’s goals diverge from humanity’s interests.xiii

i Nayan Chandra Mishra, “Urgently Need: A Law to Protect Consumers From Deep Fake Ads,” The Indian Express, October 24, 2023, https://indianexpress.com/article/opinion/columns/law-deep-fake-ads-anil-kapoor-personality-rights-8997267/.

ii “LinkedIn Users Say Their Data Is Being Collected for Generative AI Training Without Permission,” The Hindu, September 19, 2024, https://www.thehindu.com/sci-tech/technology/linkedin-users-say-their-data-is-being-collected-for-generative-ai-training-without-permission/article68658540.ece.

iii Kevin Williams, “Cyber Physical Attacks Fueled By AI are a Growing Threat, Experts Say,” CNBC, March 3, 2024, https://www.cnbc.com/2024/03/03/cyber-physical-attacks-fueled-by-ai-are-a-growing-threat-experts-say.html#.

iv Jeanita Lyman, “Workday Facing Discrimination Lawsuit Over AI Hiring Software,” Pleasanton Weekly, July 18, 2024, https://www.pleasantonweekly.com/courts/2024/07/18/workday-facing-discrimination-lawsuit-over-ai-hiring-software/.

v Adam Satariano and Roser Toll Pifarré, “An Algorithm Told Police She Was Safe. Then Her Husband Killed Her,” New York Times, July 18, 2024, https://www.nytimes.com/interactive/2024/07/18/technology/spain-domestic-violence-viogen-algorithm.html.

vi Raina Talwar Bhatia, Evi Fuelle, and Lucia Gamboa, “AI CBRN Risks: Governance Lessons from the Most Dangerous Misuses of AI,” Credo AI, August 30, 2024, https://www.credo.ai/blog/ai-cbrn-risks-governance-lessons-from-the-most-dangerous-misuses-of-ai#.

vii Adam Lashinsky, “Watch Out. Companies Are Using ‘AI Washing’ to Mislead Consumers,” The Washington Post, January 29, 2024, https://www.washingtonpost.com/opinions/2024/01/29/sec-ftc-ai-c3-internet-fraud/.

viii Annapurna Roy, “Indian Publishers Seek Rules for Copyright Protection Against Generative AI Models,” The Economic Times, January 26, 2024, https://economictimes.indiatimes.com/tech/technology/indian-publishers-seek-rules-for-copyright-protection-against-generative-ai-models/articleshow/107154425.cms?from=mdr.

ix Jai Vipra and Sarah Myers West, “Computational Power and AI,” AI Now Institute, September 25, 2023, https://ainowinstitute.org/publication/policy/compute-and-ai.

x Philip Schellekens and David Skilling, “Three Reasons Why AI May Widen Global Inequality,” Centre for Global Development, October 17, 2024, https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality#.

xi Mark Talmage-Rostron, “How Will Artificial Intelligence Affect Jobs 2024-2030,” Nexford University, January 10, 2024, https://www.nexford.edu/insights/how-will-ai-affect-jobs.

xii Renée Cho, “AI’s Growing Carbon Footprint,” State of the Planet, Columbia Climate School, June 9, 2023, https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/.

xiii Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, (Oxford University Press, July 2014).

How Should India Classify AI Risks?

Currently, there is no credible AI risk classification framework for India that is based on empirical evidence of harm. There are some examples of standalone risk assessments, but they are incomplete and unsubstantiated. For example, a 2021 NITI Aayog report outlines certain “risks and considerations” to operationalize responsible AI practices but does not spell out those risks.73 Another report, by TRAI, briefly mentions certain risks—“low quality data, data biases, data security, data privacy, inaccurate or biased algorithm, and unethical use of AI”—but jumps straight into regulatory principles without analyzing them.74 Similarly, the Telecom Engineering Center has published a draft AI risk assessment framework, but it focuses entirely on fairness outcomes.75

Therefore, to support future policymaking, we have identified five categories of risks that are most relevant from a regulatory perspective:76

  1. Malicious uses, including the creation and distribution of harmful AI-generated content that could result in the violation of legal rights or threat to public order and safety. 
  2. Algorithmic discrimination, which could arise due to various factors, such as the use of unreliable or unrepresentative datasets in AI systems, and which may result in financial losses, loss of opportunity, and pose a threat to other fundamental rights and freedoms.
  3. Transparency failures resulting from the lack of adequate and relevant disclosures about how AI systems are being used to make decisions, what data is being used to train AI systems, and other forms of information asymmetry that pose a threat to privacy and other rights.
  4. Systemic risks resulting from the dependence on a small number of AI systems in critical sectors like finance, healthcare, national security, and other forms of market concentration that could cause disruptions on a broad scale with potentially catastrophic effects.
  5. Loss of control due to lack of human oversight in the development and use of autonomous AI systems, which could result in unintended consequences and threats to national security and public safety.

We recommend that these specific categories of risks be studied in detail in an effort to develop an appropriate AI risk classification framework for India. Moreover, since the general consensus is that India should regulate “high-risk use cases,”77 we suggest analyzing proposed AI deployments based on these risk vectors. Although there is no standard definition, “high-risk” applications generally include those in critical infrastructure, lending, credit scoring, insurance, product safety, consumer rights, law enforcement, and justice delivery.78 A regulatory approach focused on “high-risk use cases” is likely to find favor in India, though it will have to be an iterative exercise grounded in scientific research.

Risk classification should also be grounded in tangible evidence of harm and account for local factors, such as AI adoption, consumer awareness, and digital literacy levels in India. Market studies, like the one being conducted by the Competition Commission of India, are useful references to identify specific market failures in India’s AI ecosystem.79

From a regulatory standpoint, a well-considered AI risk assessment would also help identify risks for which regulation is not warranted.80 In some cases, risks can be addressed through industrial policy (for example, enabling access to compute through subsidies and reskilling individuals to mitigate job displacement).81 Long-term risks may not warrant immediate regulation (for example, existential threats to humanity). Many other risks can be addressed through existing regulations—an issue that we analyze in more detail in the next section.

What Are the Gaps in Existing Laws?

India has a complex legal system comprising the Constitution, statutory laws, rules, regulations, and guidelines. From an AI perspective, the relevant areas of law are privacy law, intellectual property law, competition law, media law, employment law, consumer law, criminal law, contract law, and tort law.82

The common view across government, industry, and civil society groups is that this existing legal framework can be applied to address many of the AI risks outlined in this paper. The table below illustrates how existing statutory laws can be applied to deal with some AI risks.

Table 2: Applicability of existing statutory laws to certain AI risks (illustrative)

Nature of harm | Applicable statutory laws
Depiction of a child in a sexually explicit video that is AI-generated
  • Information Technology Act, 2000
  • Protection of Children from Sexual Offences Act, 2012
  • Bharatiya Nyaya Sanhita, 2023
Unauthorized impersonation using AI-generated deepfakes
  • Bharatiya Nyaya Sanhita, 2023
  • Information Technology Act, 2000
Discrimination in hiring decisions using AI recruitment tools
  • Rights of Persons with Disabilities Act, 2016
  • Transgender Persons (Protection of Rights) Act, 2019
  • Code on Wages, 2019
  • Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act, 1989
Use of an individual’s personal data without consent to train AI models
  • Digital Personal Data Protection Act, 2023
  • Information Technology Act, 2000
Misleading ads about the reliability or performance of an AI service
  • Consumer Protection Act, 2019
Use of copyright-protected material in AI-generated content without permission of the author or owner
  • The Copyright Act, 1957

One legal expert suggested that policymakers should spend the next six to twelve months applying existing laws to AI use cases to understand potential gaps in the current framework, and that the Department of Legal Affairs, Ministry of Law and Justice, Government of India, should be entrusted with this task.83

Another important factor will be the role of courts in interpreting, adapting, and enforcing existing provisions of law. So far, Indian courts have dealt with only a handful of cases involving modern AI systems, focused primarily on AI-generated content involving the likeness of public figures. A cursory review of cases indicates that Indian courts are content to apply existing legal provisions to provide redress for now.84

Table 3: Applicability of existing Indian laws to deepfakes (illustrative)

The table below illustrates which existing statutory laws would apply in the case of a deepfake image or video being circulated without the permission of the individual.

Nature of harm | Applicable laws
Cheating by impersonation
  • Section 66D of the Information Technology Act, 2000
  • Section 319 of the Bharatiya Nyaya Sanhita, 2023
Transmitting obscene material
  • Sections 67A and 67B of the Information Technology Act, 2000
Causing harm to reputation
  • Section 356 of the Bharatiya Nyaya Sanhita, 2023
Failure to observe due diligence guidelines for intermediaries
  • Section 79 of the Information Technology Act, 2000, read with Rule 3 of the Intermediary Guidelines

Experts have, however, warned that additional regulations are required because “there are fundamentally new risks and harms emerging from AI that existing laws are not equipped to deal with.”85 One expert gave the example of a hospital collecting sensitive health data for a medical diagnosis, which was then repurposed to train AI models without the person’s knowledge86—an example of how “consent-based regimes break down completely in the AI context.”87 Multiple experts also said that new legal rights are required to protect individuals and society given the pervasiveness of AI systems, the lack of transparency, and the disproportionate impact of these systems on vulnerable communities.88

Therefore, on the question of whether new regulations are required to deal with the risks of AI, we suggest the following approach to guide future policymaking:89

1.     No additional regulations are required in cases where:

  • there is no evidence of consumer harm or other market failure;
  • the risks can be adequately addressed through industrial policy (for example, reskilling initiatives to mitigate job displacement or subsidies to enable access to compute);
  • there is no pressing need to adopt regulation at this stage (for example, the hypothetical existential threat to humanity from artificial general intelligence);
  • risks can be addressed through existing regulations (for example, consumer harm arising from misleading advertisements about the performance of AI systems can be adequately addressed under existing laws).90

2.     Clarifications or targeted amendments are required in cases where advisories, guidelines, or targeted legal amendments can sufficiently address risks, instead of adopting new and comprehensive rules. For example, the risks relating to the circulation of deepfakes can be broadly addressed through existing laws.91 However, regulators should clarify to whom these existing rules apply and the circumstances under which those parties can be held liable.92

3.     New regulations should be considered to address market failures and protect consumers. For example, introducing a right to compensation in the case of misuse of AI and the right to object to automated decision-making to protect fundamental rights.93 New transparency obligations for certain AI systems should also be considered to address the risks relating to information asymmetry.94

It is beyond the scope of this paper to conduct a comprehensive analysis of all the relevant laws and regulations and where they fall short in the context of AI systems. Therefore, we recommend that a comprehensive regulatory gap analysis be conducted to help inform future policymaking in India.

Part III: Different Approaches to AI Regulation

In Part II, we identified certain AI risks for which new regulations may be warranted, subject to evidence of harm. In Part III, we examine the different regulatory approaches that can be adopted to address these risks.

What Are the Possible Regulatory Approaches?

Across the globe, three approaches to AI regulation have been adopted so far:

  1. Self-regulation: In this approach, a “group of firms in a particular industry, or entire industry sectors agree to act in prescribed ways, according to a set of rules or principles.”95 In the AI context, self-regulation is generally organized around principles of privacy, security, transparency, fairness, accountability, and trust.96 Self-regulation can be implemented in many ways, such as voluntary commitments, self-certifications, and impact assessments.
  2. Co-regulation: In this approach, the government or regulators play a more proactive role by developing, recognizing, endorsing, or implementing standards. This is a more stringent approach and “represents a midpoint in the continuum between self-regulation and full government regulation.”97 Examples of co-regulation include codes of practice and risk management frameworks.
  3. Binding regulations: In this approach, policymakers enact a law or another binding framework whose provisions are legally enforceable. The most prominent example is the EU’s Artificial Intelligence Act,98 which contains rights and obligations in relation to AI systems.

Each of these approaches has distinct advantages and disadvantages. According to a report prepared by the Stanford Cyber Policy Center,99 self-regulation helps tap into industry expertise, provides flexibility, and encourages rapid innovation, but lacks sufficient accountability and enforceability. Co-regulation enables collaboration between companies and regulators, allowing for an iterative approach, but often lacks the necessary enforcement mechanisms required to address market failures. Binding regulation provides clear accountability mechanisms and government oversight but could stifle innovation due to bureaucratic delays and the lack of expertise and adaptability.

Therefore, policymakers should carefully evaluate the relative costs and benefits of each approach before adopting it into their domestic frameworks.

How Are Different Countries Approaching AI Regulation?

Regulatory approaches in different countries reflect their own socio-economic priorities, legal traditions, and governance models. For example, the EU’s rights-based approach, in the form of a comprehensive statutory law, seeks to “protect health, safety, and fundamental rights.”100 On the other hand, China, with its strong desire for state control, prioritizes social order and the protection of “Socialist Core Values” in its rules on AI-generated content.101 Japan, on its part, has professed a “human-centric” approach to AI that aligns with its broader societal goals.102 Singapore and the United Kingdom have adopted a principle-based approach that reflects a pragmatic style, tailoring rules to specific industries.103 The table below provides an overview of these different global approaches.

Table 4: Summary of approaches to AI regulation in different jurisdictions

The table below provides a summary of the different approaches to AI regulation in various jurisdictions as of the date of publication of this paper.

Jurisdiction Summary of approach Type of regulation
Australia The government has released a discussion paper proposing mandatory guardrails to regulate AI in high-risk settings and general-purpose AI models.i Binding government regulations are currently under discussion.
Brazil Reviewing proposals for a new AI law that protects fundamental rights and ensures secure, reliable AI systems while categorizing them by risk and imposing various compliance requirements.ii Binding government regulations are currently under discussion.
Canada Published the draft Artificial Intelligence and Data Act (AIDA) that focuses on responsible AI use, consumer protection, and fair competition.iii Binding government regulations are currently under discussion.
China Techno-centric approach with specific regulations aimed at algorithm recommendations and generative AI.iv Binding government regulations have been adopted and are in force.
European Union Statutory framework in the form of the AI Act that categorizes systems by risk levels, imposes stringent requirements on high-risk applications, and aims for transparency and accountability.v Binding government regulations have been adopted and are in force, with provisions for co-regulation.
Japan Through the G7’s Hiroshima process, Japan has promoted a light touch approach and a voluntary code of conduct. Japan has since established a set of domestic guidelines for businesses and is considering a statutory framework.vi Self-regulatory approach with ongoing discussion on binding regulations.
Singapore Voluntary, use-case based approach that emphasizes a sectoral approach based on governance frameworks.vii Self-regulatory approach.
United Kingdom Context-based and cross sectoral framework that focuses on core principles that will be implemented by sectoral regulators.viii Self-regulatory approach, with the option for sectoral regulators to frame binding regulations.
United States of America Voluntary commitments and executive orders that emphasize a principle-based, cross-sectoral approach to promote industry best practices, and risk mitigation tools with input from various federal agencies.ix Self-regulatory approach, with limited downstream impact on advanced AI model providers from executive orders.

i Safe and Responsible AI in Australia, (Department of Industry, Science and Resources, Australian Government, September 2024), https://storage.googleapis.com/converlens-au-industry/industry/p/prj2f6f02ebfe6a8190c7bdc/page/proposals_paper_for_introducing_mandatory_guardrails_for_ai_in_high_risk_settings.pdf.

ii Bill no. 2338, of 2023, https://www25.senado.leg.br/web/atividade/materias/-/materia/157233.

iii Bill C-27, (First reading June 16, 2022).

iv Cyberspace Administration of China, National Development and Reform Commission, Ministry of Education, Ministry of Science and Technology, Ministry of Industry and Information Technology, Ministry of Public Security, State Administration of Radio, Film and Television, Interim Measures for the Management of Generative Artificial Intelligence Services, (Enforced on August 15, 2023).

v The EU AI Act, Regulation (EU) 2024/1689.

vi “AI Guidelines for Business Ver1.0,” Ministry of Economy, Trade and Industry, Government of Japan, April 19, 2024, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_9.pdf; Kohei Sakai, Takai Tsuji, and Ryohei Yasoshima, “Japan Weighs Regulating AI Developers, Following U.S. and EU,” Nikkei Asia, May 2, 2024, https://asia.nikkei.com/Business/Technology/Japan-weighs-regulating-AI-developers-following-U.S.-and-EU.

vii “Model Artificial Intelligence Governance Framework Second Edition,” InfoComm Media Development Authority and Personal Data Protection Commission Singapore, January 21, 2020, https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf.

viii “A Pro-Innovation Approach to AI Regulation,” Office for Artificial Intelligence, Department of Science Innovation and Technology, Government of the United Kingdom, (Presented to Parliament on March 29, 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

ix “Ensuring Safe, Secure, and Trustworthy AI,” The White House, July 21, 2023, https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf; “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

What Are the Views in India?

Self-Regulation

Based on our research and discussions with experts, there are four key reasons for supporting self-regulation. First, industry stakeholders support self-regulation for AI because self-regulatory frameworks have already been proposed by NITI Aayog and the Indian Council of Medical Research.104 Second, according to one executive, there are strong market incentives for companies to comply with self-regulation, for example, to secure valuable procurement contracts.105 This applies even to startups, for whom self-regulatory frameworks are “low-effort promises” to signal seriousness in the market.106 Third, voluntary commitments can coexist with other regulatory models, such as co-regulation, so it is not a binary decision.107 Finally, India has successfully implemented self-regulation in other parts of the digital sector, such as digital advertising.108

On the other hand, there are some strong skeptics of self-regulation. “The future trajectory of these technologies and their social impacts cannot be decided by the very individuals who stand to profit from them,” asserted one expert.109 Some industry executives also point to the failure of recent industry efforts to adopt generative AI guidelines,110 because they were “too broad in scope” and prescriptive.111 Another executive claimed that “the Indian government does not trust big tech companies enough to make self-regulation work.”112 Moreover, according to one expert, self-regulation “doesn’t do much to separate good actors from bad actors.”113 This skepticism is also shared by some government officials who said that binding rules are necessary to contain some AI risks.114

Co-regulation

Many experts said that co-regulatory models were urgently needed in India to reduce the burden on public institutions while providing a higher degree of accountability than self-regulation. They also agreed that co-regulation may be the most effective way to deal with “high risk use cases” for which government oversight is necessary.115

However, at present, there are three key issues with co-regulatory models in India: 

  1. Lack of conceptual clarity: The difference between self-regulation and co-regulation is not clearly understood. Many “self-regulatory organizations” (SROs) are subject to strict government oversight, even though they are presented as voluntary, industry-led bodies.
  2. Structural issues: The failure of co-regulatory models in India indicates deep-seated issues, including a lack of political will to cede authority, poor governance structures, lack of financial independence, and conflicts of interest.
  3. Delays in implementation: The government has rejected multiple proposals for co-regulation from the online gaming industry, leading to a regulatory vacuum.

For these reasons, we recommend that co-regulatory models should not be adopted for AI governance in India until they have been proven to work in other domains.

Binding rules

Many experts believe it would be premature for India to adopt binding rules for AI for several reasons. First, there is no comprehensive risk assessment on the basis of which new rights and obligations can be developed. Second, there is no empirical evidence of market failure to justify the increased compliance cost of new regulations. Third, existing laws can address many of the anticipated risks of AI, and a gap analysis is required to identify areas where new rules are required. And fourth, less expensive methods such as self-regulation may be sufficient to address the anticipated risks.

In the next section, we examine India’s broader policy agenda on AI and analyze various socio-economic factors that influence the choice of regulatory architecture.

Part IV: How Should India Approach AI Regulation?

In this section, we analyze three factors that should inform India’s approach to AI regulation—economic opportunity, cost of regulation, and state capacity.

Economic Opportunity

There is significant economic upside to the successful implementation of India’s national AI strategy.116 According to the Indian government, AI is expected to add nearly $500 billion to India’s gross domestic product (GDP) by 2025.117 The government has also identified potential applications of AI in agriculture, healthcare, disaster management, transportation, law, and finance that could have a transformative social impact.118

The private sector is also bullish on the AI opportunity for India. Accenture anticipates that AI will increase the annual growth rate of India’s economy by 1.3 percent by 2035, while a Google-commissioned report estimates that at least Rs 33.8 lakh crore of economic value will be generated by 2030 through AI adoption.119

For these reasons, some researchers from the Global South have identified a new type of AI risk for developing countries—“the risks or opportunity costs of not implementing AI, [and] missing out on potential benefits.”120 Indeed, this view is shared by many in India, as demonstrated in a survey conducted by Ipsos, which finds that “Indian respondents are more optimistic about AI than their global counterparts.”121

Therefore, we recommend that India adopt a light-touch, pro-innovation approach that is aligned with its broader AI strategy to help fully realize the socio-economic benefits of AI.

Cost of Regulation

New AI regulation would entail a series of costs that will have to be weighed against the potential benefits to society.

First is the obvious industry cost of compliance. There are no estimates of what it would cost an Indian business to comply with a model AI law. However, for comparison, the European Commission estimates the cost of complying with the AI Act to be between €1.6 billion and €3.3 billion (Rs 143.5 billion to Rs 296.1 billion or $1.7 billion to $3.5 billion).122 Although this is a small fraction of the EU’s overall GDP, which stood at $16.6 trillion in 2022, the new regulation has been criticized for going “too far” and setting the regulatory barrier “too high.”123 Moreover, as India prepares to implement a new data protection law, new AI regulations will likely increase the cumulative cost for businesses. 

Besides compliance costs, some industry executives fear the “psychological costs” associated with new regulations. They point to existing rules under the Apprentices Act, 1961, which grants officials wide discretionary powers to impose fines.124 In fact, one startup founder said that some new rules, such as the appointment of a local Data Protection Officer,125 only serve to increase compliance costs and create fear in the minds of entrepreneurs.126

Other costs include the administrative expenses associated with implementing a new law.127 New regulations for AI could also increase litigation costs, especially if they are premature, unclear, or duplicative.

On the other hand, timely regulations could provide business clarity and promote innovation. Some say that the introduction of the Information Technology Act, 2000, helped propel the growth of India’s e-commerce industry.128 Policymakers should also consider the costs of delayed regulation, though it is difficult to quantify the loss of legal rights and freedoms.

State Capacity

Multiple interviewees highlighted capacity constraints that, according to them, would prevent the Indian government from effectively implementing new AI regulations given the status quo. These limitations fall into five categories: (1) lack of technical expertise; (2) failure to issue clear and timely regulatory guidance; (3) lack of investigative powers; (4) ineffective or inconsistent enforcement; and (5) lack of grievance redressal mechanisms.129

Addressing each of these issues will require a multi-fold approach. To promote effective AI governance, we recommend increasing state capacity in at least two respects:

  1. Independent expertise: India should consider setting up a national “AI Safety Institute” (AISI) to develop state capacity in foundational research, safety and testing, training and awareness, and cross-border collaboration on AI governance.130 AISIs that have been set up in the UK, the EU, Singapore, Japan, and the United States guide industry compliance and facilitate information exchange on AI safety issues with other like-minded countries. While the Indian government has commenced consultations on a proposed AISI, experts caution that the scope and structure of the AISI should be carefully thought out to ensure that it functions effectively.131
  2. Independent enforcement: Poor enforcement stems from a variety of factors—lack of resources and technical expertise, no separation of powers between the government and regulator, lack of consumer awareness, and ineffective grievance redressal mechanisms.132 There are two possible pathways to increase enforcement capacity—boost MeitY’s resources so that it can act as both policymaker and de facto regulator, or set up a new regulator to enforce the IT Act and oversee AI governance.

Developing state capacity in these ways, we anticipate, would help separate policymaking from enforcement, promote industry compliance, and protect consumers.

Recommendations: An AI Policy Roadmap for India

Policymakers in India must better understand the current capabilities and unique risks posed by AI, the dynamic and evolving nature of the AI ecosystem, and the gaps in the existing legal framework. They will also need to adopt a balanced regulatory approach and be prepared to address market failures as and when the need arises.

Along these lines, below is a suggested policy action plan:

  1. Understand the risks and benefits of AI: Having a clear understanding of the current capabilities of AI is crucial for cost-benefit analyses, regulatory impact assessments, and risk assessments. We recommend adopting measures similar to the U.S. executive order on AI, which directs government agencies and regulators to issue reports on how AI is being used in their respective sectors and to identify potential risks.133 Market studies on India’s AI ecosystem, such as the one being conducted by the Competition Commission of India (CCI),134 are also required to identify market failures. In India, the Office of the Principal Scientific Adviser has been entrusted with inter-agency coordination on issues of AI governance. These discussions should be used to inform future policy and facilitate awareness, training, and capacity-building efforts.
  2. Classify AI risks based on evidence of harm: The government should identify specific AI risks for which new regulations are required. The five categories of AI risks identified in Part II of this paper provide a useful framework to gather evidence of harm in India and inform future regulation. Further, we recommend identifying “high-risk applications” based on these risk vectors for which additional regulations are required. A comprehensive risk assessment will require input from multiple agencies. We suggest that this exercise be coordinated by an inter-ministerial committee consisting of MeitY, the Department of Science and Technology, the Ministry of Consumer Affairs, the Ministry of External Affairs, the Ministry of Communications, the Ministry of Agriculture, and the Ministry of Health and Family Welfare, amongst others. Alternatively, the Parliamentary Standing Committee on Communications and Information Technology can coordinate this process. The proposed AISI for India should also be involved in conducting these risk assessments.
  3. Identify gaps in existing laws: A comprehensive gap analysis is essential to identifying areas where new regulations are required to address AI risks. We suggest an approach that identifies areas where: (1) no additional regulations are required (for example, in relation to job losses or superintelligence); (2) clarifications or targeted amendments would suffice (for example, in relation to deepfakes); and (3) new regulations are required to address market failures (for example, obligations with respect to transparency or rights in relation to algorithmic discrimination). This comprehensive analysis should involve participation from the Ministry of Consumer Affairs and the Department of Legal Affairs, and may be supplemented by research from academic institutes, law firms, think tanks, and the proposed AISI.
  4. Encourage self-regulation: Based on India’s larger strategic goals, we recommend that India adopt a light-touch, voluntary, and principle-based approach to AI regulation, at least for now. Self-regulation is a good starting point because it allows regulators to adapt to the pace of innovation, develop technical expertise by collaborating with the private sector, reduce administrative costs, and develop baseline industry norms before introducing other compliance measures. To start with, we suggest that companies operating in India adopt voluntary commitments in relation to the safety and testing of AI systems, misinformation, privacy, and security. We also see an important role for specialized industry bodies in developing and adopting voluntary codes for different market actors (advanced models, enterprise companies, startups, etc.) and sectors (healthcare, finance, education, etc.).
  5. Empower the government to address AI risks: We recommend that legal provisions be introduced to enable the government to swiftly address market failures, now or in the future. This would include measures to:
    (a) enable the government to notify key entities involved in the AI value chain based on their evolving roles and responsibilities, and to clarify to whom specific regulations apply;
    (b) empower the government to create new rights for citizens or impose legal obligations on any class of entities if there is evidence of market failure (for example, in relation to testing, transparency, audits, human oversight); and
    (c) develop robust enforcement and consumer protection mechanisms that are designed to address the specific AI risks for which there is evidence of harm.
  6. Adopt a “whole of government” approach: AI governance requires a multi-faceted approach. Broadly, we recommend a three-part framework:
    (a) The relevant sectoral ministries, departments, and regulators (such as the Reserve Bank of India (RBI), the Telecom Regulatory Authority of India (TRAI), the Central Consumer Protection Authority (CCPA), the Ministry of Health, the Ministry of Education, and so on) should take the lead in developing, monitoring, and enforcing regulations.
    (b) MeitY, as the nodal ministry responsible for technology regulation, should set baseline requirements across sectors and support the adoption of voluntary codes of conduct.
    (c) Inter-agency coordination on AI governance should be handled by the Prime Minister’s Office (PMO) or the National Security Council Secretariat (NSCS), given their cross-agency mandate and strong leadership. Separately, a national AI Safety Institute (AISI) would help provide independent expertise and coordinate with the international network of AISIs.
  7. Consult with experts on AI regulation: While the government has initiated a series of public consultations on the proposed Digital India Act, there is a need for a more focused debate on AI regulation with a variety of experts.135 Specific issues such as bias, privacy, security, and copyright require an interdisciplinary approach and more substantive policy discussions with multi-stakeholder groups to shape future policy.

Conclusion

As India’s policymakers carefully mull the next steps on AI regulation, the brief pause in this continuing advance offers an opportunity to reflect and readjust, lest policymakers get trapped in path dependency and a mindless rush to regulate.

As the sentiment analysis in Part I illustrates, there is broad agreement that India should not adopt a comprehensive AI law. Some new regulations are warranted to address the risks of AI, but the scope of these rules and the ideal regulatory approach remain contentious.

Part II explains the intricate relationship between risk and harm, and the need for empirical evidence grounded in the local context, to inform India’s regulatory approach. We suggest focusing on five categories of AI risk and identifying high-risk use cases based on these risk vectors. We also offer some examples of areas where new regulations are required.

As the overview of different global approaches in Part III demonstrates, there is no one-size-fits-all approach to AI regulation. For a developing country like India, which is committed to reaping the full range of benefits from AI, we suggest self-regulation at least for the next six to twelve months (because binding regulation would entail significant costs, co-regulation is broken, and self-regulation is relatively efficient). However, as AI systems continue to evolve, the government should be empowered to prevent harm with clear legal mandates.

This paper suggests a few ways in which such provisions can be introduced. Regulation encompasses more than just laws. It also includes norms, standards, ethical practices, policy frameworks, institutional oversight, and soft laws. Therefore, we suggest a “whole of government” approach in which sectoral agencies, MeitY, and an inter-ministerial body collaborate in a dynamic fashion. An AISI, designed from the ground up keeping India’s unique needs in mind, can also supplement state capacity.

Finally, and this is important, the process by which AI regulations are framed must be both participative and inclusive. Not only should the data, models, and applications that power India’s AI ecosystem be representative of its culture, but so too should the policy frameworks that shape its future trajectory. To that end, it behooves the government to initiate a series of consultations on this topic before continuing its advance on AI regulation.

Acknowledgements

The authors would like to thank Rudra Chaudhuri and Anirudh Burman for their feedback on a draft of this paper, and the interviewees and reviewers for their thoughtful comments.

Notes

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.