This publication was produced under Carnegie India’s Technology and Society Program. For details on the program’s funding, please visit the Carnegie India website. The views expressed in this piece are solely those of the authors.
Introduction
Artificial intelligence (AI) is a general-purpose technology that has existed since the early 1950s.1 Its trajectory is marked by cycles of hype and innovation, followed by periods of stagnation and disillusionment.2 In 2022 alone, legislative bodies across 127 countries passed 37 laws that mention artificial intelligence.3 What explains this sudden rush to regulate AI?
Some say the launch of ChatGPT in late 2022 was a defining moment. It brought generative AI to the forefront, and along with it, concerns about bias, misinformation, copyright violations, and the impact on labor markets.4 One might also point to a confluence of factors—the massive breakthroughs in machine learning, new and powerful capabilities of large language models, and the global reach of social media—which has stoked the fears of policymakers and prompted new regulations in some countries.5
And yet, nobody seems to have a clear collective vision for how AI should be regulated. This has resulted in divergent approaches around the world—from comprehensive legislation in the European Union (EU) to technology-specific rules in China, and voluntary commitments in the United States.6 Despite these differences, global policymakers seem to agree on one thing—we must leverage the power of AI while mitigating its risks.7
Where does this leave India on AI regulation? The existing body of literature concerning India’s approach to regulating AI is disconnected, narrow, or superficial. It includes news coverage of regulatory proposals;8 brief commentaries on national and global policy developments;9 editorials on what India’s approach should be;10 summaries of the legal landscape;11 readouts of roundtable discussions;12 and analyses of specific regulatory issues involving AI.13 What is missing is a clear and comprehensive analysis of India’s overall advance on AI regulation. Who is driving the debate in India? What are the views of different stakeholders? Does India need new AI regulations? What should they look like?
What Does This Paper Do?
Aims and Objectives
The goal of this paper is to answer two main questions:
- What is the state of AI regulation in India?
- What should be the way forward?
To that end, this paper will capture the views of government, industry, and civil society in India and suggest a policy roadmap to inform India’s advance on AI regulation.
Structure
The paper is divided into four parts:
Part I provides an overview of the current sentiment in government, industry, and civil society in India on the topic of AI regulation.
Part II explores the scope and objectives of AI regulation, the nature of AI risks, and areas where additional regulations may be required.
Part III examines global approaches to AI regulation and views in India.
Part IV suggests a policy roadmap for India on AI regulation.
Methodology
Our analysis is based on multiple discussions with key stakeholders over several months, both in closed-door and public settings.
We also conducted seventeen interviews with senior government officials, industry executives, lawyers, technologists, and scholars working specifically on AI policy in India. All interviewees have been granted anonymity to protect their privacy given their ongoing engagements on these issues and the sensitivity of these discussions.
Finally, we have referenced several books, policy documents, academic papers, news reports, and articles.
Part I: General Sentiments on AI Regulation
Government
Since 2022, the Indian government has oscillated between a hands-off approach to AI regulation and one that is more direct and interventionist,14 which has led to some confusion.
Broadly, India supports a “pro-innovation” approach to AI regulation. It wants to unlock the full potential of AI while taking into account the anticipated risks. This is reflected in the G20 Ministerial Declaration made during India’s presidency, as well as a statement in Parliament in April 2023 that “[the Indian government] is not considering bringing a law or regulating the growth of AI in the country.”15
However, around the same time, the Ministry of Electronics and Information Technology (MeitY) published a blueprint for a new Digital India Act, which includes a specific reference to the “regulation of high-risk AI systems.”16 Then, after a lull of close to a year, the government issued an advisory in March 2024 that jolted the industry.17 The advisory, which mandated compliance with immediate effect, directed companies to obtain the government’s permission before deploying certain AI models in India, and to take steps to prevent algorithmic discrimination and the distribution of deepfakes.18 Following sharp criticism, the advisory was withdrawn and replaced with a fresh one that continues to remain in force.19
The reason for the government’s fragmented approach is that there are multiple, differing views within the establishment. MeitY, which is the nodal ministry for technology regulation in the country, favors a “light touch approach.”20 An official suggested that the much-criticized AI advisory from March 2024 was the product of another agency’s influence and not the brainchild of MeitY.21 Some factions within the government want more regulation. For example, a member of the Prime Minister’s Economic Advisory Council has published a report that characterizes AI as a “complex adaptive system”22 that requires “proactive regulatory intervention.”23 Another key official involved in framing India’s AI policy said that MeitY was “not doing enough” to address the risks of AI.24
For now, it appears the Indian government is still building consensus while adopting a cautious approach. It has tasked the Office of the Principal Scientific Advisor (PSA), set up to advise the Prime Minister and the cabinet on matters of science and technology, to consult with different ministries and provide “strategic guidance” on AI regulation.25 A sub-committee, convened by MeitY and reporting to the PSA, has prepared a draft report on “AI Regulation,” though it has not yet been published.26
MeitY, for its part, is considering various regulatory options, including amending the Information Technology Act, 2000 (henceforth IT Act), which would be less time-consuming than adopting new legislation such as the proposed Digital India Act.27
Meanwhile, sectoral regulators such as the Reserve Bank of India (RBI)28 and the Telecom Regulatory Authority of India (TRAI)29 have begun to articulate the risks of AI. Going forward, they are likely to play an important role in shaping policy and regulation.
Lastly, the Prime Minister’s Office (PMO) and National Security Council Secretariat (NSCS) will be highly influential in AI policymaking, given their cross-agency mandate and strong leadership.
Industry
The technology industry, as a whole, does not have a single view on what India’s approach to AI regulation should be. As one lawyer put it, “This is a fragmented ecosystem consisting of big tech companies, startups, industry bodies, and VCs [venture capital firms] … there is no one position.”30
That said, the predominant view is that strict regulation would stifle innovation and make it difficult for India to achieve the ambitious goals of the IndiaAI Mission, which was launched in March 2024 with an initial outlay of Rs 10,300 crore ($1.3 billion) over five years and spans strategic initiatives across compute, foundational models, datasets, skilling, and safe and trustworthy AI. For example, none of the industry representatives we spoke to expressed support for a new standalone AI law. One cautioned that it was more important for India to “get it right rather than to act swiftly.”31 Another said that “urgency will create problematic regulation.”32 Instead, one startup founder suggested that India adopt an “iterative, light-touch, and collaborative” approach to AI regulation.33
There are also some extreme positions within the industry. One tech policy executive said that there was no need for new regulations at all, arguing that AI presents no novel risks,34 a view not shared by many others. At the other end, some executives have explicitly called for regulation. Kent Walker, Google’s president of global affairs, has stated, “AI is too important not to regulate, and too important not to regulate well.”35 Similarly, Microsoft president Brad Smith has observed, “there has never been an industry that has successfully regulated itself entirely… we need more laws, more regulation.”36
Some companies have advanced specific regulatory proposals for India. Microsoft has advocated for new laws targeted at “highly capable AI foundation models”;37 IBM has called on governments to “recognize co-regulatory mechanisms”;38 and Google has called for “a risk-based and proportionate approach to AI regulation [in India] focused on use cases,”39 a model supported by several companies.40
Overall, industry stakeholders in India favor a two-level approach to regulation:
- Level 1: Self-regulation that enables firms to proactively address the risks of AI through voluntary commitments, self-certification, and similar models.
- Level 2: Additional regulations to fill the legal vacuum and deal with high-risk AI use cases through bespoke rules, guidelines, and advisories.
Civil Society
Here, civil society refers to the third sector of society, distinct from government and business. This includes activists, scholars, academics, and lawyers.
Some civil society representatives have called for greater representation of women, gig workers, and other marginalized groups in the debate on AI regulation, since these groups are the most likely to bear the negative effects of AI deployments.
Some activists also expressed distrust of industry lobbying efforts on self-regulation. One scholar called the argument that regulation would stifle innovation a “convenient oversimplification” that benefits incumbent commercial actors. Another scholar argued that voluntary commitments are inadequate because they “merely outlined a set of principles,”41 and that they encourage “experimentation” that could harm individuals and communities.42
Most representatives did, however, agree that India should not adopt a comprehensive AI law, at least for the time being. One scholar argued that an omnibus law might lack the nuance and context required to regulate AI.43 Another academic put it succinctly: “India needs more guidelines, less hard-coded legislation.”44 Academics are also wary of overbearing regulation and worry that new rules could restrict their access to AI systems required for public interest research in areas such as disaster management and cybersecurity.
At the same time, the academics we spoke to believe that AI presents new risks which, according to them, existing laws are ill-equipped to handle. They want regulators to intervene in areas where AI could cause irreversible harm and violate fundamental rights.45
According to one activist, “the government’s use of AI requires immediate intervention since there is a greater likelihood of impact on legal rights.” They called for a review of public procurement guidelines and the use of facial recognition technologies in public services.46
In the legal community, there are broadly two camps. One group believes that there is an unnecessary rush to “create and circulate legislative drafts,” when in fact, only a narrow range of issues are a matter for rule-making.47 Lawyers in this camp believe that the focus should be on applying existing laws to AI to mitigate risks. The other group believes that a separate law for AI is required to deal with the unintended, downstream risks of AI being deployed across India in potentially harmful ways.48
Overall Sentiment
Across India’s government, industry, and civil society, there is broad agreement that:
- India does not need a comprehensive AI law, at least for now;
- many of the risks from AI can be addressed through existing regulations;
- there are some risks for which new regulations may be required; and
- self-regulation should be encouraged at this stage. Additional rules are required to protect consumers, especially for specific high-risk use cases.
However, there is disagreement about:
- the nature and novelty of AI risks;
- the extent to which current laws can deal with AI risks;
- whether or not self-regulation can sufficiently address the risks of AI; and
- the types of binding rules that are required and when they should be introduced.
This raises some important follow-up questions—What are the risks of AI? Are they novel? What are the gaps in existing laws? What aspect of AI should we regulate?
We explore these questions in Part II.
Part II: Regulation From First Principles
To answer the question of how AI should be regulated in India, it may be useful to reason by analogy. In a recent paper for RAND, Michael J. D. Vermeer compares AI with four other general-purpose technologies: nuclear technology, the internet, encryption, and genetic engineering. He lists various factors that would inform their governance, such as the risks posed, the degree of consensus on those risks, and the role of public-private partnerships in each technology’s development.49 Reasoning by a similar analogy, we suggest that for a dual-use, general-purpose technology such as AI, three fundamental aspects of regulation need to be clear upfront:
- Objectives: Policymakers should clearly state the purpose of any coercive state action in the form of regulation. Generally, regulation is deemed necessary to address market failures that may be present in the form of negative externalities, information asymmetry, consumer harm, and abuse of market power.50 Another objective may be to enforce principles such as transparency, fairness, privacy, security, and accountability through a system of rights and obligations.51 The Indian government has outlined certain objectives for AI regulation—to mitigate user harm, reduce misinformation, increase accountability, and balance commercial interests—but has not explained why regulation is required in each case. For example, it is not clear why the government should intervene in contracts between creators and publishers, as proposed by the IT Minister under a “new AI law.”52
- Scope: Regulating AI presents multiple challenges, including, for example, the “pacing problem,”53 definitional issues, and what is called the “black box” problem.54 As one expert put it, “regulating AI is like regulating operating systems.”55 Therefore, it is important to clarify upfront which aspects of AI should be regulated. Experts suggest that AI regulation should cover three aspects—inputs, outputs, and processes. Inputs may include training data and copyright material; outputs include automated decisions and AI-generated content; and processes may include models and algorithms.56 Another expert suggested that India look at regulation of “data, models, and applications separately.”57 This appears to be a common view, with a senior MeitY official publicly stating that “Instead of trying to regulate [artificial intelligence] technology, [the Indian government] is looking at regulating its applications.”58
- Liability: On the issue of whom regulation should apply to, the de facto approach is to impose different obligations on the developer and the deployer, as has been done in the EU AI Act.59 The question of liability is also an important strategic issue. As one startup founder put it, “India’s competitive advantage lies in the application layer, so any regulation [on deployers] should account for this.”60 Multiple experts conceded that India’s legal regime requires a significant update to clarify issues of liability. Under the current IT Act, digital service providers are classified as an “intermediary” or “publisher” (or their sub-categories).61 However, as one lawyer commented, these existing definitions, when applied to AI systems, are “being stretched too thin.” This has caused confusion about which market actors need to comply with the rules, since “generative AI systems may not fall neatly within the purview of either publisher or intermediary.”62
Recommendations
Based on the above analysis, we recommend the following:
- The government should clarify the objectives of regulation, wherein any proposed intervention should be mapped to specific market failures.
- Instead of regulating the underlying technology itself, policymakers should focus on standalone issues (for example, deepfakes) and individual aspects of the AI value chain (for example, data inputs).
- Laws must be clear about whom these regulations apply to and when those parties can be held liable. For example, the current platform classification and intermediary liability framework under the IT Act must be updated to reflect the AI value chain.
A Risk-Based Approach to AI Regulation
A risk-based approach is currently the most popular model for AI regulation. Though its implementation varies across jurisdictions, its core objective is to mitigate harm to individuals and society.63 In India, a risk-based approach is supported by senior government officials, including the IT minister and the IT secretary,64 as well as by several big tech companies.65
In the current discourse, there is a general tendency to conflate the notions of risk and harm. A key difference exists between the two. As explained in a report prepared by a group of scientists led by professor Yoshua Bengio, considered one of the “godfathers” of AI, risk is derived from the “probability of an occurrence of harm and the severity of that harm.”66 In other words, risk has a “future-orientation” and “looks at the aggregate impacts of the system on groups of people and tries to (often controversially) quantify these harms.”67 For that reason, as one legal expert put it, “AI regulation should be risk-based, not harm-based because the harm has already occurred.”68 Therefore, an important exercise in a risk-based approach to regulation is to gather evidence of harm in order to measure and anticipate the associated level of risk.
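In the standard risk-analysis formulation (our notation; the Bengio report states the relationship in prose), this can be written as:

$$\text{Risk} = P(\text{harm}) \times \text{Severity}(\text{harm})$$

On this view, a rare but catastrophic harm (say, an AI-enabled failure of critical infrastructure) can carry a risk weight comparable to that of a frequent but minor harm, which is why regulators need to estimate both dimensions rather than simply react to individual incidents.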
Next is the question of how to classify these risks. A report commissioned by the UK government classifies AI risks into three types: malicious use risks, risks from malfunctions, and systemic risks.69 Risks can also be categorized as safety risks and fundamental rights risks, with overlaps between the two.70 Risks may also vary based on stages (for example, during design, development, or deployment); scope (systemic risks); time-scale (short, medium, or long-term); and the source of risk (inputs vs. outputs). Even as our understanding of AI risk continues to evolve, scholars at the Massachusetts Institute of Technology (MIT) have developed a repository of more than seven hundred AI risks as of 2024.71 Some governments have also incorporated these risk taxonomies into their legislative frameworks. For example, the EU’s Artificial Intelligence Act (AI Act) classifies risk into four levels: “unacceptable, high, limited, and minimal.”72
In the table below, we synthesize the available literature and present our own risk classification framework to inform future AI policy debates.
Table 1: AI risk classification
The table below summarizes key AI risks with examples of harm to inform future policy debates. For the purpose of this analysis, “AI” includes general-purpose AI, generative AI, and artificial general intelligence.
No. | Potential risk | Examples of harm |
1. | Harmful content | Unauthorized impersonation of an individual using AI-generated deepfakes, leading to violation of personality rights and financial loss.i |
2. | Privacy violations | Use of personal data without the consent of the individual for the purpose of training AI models, resulting in a violation of privacy rights.ii |
3. | Cybersecurity threats | Sophisticated attacks using AI systems, leading to breach of critical infrastructure through AI-powered cyber attacks.iii |
4. | Discrimination | AI systems discriminating against certain groups in hiring decisions leading to violation of the fundamental right to equality.iv |
5. | Loss of control | AI systems making autonomous decisions without human oversight, leading to unintended consequences or failure to stop harmful actions which could compromise individual safety.v |
6. | National security | Public safety compromised by AI-powered cyber attacks or the misuse of AI in CBRN (chemical, biological, radiological, and nuclear) weapons, leading to national security threats.vi |
7. | Product safety | Misleading advertisements about the performance of AI systems that exploit consumers and cause physical or financial harm.vii |
8. | Intellectual property rights violations | AI models being trained on copyrighted data without proper consent or compensation, undermining the rights of creators and owners.viii |
9. | Market concentration | Increased ownership and control of AI technologies by a small number of powerful corporations, limiting access to compute resources.ix |
10. | Global inequality | Widening digital divide between the Global North and South, with wealthier nations gaining access to AI technologies faster, deepening economic and technological disparities.x |
11. | Job displacement | Automation of jobs through AI systems, leading to large-scale unemployment in industries such as manufacturing, customer service, and transportation.xi |
12. | Environmental degradation | Increased energy consumption from training and deploying large AI models, contributing to environmental harm such as carbon emissions and resource depletion.xii |
13. | Superintelligence | Risk of AI systems surpassing human intelligence, leading to existential risks if AI’s goals diverge from humanity’s interests.xiii |
i Nayan Chandra Mishra, “Urgently Need: A Law to Protect Consumers From Deep Fake Ads,” The Indian Express, October 24, 2023, https://indianexpress.com/article/opinion/columns/law-deep-fake-ads-anil-kapoor-personality-rights-8997267/.
ii “LinkedIn Users Say Their Data is Being Collected for Generative AI Training Without Permission,” The Hindu, September 19, 2024, https://www.thehindu.com/sci-tech/technology/linkedin-users-say-their-data-is-being-collected-for-generative-ai-training-without-permission/article68658540.ece.
iii Kevin Williams, “Cyber Physical Attacks Fueled By AI are a Growing Threat, Experts Say,” CNBC, March 3, 2024, https://www.cnbc.com/2024/03/03/cyber-physical-attacks-fueled-by-ai-are-a-growing-threat-experts-say.html.
iv Jeanita Lyman, “Workday Facing Discrimination Lawsuit Over AI Hiring Software,” Pleasanton Weekly, July 18, 2024, https://www.pleasantonweekly.com/courts/2024/07/18/workday-facing-discrimination-lawsuit-over-ai-hiring-software/.
v Adam Satariano and Roser Toll Pifarré, “An Algorithm Told Police She Was Safe. Then Her Husband Killed Her,” New York Times, July 18, 2024, https://www.nytimes.com/interactive/2024/07/18/technology/spain-domestic-violence-viogen-algorithm.html.
vi Raina Talwar Bhatia, Evi Fuelle, and Lucia Gamboa, “AI CBRN Risks: Governance Lessons from the Most Dangerous Misuses of AI,” Credo AI, August 30, 2024, https://www.credo.ai/blog/ai-cbrn-risks-governance-lessons-from-the-most-dangerous-misuses-of-ai.
vii Adam Lashinsky, “Watch out. Companies Are Using ‘AI Washing’ to Mislead Consumers,” The Washington Post, January 29, 2024, https://www.washingtonpost.com/opinions/2024/01/29/sec-ftc-ai-c3-internet-fraud/.
viii Annapurna Roy, “Indian Publishers Seek Rules for Copyright Protection Against Generative AI Models,” The Economic Times, January 26, 2024, https://economictimes.indiatimes.com/tech/technology/indian-publishers-seek-rules-for-copyright-protection-against-generative-ai-models/articleshow/107154425.cms.
ix Jai Vipra and Sarah Myers West, “Computational Power and AI,” AI Now Institute, September 25, 2023, https://ainowinstitute.org/publication/policy/compute-and-ai.
x Philip Schellekens and David Skilling, “Three Reasons Why AI May Widen Global Inequality,” Centre for Global Development, October 17, 2024, https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality.
xi Mark Talmage-Rostron, “How Will Artificial Intelligence Affect Jobs 2024-2030,” Nexford University, January 10, 2024, https://www.nexford.edu/insights/how-will-ai-affect-jobs.
xii Renée Cho, “AI’s Growing Carbon Footprint,” State of the Planet, Columbia Climate School, June 9, 2023, https://news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/.
xiii Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, July 2014).
How Should India Classify AI Risks?
Currently, there is no credible AI risk classification framework for India that is based on empirical evidence of harm. There are some examples of standalone risk assessments, but they are incomplete and unsubstantiated. For example, a 2021 NITI Aayog report outlines certain “risks and considerations” for operationalizing responsible AI practices but does not spell out these risks.73 Another report, by TRAI, briefly mentions certain risks—“low quality data, data biases, data security, data privacy, inaccurate or biased algorithm, and unethical use of AI”—but jumps straight into regulatory principles without analyzing them.74 Similarly, the Telecommunication Engineering Centre has published a draft AI risk assessment framework, but it focuses entirely on fairness outcomes.75
Therefore, to support future policymaking, we have identified five categories of risks that are most relevant from a regulatory perspective:76
- Malicious uses, including the creation and distribution of harmful AI-generated content that could result in the violation of legal rights or threat to public order and safety.
- Algorithmic discrimination, which could arise due to various factors, such as the use of unreliable or unrepresentative datasets in AI systems, and which may result in financial losses or loss of opportunity and threaten other fundamental rights and freedoms.
- Transparency failures resulting from the lack of adequate and relevant disclosures about how AI systems are being used to make decisions, what data is being used to train AI systems, and other forms of information asymmetry that pose a threat to privacy and other rights.
- Systemic risks resulting from the dependence on a small number of AI systems in critical sectors like finance, healthcare, national security, and other forms of market concentration that could cause disruptions on a broad scale with potentially catastrophic effects.
- Loss of control due to lack of human oversight in the development and use of autonomous AI systems, which could result in unintended consequences and threats to national security and public safety.
We recommend that these specific categories of risks be studied in detail in an effort to develop an appropriate AI risk classification framework for India. Moreover, since the general consensus is that India should regulate “high-risk use cases,”77 we suggest analyzing proposed AI deployments based on these risk vectors. Although there is no standard definition, “high-risk” applications generally include those in critical infrastructure, lending, credit scoring, insurance, product safety, consumer rights, law enforcement, and justice delivery.78 A regulatory approach focused on “high-risk use cases” is likely to find favor in India, though it will have to be an iterative exercise grounded in scientific research.
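To make the screening exercise concrete, the sketch below shows one way a risk-vector assessment could be prototyped. It is purely illustrative: the 0–5 scoring scale and the high-risk thresholds are our hypothetical assumptions, not values proposed by any Indian agency.

```python
# Illustrative sketch only: the 0-5 scale and the thresholds below are
# hypothetical assumptions, not values proposed by any regulator.

# The five risk vectors identified in this paper
RISK_VECTORS = [
    "malicious_use",
    "algorithmic_discrimination",
    "transparency_failure",
    "systemic_risk",
    "loss_of_control",
]

def screen_deployment(scores: dict[str, int]) -> str:
    """Screen a proposed AI deployment given per-vector scores (0 = negligible, 5 = severe)."""
    missing = set(RISK_VECTORS) - scores.keys()
    if missing:
        raise ValueError(f"missing scores for: {sorted(missing)}")
    # Flag as high-risk if any single vector is severe, or if the
    # average score across all vectors is elevated.
    if max(scores.values()) >= 4 or sum(scores.values()) / len(scores) >= 3:
        return "high-risk: candidate for additional regulation"
    return "lower-risk: existing laws and self-regulation may suffice"

# Example: a hypothetical AI-based credit-scoring deployment
print(screen_deployment({
    "malicious_use": 1,
    "algorithmic_discrimination": 4,  # e.g., unrepresentative training data
    "transparency_failure": 3,        # e.g., opaque automated decisions
    "systemic_risk": 2,
    "loss_of_control": 1,
}))  # -> high-risk: candidate for additional regulation
```

In practice, the per-vector scores would have to come from the evidence-gathering and market studies described in this section, and the thresholds would need to be recalibrated iteratively as empirical evidence of harm accumulates.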
Risk classification should also be grounded in tangible evidence of harm and account for local factors, such as AI adoption, consumer awareness, and digital literacy levels in India. Market studies, like the one being conducted by the Competition Commission of India, are useful references to identify specific market failures in India’s AI ecosystem.79
From a regulatory standpoint, a well-considered AI risk assessment would also help identify risks for which regulation is not warranted.80 In some cases, risks can be addressed through industrial policy (for example, enabling access to compute through subsidies and reskilling individuals to mitigate job displacement).81 Long-term risks may not warrant immediate regulation (for example, existential threats to humanity). Many other risks can be addressed through existing regulations—an issue that we analyze in more detail in the next section.
What Are the Gaps in Existing Laws?
India has a complex legal system comprising the Constitution, statutory laws, rules, regulations, and guidelines. From an AI perspective, the relevant areas of law are privacy law, intellectual property law, competition law, media law, employment law, consumer law, criminal law, contract law, and tort law.82
The common view across government, industry, and civil society groups is that this existing legal framework can be applied to address many of the AI risks outlined in this paper. The table below illustrates how existing statutory laws can be applied to deal with some AI risks.
Table 2: Applicability of existing statutory laws to certain AI risks (illustrative)
Nature of harm | Applicable statutory laws |
Depiction of a child in a sexually explicit video that is AI-generated | |
Unauthorized impersonation using AI-generated deepfakes | |
Discrimination in hiring decisions using AI recruitment tools | |
Use of an individual’s personal data without consent to train AI models | |
Misleading ads about the reliability or performance of an AI service | |
Use of copyright-protected material in AI-generated content without permission of the author or owner | |
One legal expert suggested that policymakers should spend the next six to twelve months applying existing laws to AI use cases to understand potential gaps in the current framework, and that the Department of Legal Affairs, Ministry of Law and Justice, Government of India, should be entrusted with this task.83
Another important factor will be the role of courts in interpreting, adapting, and enforcing existing provisions of law. So far, Indian courts have dealt with only a handful of cases involving modern AI systems, focused primarily on AI-generated content involving the likeness of public figures. A cursory review of these cases indicates that, for now, Indian courts are content to apply existing legal provisions to provide redress.84
Table 3: Applicability of existing Indian laws to deepfakes (illustrative)
The table below illustrates which existing statutory laws would apply in the case of a deepfake image or video being circulated without the permission of the individual.
Nature of harm | Applicable laws |
Cheating by impersonation | |
Transmitting obscene material | |
Causing harm to reputation | |
Failure to observe due diligence guidelines for intermediaries | |
Experts have, however, warned that additional regulations are required because “there are fundamentally new risks and harms emerging from AI that existing laws are not equipped to deal with.”85 One expert gave the example of a hospital that collected sensitive health data for medical diagnosis and later repurposed it to train AI models without the patient’s knowledge86—an example of how “consent-based regimes break down completely in the AI context.”87 Multiple experts also said that new legal rights are required to protect individuals and society, given the pervasiveness of AI systems, the lack of transparency, and the disproportionate impact of these systems on vulnerable communities.88
Therefore, on the question of whether new regulations are required to deal with the risks of AI, we suggest the following approach to guide future policymaking:89
1. No additional regulations are required in cases where:
- there is no evidence of consumer harm or other market failure;
- the risks can be adequately addressed through industrial policy (for example, reskilling initiatives to mitigate job displacement or subsidies to enable access to compute);
- there is no pressing need to adopt regulation at this stage (for example, existential threat to humanity from artificial general intelligence);
- risks can be addressed through existing regulations (for example, consumer harm arising from misleading advertisements about the performance of AI systems can be adequately addressed under existing laws).90
2. Clarifications or targeted amendments are required in cases where advisories, guidelines, or narrow legal amendments can sufficiently address the risks, without the need for new and comprehensive rules. For example, the risks relating to the circulation of deepfakes can be broadly addressed through existing laws.91 However, regulators should clarify to whom these existing rules apply and in which cases those parties can be held liable.92
3. New regulations should be considered where required to address market failures and protect consumers. Examples include a right to compensation in the case of misuse of AI and a right to object to automated decision-making, both of which would protect fundamental rights.93 New transparency obligations for certain AI systems should also be considered to address the risks relating to information asymmetry.94
It is beyond the scope of this paper to conduct a comprehensive analysis of all the relevant laws and regulations and where they fall short in the context of AI systems. Therefore, we recommend that a comprehensive regulatory gap analysis be conducted to help inform future policymaking in India.
Part III: Different Approaches to AI Regulation
In Part II, we identified certain AI risks for which new regulations may be warranted, subject to evidence of harm. In Part III, we examine the different regulatory approaches that can be adopted to address these risks.
What Are the Possible Regulatory Approaches?
Across the globe, three approaches to AI regulation have been adopted so far:
- Self-regulation: In this approach, a “group of firms in a particular industry, or entire industry sectors agree to act in prescribed ways, according to a set of rules or principles.”95 In the AI context, self-regulation is generally organized around principles of privacy, security, transparency, fairness, accountability, and trust.96 Self-regulation can be implemented in many ways, such as voluntary commitments, self-certifications, and impact assessments.
- Co-regulation: In this approach, the government or regulators play a more proactive role by developing, recognizing, endorsing, or implementing standards. This is a more stringent approach and “represents a midpoint in the continuum between self-regulation and full government regulation.”97 Examples of co-regulation include codes of practice and risk management frameworks.
- Binding regulations: In this approach, policymakers enact a law or some other framework that imposes legally enforceable obligations. The most prominent example is the EU’s Artificial Intelligence Act,98 which contains rights and obligations in relation to AI systems.
Each of these approaches has distinct advantages and disadvantages. According to a report prepared by the Stanford Cyber Policy Center,99 self-regulation helps tap into industry expertise, provides flexibility, and encourages rapid innovation, but lacks sufficient accountability and enforceability. Co-regulation enables collaboration between companies and regulators, allowing for an iterative approach, but often lacks the enforcement mechanisms required to address market failures. Binding regulation provides clear accountability mechanisms and government oversight but could stifle innovation due to bureaucratic delays and a lack of expertise and adaptability.
Therefore, policymakers should carefully evaluate the relative costs and benefits of each approach before adopting any of them into domestic frameworks.
How Are Different Countries Approaching AI Regulation?
Regulatory approaches in different countries reflect their own socio-economic priorities, legal traditions, and governance models. For example, the EU’s rights-based approach, in the form of a comprehensive statutory law, seeks to “protect health, safety, and fundamental rights.”100 On the other hand, China, with its strong desire for state control, prioritizes social order and the protection of “Socialist Core Values” in its rules on AI-generated content.101 Japan, for its part, has professed a “human-centric” approach to AI that aligns with its broader societal goals.102 Singapore and the United Kingdom have adopted a principle-based approach that reflects a pragmatic style, tailoring rules to specific industries.103 The table below provides an overview of these different global approaches.
Table 4: Summary of approaches to AI regulation in different jurisdictions
The table below provides a summary of the different approaches to AI regulation in various jurisdictions as of the date of publication of this paper.
Jurisdiction | Summary of approach | Type of regulation |
Australia | The government has released a discussion paper proposing mandatory guardrails to regulate AI in high-risk settings and general-purpose AI models.i | Binding government regulations are currently under discussion. |
Brazil | Reviewing proposals for a new AI law that protects fundamental rights and ensures secure, reliable AI systems while categorizing them by risk and imposing various compliance requirements.ii | Binding government regulations are currently under discussion. |
Canada | Published the draft Artificial Intelligence and Data Act (AIDA) that focuses on responsible AI use, consumer protection, and fair competition.iii | Binding government regulations are currently under discussion. |
China | Techno-centric approach with specific regulations aimed at algorithm recommendations and generative AI.iv | Binding government regulations have been adopted and are in force. |
European Union | Statutory framework in the form of the AI Act that categorizes systems by risk levels, imposes stringent requirements on high-risk applications, and aims for transparency and accountability.v | Binding government regulations have been adopted and are in force, with provisions for co-regulation. |
Japan | Through the G7’s Hiroshima process, Japan has promoted a light touch approach and a voluntary code of conduct. Japan has since established a set of domestic guidelines for businesses and is considering a statutory framework.vi | Self-regulatory approach with ongoing discussion on binding regulations. |
Singapore | Voluntary, use-case based approach that emphasizes a sectoral approach based on governance frameworks.vii | Self-regulatory approach. |
United Kingdom | Context-based and cross-sectoral framework that focuses on core principles to be implemented by sectoral regulators.viii | Self-regulatory approach, with the option for sectoral regulators to frame binding regulations. |
United States of America | Voluntary commitments and executive orders that emphasize a principle-based, cross-sectoral approach to promote industry best practices, and risk mitigation tools with input from various federal agencies.ix | Self-regulatory approach, with limited downstream impact on advanced AI model providers from executive orders. |
i Safe and Responsible AI in Australia (Department of Industry, Science and Resources, Australian Government, September 2024), https://storage.googleapis.com/converlens-au-industry/industry/p/prj2f6f02ebfe6a8190c7bdc/page/proposals_paper_for_introducing_mandatory_guardrails_for_ai_in_high_risk_settings.pdf.
ii Bill No. 2338 of 2023, https://www25.senado.leg.br/web/atividade/materias/-/materia/157233.
iii Bill C-27 (First reading June 16, 2022).
iv Cyberspace Administration of China, National Development and Reform Commission, Ministry of Education, Ministry of Science and Technology, Ministry of Industry and Information Technology, Ministry of Public Security, and State Administration of Radio, Film and Television, Interim Measures for the Management of Generative Artificial Intelligence Services (Enforced on August 15, 2023).
v The EU AI Act, Regulation (EU) 2024/1689.
vi “AI Guidelines for Business Ver1.0,” Ministry of Economy, Trade and Industry, Government of Japan, April 19, 2024, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_9.pdf; Kohei Sakai, Takai Tsuji, and Ryohei Yasoshima, “Japan Weighs Regulating AI Developers, Following U.S. and EU,” Nikkei Asia, May 2, 2024, https://asia.nikkei.com/Business/Technology/Japan-weighs-regulating-AI-developers-following-U.S.-and-EU.
vii “Model Artificial Intelligence Governance Framework Second Edition,” InfoComm Media Development Authority and Personal Data Protection Commission Singapore, January 21, 2020, https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf.
viii “A Pro-Innovation Approach to AI Regulation,” Office for Artificial Intelligence, Department of Science, Innovation and Technology, Government of the United Kingdom (Presented to Parliament on March 29, 2023), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
ix “Ensuring Safe, Secure, and Trustworthy AI,” The White House, July 21, 2023, https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf; “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
What Are the Views in India?
Self-Regulation
Based on our research and discussions with experts, there are four key reasons to support self-regulation. First, industry stakeholders support self-regulation for AI because such frameworks have already been proposed by NITI Aayog and the Indian Council of Medical Research.104 Second, according to one executive, there are strong market incentives for companies to comply with self-regulation, for example, to secure valuable procurement contracts.105 This applies even to startups, for whom self-regulatory frameworks are “low-effort promises” to signal seriousness in the market.106 Third, voluntary commitments can co-exist with other regulatory models, such as co-regulation, so this is not a binary choice.107 Finally, India has successfully implemented self-regulation in other parts of the digital sector, such as digital advertising.108
On the other hand, there are some strong skeptics of self-regulation. “The future trajectory of these technologies and their social impacts cannot be decided by the very individuals who stand to profit from them,” asserted one expert.109 Some industry executives also point to the failure of recent industry efforts to adopt generative AI guidelines,110 which were “too broad in scope” and overly prescriptive.111 Another executive claimed that “the Indian government does not trust big tech companies enough to make self-regulation work.”112 Moreover, according to one expert, self-regulation “doesn’t do much to separate good actors from bad actors.”113 This skepticism is shared by some government officials, who said that binding rules are necessary to contain certain AI risks.114
Co-Regulation
Many experts said that co-regulatory models were urgently needed in India to reduce the burden on public institutions while providing a higher degree of accountability, especially when compared to self-regulation. They also agreed that co-regulation may be the most effective way to deal with “high-risk use cases” for which government oversight is necessary.115
However, at present, there are three key issues with co-regulatory models in India:
- Lack of conceptual clarity: The difference between self-regulation and co-regulation is not clearly understood. Many “self-regulatory organizations” (SROs) are subject to strict government oversight, even though they are presented as voluntary, industry-led bodies.
- Structural issues: The failure of co-regulatory models in India indicates deep-seated issues, including a lack of political will to cede authority, poor governance structures, lack of financial independence, and conflicts of interest.
- Delays in implementation: The government has rejected multiple proposals for co-regulation from the online gaming industry, leading to a regulatory vacuum.
For these reasons, we recommend that co-regulatory models should not be adopted for AI governance in India until they have been proven to work in other domains.
Binding Rules
Many experts believe it would be premature for India to adopt binding rules for AI for several reasons. First, there is no comprehensive risk assessment on the basis of which new rights and obligations can be developed. Second, there is no empirical evidence of market failure to justify the increased compliance cost of new regulations. Third, existing laws can address many of the anticipated risks of AI, and a gap analysis is required to identify areas where new rules are required. And fourth, less expensive methods such as self-regulation may be sufficient to address the anticipated risks.
In the next section, we examine India’s broader policy agenda on AI and analyze various socio-economic factors that influence the choice of regulatory architecture.
Part IV: How Should India Approach AI Regulation?
In this section, we analyze three factors that should inform India’s approach to AI regulation—economic opportunity, cost of regulation, and state capacity.
Economic Opportunity
There is significant economic upside to the successful implementation of India’s national AI strategy.116 According to the Indian government, AI is expected to add nearly $500 billion to India’s gross domestic product (GDP) by 2025.117 The government has also identified potential applications of AI in agriculture, healthcare, disaster management, transportation, law, and finance that could have a transformative social impact.118
The private sector is also bullish on the AI opportunity for India. Accenture anticipates that AI will raise the annual growth rate of India’s economy by 1.3 percentage points by 2035, while a Google-commissioned report estimates that at least Rs 33.8 lakh crore of economic value will be generated by 2030 through AI adoption.119
For these reasons, some researchers from the Global South have identified a new type of AI risk for developing countries—“the risks or opportunity costs of not implementing AI, [and] missing out on potential benefits.”120 Indeed, this view is shared by many in India, as demonstrated in a survey conducted by Ipsos, which finds that “Indian respondents are more optimistic about AI than their global counterparts.”121
Therefore, we recommend that India adopt a light-touch, pro-innovation approach that is aligned with its broader AI strategy to help fully realize the socio-economic benefits of AI.
Cost of Regulation
New AI regulation would entail a series of costs that will have to be weighed against the potential benefits to society.
First is the obvious industry cost of compliance. There are no estimates of what it would cost an Indian business to comply with a model AI law. However, for comparison, the European Commission estimates the cost of complying with the AI Act to be between €1.6 billion and €3.3 billion (Rs 143.5 billion to Rs 296.1 billion or $1.7 billion to $3.5 billion).122 Although this is a small fraction of the EU’s overall GDP, which stood at $16.6 trillion in 2022, the new regulation has been criticized for going “too far” and setting the regulatory barrier “too high.”123 Moreover, as India prepares to implement a new data protection law, new AI regulations will likely increase the cumulative cost for businesses.
Besides compliance costs, some industry executives fear the “psychological costs” associated with new regulations. They point to existing rules under the Apprentices Act, 1961, which grants officials wide discretionary powers to impose fines.124 In fact, one startup founder said that some new rules, such as the appointment of a local Data Protection Officer,125 only serve to increase compliance costs and create fear in the minds of entrepreneurs.126
Other costs include the administrative expenses associated with implementing a new law.127 New regulations for AI could also increase litigation costs, especially if they are premature, unclear, or duplicative.
On the other hand, timely regulations could provide business clarity and promote innovation. Some say that the introduction of the Information Technology Act, 2000, helped propel the growth of India’s e-commerce industry.128 Policymakers should also consider the costs of delayed regulation, though it is difficult to quantify the loss of legal rights and freedoms.
State Capacity
Multiple interviewees highlighted capacity constraints that, according to them, would prevent the Indian government from effectively implementing new AI regulations given the status quo. These limitations fall into five categories: (1) lack of technical expertise; (2) failure to issue clear and timely regulatory guidance; (3) lack of investigative powers; (4) ineffective or inconsistent enforcement; and (5) lack of grievance redressal mechanisms.129
Addressing each of these issues will require a multi-pronged approach. To promote effective AI governance, we recommend increasing state capacity in at least two respects:
- Independent expertise: India should consider setting up a national “AI Safety Institute” (AISI) to develop state capacity in foundational research, safety and testing, training and awareness, and cross-border collaboration on AI governance.130 AISIs that have been set up in the UK, the EU, Singapore, Japan, and the United States guide industry compliance and facilitate information exchange on AI safety issues with other like-minded countries. While the Indian government has commenced consultations on a proposed AISI, experts caution that the scope and structure of the AISI should be carefully thought out to ensure that it functions effectively.131
- Independent enforcement: Poor enforcement stems from a variety of factors—lack of resources and technical expertise, no separation of powers between the government and regulator, lack of consumer awareness, and ineffective grievance redressal mechanisms.132 There are two possible pathways to increase enforcement capacity—boost MeitY’s resources so that it can act as both policymaker and de facto regulator, or set up a new regulator to enforce the IT Act and oversee AI governance.
Developing state capacity in these ways, we anticipate, would help separate policymaking from enforcement, promote industry compliance, and protect consumers.
Recommendations: An AI Policy Roadmap for India
Policymakers in India must better understand the current capabilities and unique risks posed by AI, the dynamic and evolving nature of the AI ecosystem, and the gaps in the existing legal framework. They will also need to adopt a balanced regulatory approach and be prepared to address market failures as and when the need arises.
Along these lines, below is a suggested policy action plan:
- Understand the risks and benefits of AI: Having a clear understanding of the current capabilities of AI is crucial for cost-benefit analyses, regulatory impact assessments, and risk assessments. We recommend adopting measures similar to the U.S. executive order on AI, which directs government agencies and regulators to issue reports on how AI is being used in their respective sectors and to identify potential risks.133 Market studies on India’s AI ecosystem, such as the one being conducted by the Competition Commission of India (CCI),134 are also required to identify market failures. In India, the Office of the Principal Scientific Advisor has been entrusted with inter-agency coordination on issues of AI governance. These discussions should be used to inform future policy and facilitate awareness, training, and capacity-building efforts.
- Classify AI risks based on evidence of harm: The government should identify specific AI risks for which new regulations are required. The five categories of AI risks identified in Part II of this paper provide a useful framework to gather evidence of harm in India and inform future regulation. Further, we recommend identifying “high-risk applications” based on these risk vectors for which additional regulations are required. A comprehensive risk assessment will require input from multiple agencies. We suggest that this exercise be coordinated by an inter-ministerial committee consisting of MeitY, the Department of Science and Technology, the Ministry of Consumer Affairs, the Ministry of External Affairs, the Ministry of Communications, the Ministry of Agriculture, and the Ministry of Health and Family Welfare, amongst others. Alternatively, the Parliamentary Standing Committee on Communications and Information Technology can coordinate this process. The proposed AISI for India should also be involved in conducting these risk assessments.
- Identify gaps in existing laws: A comprehensive gap analysis is essential to identifying areas where new regulations are required to address AI risks. We suggest an approach that identifies areas where: (1) no additional regulations are required (for example, in relation to job losses or superintelligence); (2) clarifications or targeted amendments would suffice (for example, in relation to deepfakes); and (3) new regulations are required to address market failures (for example, obligations with respect to transparency or rights in relation to algorithmic discrimination). This comprehensive analysis should involve participation from the Ministry of Consumer Affairs and the Department of Legal Affairs, and may be supplemented by research from academic institutes, law firms, think tanks, and the proposed AISI.
- Encourage self-regulation: Based on India’s larger strategic goals, we recommend that India adopt a light-touch, voluntary, and principle-based approach to AI regulation, at least for now. Self-regulation is a good starting point because it allows regulators to adapt to the pace of innovation, develop technical expertise by collaborating with the private sector, reduce administrative costs, and develop baseline industry norms before introducing other compliance measures. To start with, we suggest that companies operating in India adopt voluntary commitments in relation to the safety and testing of AI systems, misinformation, privacy, and security. We also see an important role for specialized industry bodies in developing and adopting voluntary codes for different market actors (advanced models, enterprise companies, startups, etc.) and sectors (healthcare, finance, education, etc.).
- Empower the government to address AI risks: We recommend that legal provisions be introduced to enable the government to swiftly address market failures, now or in the future. This would include measures to:
(a) enable the government to notify key entities involved in the AI value chain based on their evolving roles and responsibilities, and to clarify to whom specific regulations apply;
(b) empower the government to create new rights for citizens or impose legal obligations on any class of entities if there is evidence of market failure (for example, in relation to testing, transparency, audits, human oversight); and
(c) develop robust enforcement and consumer protection mechanisms that are designed to address the specific AI risks for which there is evidence of harm.
- Adopt a “whole of government” approach: AI governance requires a multi-faceted approach. Broadly, we recommend a three-part framework:
(a) The relevant sectoral ministries, departments, and regulators (such as the Reserve Bank of India (RBI), the Telecom Regulatory Authority of India (TRAI), the Central Consumer Protection Authority (CCPA), the Ministry of Health, the Ministry of Education, and so on) should take the lead in developing, monitoring, and enforcing regulations.
(b) MeitY, as the nodal ministry responsible for technology regulation, should set baseline requirements across sectors and support the adoption of voluntary codes of conduct.
(c) Inter-agency coordination on AI governance should be handled by the Prime Minister’s Office (PMO) or the National Security Council Secretariat (NSCS), given their cross-agency mandate and strong leadership. Separately, a national AI Safety Institute (AISI) would help provide independent expertise and coordinate with the international network of AISIs.
- Consult with experts on AI regulation: While the government has initiated a series of public consultations on the proposed Digital India Act, there is a need for a more focused debate on AI regulation with a variety of experts.135 Specific issues such as bias, privacy, security, and copyright require an interdisciplinary approach and more substantive policy discussions with multi-stakeholder groups to shape future policy.
Conclusion
As India’s policymakers carefully mull the next steps on AI regulation, the brief pause in this continuing advance offers an opportunity to reflect and readjust, lest they get trapped in path dependency or a mindless rush to regulate.
As the sentiment analysis in Part I illustrates, there is broad agreement that India should not adopt a comprehensive AI law. Some new regulations are warranted to address the risks of AI, but the scope of new rules and the ideal regulatory approach remain contentious.
Part II explains the intricate relationship between risk and harm and the need for empirical evidence, grounded in the local context, to inform India’s regulatory approach. We suggest focusing on five categories of AI risk and identifying high-risk use cases based on these risk vectors. We also offer some examples of areas where new regulations are required.
As the overview of different global approaches in Part III demonstrates, there is no one-size-fits-all approach to AI regulation. For a developing country like India, which is committed to reaping the full range of benefits from AI, we suggest self-regulation at least for the next six to twelve months (because binding regulation would entail significant costs, co-regulation is broken, and self-regulation is relatively efficient). However, as AI systems continue to evolve, the government should be empowered to prevent harm with clear legal mandates.
This paper suggests a few ways in which such provisions can be introduced. Regulation encompasses more than just laws. It also includes norms, standards, ethical practices, policy frameworks, institutional oversight, and soft law. Therefore, we suggest a “whole of government” approach in which sectoral agencies, MeitY, and an inter-ministerial body collaborate in a dynamic fashion. An AISI, designed from the ground up with India’s unique needs in mind, can also supplement state capacity.
Finally, and this is important, the process by which AI regulations are framed must be both participative and inclusive. Not only should the data, models, and applications that power India’s AI ecosystem be representative of its culture, but so too should the policy frameworks that shape its future trajectory. To that end, it behooves the government to initiate a series of consultations on this topic before continuing its advance on AI regulation.
Acknowledgements
The authors would like to thank Rudra Chaudhuri and Anirudh Burman for their feedback on a draft of this paper, and the interviewees and reviewers for their thoughtful comments.
Notes
1“What is the History of Artificial Intelligence,” Tableau, accessed November 14, 2024, https://www.tableau.com/data-insights/ai/history.
2Peter J. Bentley, Artificial Intelligence and Robotics: Ten Short Lessons (JHU Press, 2020), 5–20.
3“Legislative bodies in 127 countries passed 37 laws that included the words ‘artificial intelligence’ this past year,” in Shana Lynch, “2023 State of AI in 14 Charts,” Stanford University Human-Centered Artificial Intelligence, April 3, 2023, https://hai.stanford.edu/news/2023-state-ai-14-charts.
4See Philippe Lorenz, Karine Perset, and Jamie Berryhill, “Initial Policy Considerations for Generative Artificial Intelligence,” OECD Artificial Intelligence Papers no. 1, September 2023, https://www.oecd-ilibrary.org/deliver/fae2d1e6-en.pdf.
5See The EU AI Act, Regulation (EU) 2024/1689 (Enforced on June 13, 2024); Cyberspace Administration of China, National Development and Reform Commission, Ministry of Education, Ministry of Science and Technology, Ministry of Industry and Information Technology, Ministry of Public Security, State Administration of Radio, Film and Television, Interim Measures for the Management of Generative Artificial Intelligence Services (Enforced on August 15, 2023); The Government of Canada, Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (Tabled in the House of Commons on November 4, 2022); Department of Industry, Science and Resources, Australian Government, “Safe and Responsible AI in Australia,” September 2024.
6Ibid.
7Department for Science, Innovation & Technology, “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023,” November 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023; “G20 New Delhi Leaders’ Declaration,” Ministry of External Affairs, Government of India, September 9, 2023, https://www.mea.gov.in/Images/CPV/G20-New-Delhi-Leaders-Declaration.pdf; “GPAI Ministerial Declaration 2023,” The Global Partnership on Artificial Intelligence, accessed November 12, 2024, https://gpai.ai/2023-GPAI-Ministerial-Declaration.pdf; Governing AI for Humanity, final report, United Nations, AI Advisory Body, September 2024, https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf; “G7 Leaders’ Statement on the Hiroshima AI Process,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/g7-leaders-statement-on-the-hiroshima-ai-process/#.
8Soumyarendra Barik, “AI Law May Not Prescribe Penal Consequences for Violations,” The Indian Express, July 17, 2024, https://indianexpress.com/article/business/ai-law-may-not-prescribe-penal-consequences-for-violations-9457780/.
9See Amlan Mohanty, “Beyond the AI Advisory,” Techlawtopia, April 5, 2024, https://www.techlawtopia.com/beyond-the-ai-advisory/; See Amlan Mohanty and Shatakratu Sahu, “India’s AI Strategy: Balancing Risk and Opportunity,” Carnegie India, February 22, 2024, https://carnegieendowment.org/posts/2024/02/indias-ai-strategy-balancing-risk-and-opportunity?lang=en.
10Press Trust of India, “Why India Can Afford to Wait and Watch before Regulating AI,” The Economic Times, July 31, 2023, https://economictimes.indiatimes.com/tech/technology/why-india-can-afford-to-wait-and-watch-before-regulating-ai/articleshow/102269393.cms?from=mdr; See also Soibam Rocky Singh, “Stringent Regulations Could Hinder Growth of AI in India: Experts,” The Hindu, June 22, 2024, https://www.thehindu.com/sci-tech/technology/overly-strict-regulations-could-hinder-ai-growth-in-india-caution-experts/article68320814.ece.
11Pravin Anand et al., “Artificial Intelligence Law,” Lexology, July 23, 2024, https://www.lexology.com/indepth/artificial-intelligence-law/india.
12Pranjal Sharma et al., AI Governance in India: Aspirations and Apprehensions, report, Observer Research Foundation, December 6, 2023, https://www.orfonline.org/research/ai-governance-in-india-aspirations-and-apprehensions.
13Shinu Vig, “Regulating Deepfakes: An Indian Perspective,” Journal of Strategic Security 17, no. 3 (2024), https://doi.org/10.5038/1944-0472.17.3.2245.
14Shaoshan Liu, “India’s AI Regulation Dilemma,” The Diplomat, October 27, 2023, https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/; Gulveen Aulakh, “India Will Regulate AI to Ensure User Protection,” Mint, June 9, 2023, https://www.livemint.com/ai/artificial-intelligence/india-will-regulate-ai-to-ensure-user-protection-11686318485631.html.
15See “G20 New Delhi Leaders’ Declaration;” See “No Plan to Regulate AI, IT Ministry Tells Parliament,” The Hindu, April 5, 2023, https://www.thehindu.com/news/national/no-plan-to-regulate-ai-it-ministry-tells-parliament/article66702044.ece.
16“Regulation of hi-risk AI systems through legal, institutional quality testing framework to examine regulatory models, algorithmic accountability, zero-day threat & vulnerability assessment, examine AI based ad-targeting, content moderation etc,” in “Proposed Digital India Act, 2023,” Ministry of Electronics and Information Technology, Government of India, March 9, 2023, https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf.
17Advisory issued on March 1, 2024, under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021; revoked on March 15, 2024.
18See Amlan Mohanty, “Beyond the AI Advisory.”
19Government of India, Ministry of Electronics and Information Technology, Cyber Law and Data Governance Group, Due Diligence by Intermediaries/Platforms under the Information Technology Act, 2000, and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, (Issued on March 15, 2024), https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf; Soumyarendra Barik, “After Criticism, Govt Clarifies: AI Startups Don’t Need IT Ministry Approval,” The Indian Express, March 5, 2024, https://indianexpress.com/article/business/genai-startups-dont-need-to-seek-govt-nod-before-launch-rajeev-chandrasekhar-9194613/; “IT Minister Replaces AI Advisory, Drops Requirement of Government Permission,” The Hindu Bureau, March 16, 2024, https://www.thehindu.com/sci-tech/technology/it-ministry-replaces-ai-advisory-drops-requirement-of-governments-permission/article67957744.ece.
20See Shouvik Das, “AI Regulations Will Ideally Be Light Touch, Though Harm Is Concerning,” Mint, December 7, 2023, https://www.livemint.com/news/india/ai-regulations-will-ideally-be-light-touch-though-harm-is-concerning-11701961915125.html; See also “ET Startup Awards 2024 | Chips to AI... India’s Moving up the Digital Value Chain: Ashwini Vaishnaw,” The Economic Times, October 7, 2024, https://economictimes.indiatimes.com/tech/startups/et-startup-awards-2024-chips-to-ai-indias-moving-up-the-digital-value-chain-ashwini-vaishnaw/articleshow/113991859.cms?from=mdr where the IT Minister is quoted as saying, “I honestly don’t think that the way some of the countries have put so much regulation on AI, I don’t think that is the right way to go.”
21Interview with a senior Indian government official involved in AI policy (Interview no. 15).
22Sanjeev Sanyal, Pranav Sharma, and Chirag Dudani, A Complex Adaptive System Framework to Regulate Artificial Intelligence, EAC-PM/WP/26/2024, Economic Advisory Council to the PM, January 2024, https://eacpm.gov.in/wp-content/uploads/2024/01/EACPM_AI_WP-1.pdf.
23Aditya Sinha, “Why We Need to Be Proactive on AI Laws,” The New Indian Express, September 12, 2024, https://www.newindianexpress.com/opinions/2024/Sep/12/why-we-need-to-be-proactive-on-ai-laws#.
24Interview with a senior Indian government official involved in AI policy (Interview no. 16).
25See Principal Scientific Advisor to GoI, Facebook post, June 1, 2024, https://www.facebook.com/prinsciadvoff/posts/prof-ajay-kumar-sood-principal-scientific-adviser-to-the-government-of-india-del/768116795503180/.
26A copy of this draft report is available on file with the authors of this paper.
27Information Technology Act, 2000 (Enforced on October 17, 2000); Aashish Aryan, “Govt May Amend IT Act to Add New Rules for AI, GenAI models,” Economic Times, January 4, 2024, https://economictimes.indiatimes.com/tech/technology/govt-may-amend-it-act-to-add-new-rules-for-ai-genai-models/articleshow/106524019.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst.
28“Unchecked Use of AI in Banking Poses Risk: RBI Governor Shaktikanta Das,” The Times of India, October 15, 2024, https://timesofindia.indiatimes.com/business/india-business/unchecked-use-of-ai-in-banking-poses-risk-rbi-governor-shaktikanta-das/articleshow/114227753.cms.
29M. Rajeshwar Rao, virtual address at the 106th Annual Conference of Indian Economic Association, Innovations in Banking - The Emerging Role for Technology and AI, Delhi, December 22, 2023, Reserve Bank of India, January 1, 2024, https://www.rbi.org.in/Scripts/BS_SpeechesView.aspx?Id=1400.
30Interview with a senior partner at a law firm in India (Interview no. 12).
31Interview with an industry executive at a global technology company (Interview no. 2).
32Interview with a technologist and entrepreneur in India (Interview no. 10).
33Interview with a founder of an Indian AI startup (Interview no. 9).
34Interview with industry executives at a global technology company (Interview no. 1).
35Kent Walker, “7 Principles for Getting AI Regulation Right,” blog, Google, June 26, 2024, https://blog.google/outreach-initiatives/public-policy/7-principles-for-getting-ai-regulation-right.
36See “Yuval Noah Harari: Panel Discussion on Technology and the Future of Democracy,” YouTube video, posted by “Yuval Noah Harari,” October 4, 2020, accessed October 23, 2024, https://www.youtube.com/watch?v=JfyIW9wRvB4.
37“Governing AI: A Blueprint for India,” Microsoft, accessed November 12, 2024, https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2023/08/MSFT_Governing_AI_BlueprintFuture_India_Web.pdf.
38Urvashi Mishra, “AI Risks Need Precision Regulations: IBM India’s Patel,” Fortune India, April 5, 2024, https://www.fortuneindia.com/enterprise/ai-risks-need-precision-regulations-ibm-indias-patel-2/116359.
39Kent Walker, “7 Principles for Getting AI Regulation Right;” See An AI Opportunity Agenda for India, (Google, October 2024), https://static.googleusercontent.com/media/publicpolicy.google/en//resources/india_ai_opportunity_agenda_en.pdf.
40Interviews with industry executives at global technology companies (Interview no. 1, 2, 3, and 4).
41Interview with an independent lawyer and researcher (Interview no. 14).
42Ibid.
43Interview with an independent lawyer and researcher (Interview no. 14).
44Interview with an academic and AI researcher in India (Interview no. 6).
45Interview with an academic and AI researcher in India (Interview no. 6).
46Interview with an independent lawyer (Interview no. 13).
47Interview with a senior law firm partner (Interview no. 11).
48Interview with a senior law firm partner (Interview no. 12).
49Michael J. D. Vermeer, “Historical Analogues That Can Inform AI Governance,” (RAND Corporation, August 19, 2024), https://www.rand.org/pubs/research_reports/RRA3408-1.html.
50Vijay Kelkar and Ajay Shah, In Service of the Republic: The Art and Science of Economic Policy (Penguin Random House India Private Limited, 2019).
51See “AI Principles,” Organisation for Economic Co-operation and Development (OECD), accessed November 12, 2024, https://www.oecd.org/en/topics/sub-issues/ai-principles.html.
52Surabhi Agarwal and Aashish Aryan, “New AI Law to Secure Rights of News Publishers: Ashwini Vaishnaw,” The Economic Times, April 5, 2024, https://economictimes.indiatimes.com/tech/technology/exclusive-new-ai-law-to-secure-rights-of-news-publishers-ashwini-vaishnaw/articleshow/109043916.cms?from=mdr.
53The pacing problem describes how technological innovation outpaces the ability of laws and regulations to keep up, resulting in significant ramifications for the governance of these technologies. Adam Thierer, “The Pacing Problem and the Future of Technology Regulation,” Mercatus Center, August 8, 2018, https://www.mercatus.org/economic-insights/expert-commentary/pacing-problem-and-future-technology-regulation.
54The “black box” problem in AI refers to the difficulty in understanding and explaining the inner workings of complex AI models, particularly those used in machine learning and deep learning. Cynthia Rudin and Joanna Radin, “Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition,” Harvard Data Science Review 1, no. 2 (2019), https://doi.org/10.1162/99608f92.5a8a3a3d.
55Interview with an academic and AI researcher in India (Interview no. 6).
56Interview with an academic and AI researcher in India (Interview no. 6).
57Interview with a founder of an Indian AI startup (Interview no. 9).
58Mimansa, “Govt is Looking to Regulate AI Applications, Not AI Technology: Additional Secretary to IT Ministry,” MEDIANAMA, September 11, 2024, https://www.medianama.com/2024/09/223-government-regulating-ai-application-not-ai-technology-meity-additional-secretary/.
59The EU AI Act, Regulation (EU) 2024/1689 (Enforced on June 13, 2024).
60Interview with a founder of an Indian AI startup (Interview no. 8).
61Information Technology Act, 2000 (Enforced on October 17, 2000).
62Meghna Bal and N. S. Nappinai, “Crafting a Liability Regime for AI Systems in India,” Esya Centre and Cyber Saathi Foundation, September 2024, https://www.esyacentre.org/s/ESYA-Centre-Report-Crafting-a-Liability-Regime-for-AI-Systems-in-India.pdf.
63See Margot E. Kaminski, “Regulating the Risks of AI,” Boston University Law Review 103 (2023): 1347, https://doi.org/10.2139/ssrn.4195066.
64See “Government Working to Regulate AI: IT Minister Ashwini Vaishnaw,” The Economic Times, July 3, 2024, https://economictimes.indiatimes.com/tech/artificial-intelligence/government-working-on-regulation-for-ai-it-minister-ashwini-vaishnaw/articleshow/111454670.cms?from=mdr; See also Muntazir Abbas, “AI Should Not Cause User Harm, Ready to Legislate: MeitY Secretary S Krishnan,” The Economic Times, August 8, 2024, https://economictimes.indiatimes.com/industry/telecom/telecom-news/ai-should-not-cause-user-harm-ready-to-legislate-meity-secretary-s-krishnan/articleshow/112365799.cms?from=mdr.
65Interviews with industry executives at global tech companies (Interview no. 1, 2, 3, and 4) and publicly available information.
66Zoe Kleinman, “AI ‘Godfather’ Yoshua Bengio Feels ‘Lost’ Over Life’s Work,” BBC, May 31, 2023, https://www.bbc.com/news/technology-65760449; Yoshua Bengio et al., International Scientific Report on the Safety of Advanced AI: Interim Report, DSIT Research Paper Series no. 2024/009 (Government of the United Kingdom, May 2024), https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai.
67Margot E. Kaminski, “Regulating the Risks of AI,” 1362, https://doi.org/10.2139/ssrn.4195066.
68Interview with a senior partner at a law firm in India (Interview no. 11).
69Yoshua Bengio et al., International Scientific Report on the Safety of Advanced AI: Interim Report, 12.
70See Christiane Wendehorst, “Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks,” in The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, ed. Oliver Mueller et al., Cambridge Law Handbooks (Cambridge University Press, 2022), 187–209, https://doi.org/10.1017/9781009207898.016.
71“The AI Risk Repository,” MIT AI Risk Repository, accessed October 23, 2024, https://airisk.mit.edu/.
72See The EU AI Act, Regulation (EU) 2024/1689.
73See Responsible AI #AIForAll, Approach Document for India: Part 2 – Operationalizing Principles for Responsible AI, report, (NITI Aayog, Government of India, August 2021), https://www.niti.gov.in/sites/default/files/2021-08/Part2-Responsible-AI-12082021.pdf.
74See “Recommendations on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector,” Telecom Regulatory Authority of India, Government of India, July 20, 2023, 13 onwards, https://www.trai.gov.in/sites/default/files/Recommendation_20072023_0.pdf.
75“TEC Draft Standard for Fairness Assessment and Rating of Artificial Intelligence Systems,” Telecommunication Engineering Centre, Department of Telecommunications, Ministry of Communications, Government of India, December 2022, https://www.tec.gov.in/pdf/SDs/TEC%20Draft%20Standard%20for%20fairness%20assessment%20and%20rating%20of%20AI%20systems%20final%202022_12_27.pdf.
76We have identified these five categories of risk based on our analysis of the existing global literature, a review of risk assessments that have been transposed into laws and regulations, and discussions with experts in India on the types of harm for which there is anecdotal evidence.
77Interviews with industry executives at global technology companies (Interview no. 3, 4, and 17) and a public statement by Abhishek Singh.
78See Arsen Kourinian and Mayer Brown, “Data Collection & Management, Overview - Conducting an AI Risk Assessment,” Bloomberg Law, accessed November 13, 2024, https://www.bloomberglaw.com/external/document/X3D03D2K000000/data-collection-management-overview-conducting-an-ai-risk-assess.
79PIB Delhi, “Competition Commission of India (CCI) invites proposal for launching Market Study on Artificial Intelligence and Competition in India,” Press Information Bureau, April 22, 2024, https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2018466.
80See Chapter 2, “The Tasks of Financial Law,” in Report of the Financial Sector Legislative Reforms Commission, Volume 1, (Department of Economic Affairs, Ministry of Finance, Government of India, March 2013), https://dea.gov.in/sites/default/files/fslrc_report_vol1_1.pdf; Vijay Kelkar and Ajay Shah, In Service of the Republic: The Art and Science of Economic Policy.
81Rajeev Chandrasekhar (then Minister of State for Electronics and Information Technology) pointed out that AI posed a minimal threat to jobs and therefore did not require overly strict regulation; “AI Not a Threat to Jobs, It’s Just Task Oriented at the Moment: Union Minister Rajeev Chandrasekhar,” The Economic Times, June 9, 2023, https://economictimes.indiatimes.com/news/india/ai-in-current-form-no-threat-to-jobs-minister-rajeev-chandrasekhar/articleshow/100875953.cms; See Shaoshan Liu, “India’s AI Regulation Dilemma,” The Diplomat, October 27, 2023, https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/.
82See ‘Real’ Laws for Artificial Intelligence: An Introductory Guide to AI Regulation, (PwC Australia, February 2024), https://www.pwc.com.au/services/artificial-intelligence/regulating-ai-article.pdf.
83Interview with an independent lawyer (Interview no. 13).
84See also Pravin Anand et al., “Artificial Intelligence Law,” Lexology.
85Interview with an academic and AI researcher in India (Interview no. 6).
86Interview with an independent lawyer (Interview no. 13).
87Interview with an academic and AI researcher in India (Interview no. 6).
88Interviews with an independent lawyer and researcher (Interview no. 14) and with a technologist and entrepreneur in India (Interview no. 10).
89Based on a preliminary review of the existing legal framework in India and our analysis of the important risks connected to AI contained in Part 2 of this paper.
90See The Consumer Protection Act, 2019, Act no. 35 of 2019, https://www.indiacode.nic.in/bitstream/123456789/15256/1/a2019-35.pdf; and “Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022,” (notified on June 9, 2022), https://consumeraffairs.nic.in/sites/default/files/CCPA_Notification.pdf.
91See Table 3 relating to applicability of existing laws to circulation of deepfakes in this paper.
92See Amlan Mohanty, “Beyond the AI Advisory.” It is unclear if the advisory applies to the developer of the AI system, the system integrator, application service provider or social media company, or some combination of these entities.
93See Christoph Schmon, “Automated Decision Making and Artificial Intelligence—A Consumer Perspective,” BEUC Position Paper, The European Consumer Organisation, June 20, 2018, https://www.beuc.eu/sites/default/files/publications/beuc-x-2018-058_automated_decision_making_and_artificial_intelligence.pdf.
94Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems, Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal version of June 13, 2024, https://artificialintelligenceact.eu/article/50/.
95“Industry Self Regulation: Role and Use in Supporting Consumer Interests,” OECD Digital Economy Papers, no. 247 (OECD, March 1, 2015), 11, https://doi.org/10.1787/5js4k1fjqkwh-en.
96“AI Principles,” Organisation for Economic Co-operation and Development (OECD).
97Florence G’sell, “Regulating under Uncertainty: Governance Options for Generative AI,” October 6, 2024, https://doi.org/10.2139/ssrn.4918704.
98The EU AI Act, Regulation (EU) 2024/1689 (Enforced on June 13, 2024).
99Florence G’sell, “Regulating under Uncertainty: Governance Options for Generative AI.”
100The EU AI Act, Regulation (EU) 2024/1689.
101See Matt Sheehan, “China’s AI Regulations and How They Get Made,” Carnegie Endowment for International Peace, July 10, 2023, https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en.
102See Hiroki Habuka, “Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency,” Center for Strategic and International Studies, February 14, 2023, https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency.
103Singapore’s sectoral approach is evident in the fact that the Monetary Authority of Singapore has released principles for responsible AI use, the Health Ministry has published AI healthcare guidelines to promote patient safety in the use of AI medical devices, and the Infocomm Media Development Authority (IMDA) has issued a model framework along with a self-assessment guide to align with its AI governance framework. See “Artificial Intelligence in Healthcare Guidelines (AIHGle),” Ministry of Health, Singapore, Health Science Authority, IHiS, October 2021, https://www.moh.gov.sg/docs/librariesprovider5/eguides/1-0-artificial-in-healthcare-guidelines-(aihgle)_publishedoct21.pdf; and “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector,” Monetary Authority of Singapore, November 12, 2018, https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Information%20Papers/FEAT%20Principles%20Final.pdf.
104See Responsible AI #AIForAll, Approach Document for India: Part 2 – Operationalizing Principles for Responsible AI, report, (NITI Aayog, Government of India, August 2021); See DHR-ICMR Artificial Intelligence Cell, “Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare,” Indian Council of Medical Research, 2023, https://main.icmr.nic.in/sites/default/files/upload_documents/Ethical_Guidelines_AI_Healthcare_2023.pdf; These voluntary guidelines have been published to ensure ethical conduct and to address AI-related ethical concerns in biomedical research and healthcare. Stakeholders involved in biomedical research and healthcare, such as creators, developers, researchers, doctors, ethics committees, organisations, sponsors, and funding bodies, are the relevant actors for these guidelines.
105Interview with an industry executive at a global technology company (Interview no. 2).
106Interview with a technologist and entrepreneur in India (Interview no. 10).
107Interview with an industry executive at a global technology company (Interview no. 2).
108The Code for Self-Regulation of Advertising Content in India (Advertising Standards Council of India, June 2022 edition), https://www.ascionline.in/wp-content/uploads/2022/11/asci_code_of_self_regulation.pdf; and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, (Updated on April 6, 2023), https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf.
109Interview with an independent lawyer and researcher (Interview no. 14).
110“Responsible AI: Guidelines for Generative AI,” NASSCOM, June 2023 Edition, https://www.nasscom.in/ai/img/GenAI-Guidelines-June2023.pdf.
111Interview with an industry executive at a global technology company (Interview no. 1).
112Interview with an industry executive at a global technology company (Interview no. 3).
113Interview with a technologist and entrepreneur in India (Interview no. 10).
114Interview with senior Indian government officials involved in AI policy (Interview no. 15 and 16).
115Interview with senior law firm partners (Interview no. 11 and 12).
116See PIB Delhi, “Cabinet Approves Ambitious IndiaAI Mission to Strengthen the AI Innovation Ecosystem,” March 7, 2024, https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2012355.
117PIB Delhi, “UNESCO and MeitY organise National Stakeholder Workshop on Ethics of AI,” Press Information Bureau, June 5, 2024, https://pib.gov.in/PressReleasePage.aspx?PRID=2022930.
118Report of Committee – B on Leveraging AI for Identifying National Missions in Key Sectors, Ministry of Electronics and Information Technology, Government of India, July 2019, https://www.meity.gov.in/writereaddata/files/Committes_B-Report-on-Key-Sector.pdf.
119Rekha M. Menon, Madhu Vazirani, and Pradeep Roy, “Rewire for Success: Boosting India’s AIQ,” Accenture, April 26, 2021, https://www.accenture.com/content/dam/accenture/final/a-com-migration/r3-3/pdf/pdf-153/accenture-ai-for-economic-growth-india.pdf; Access Partnership, “Economic Impact Report: Accelerating India’s Digital Leadership with Google,” October 2023, https://cdn.accesspartnership.com/wp-content/uploads/2024/01/Accelerating-Indias-Digital-Leadership-with-Google.pdf.
120See Jun-E Tan, “To What Extent Does Malaysia’s National Fourth Industrial Revolution Policy Address AI Security Risks?,” in Reframing AI Governance: Perspectives from Asia, ed. Urvashi Aneja (Digital Futures Lab; Konrad-Adenauer-Stiftung, 2022), 39.
121“Indians Most Optimistic About AI in the World: Google Report,” The Indian Express, February 13, 2024, https://indianexpress.com/article/technology/tech-news-technology/indians-artificial-intelligence-google-ipsos-9159149/.
122Andrea Renda et al., Study to Support an Impact Assessment of Regulatory Requirements for Artificial Intelligence in Europe, Final Report D5 (Publications Office of the European Union, 2021), https://op.europa.eu/en/publication-detail/-/publication/55538b70-a638-11eb-9585-01aa75ed71a1.
123Pallavi Rao, “These Are the EU Countries With the Largest Economies,” World Economic Forum, February 1, 2023, https://www.weforum.org/agenda/2023/02/eu-countries-largest-economies-energy-gdp/; See Ben Thompson, “The EU Goes too Far,” Stratechery, July 8, 2024, https://stratechery.com/2024/the-e-u-goes-too-far/?utm_source=substack&utm_medium=email; Lionel Laurent, “Europe’s New AI Rules Have US in Mind. It Might Miss,” Bloomberg, August 1, 2024, https://www.bloomberg.com/opinion/articles/2024-08-01/europe-s-new-ai-rules-have-us-tech-in-mind-they-might-miss?embedded-checkout=true&utm_source=substack&utm_medium=email.
124Interview with an industry executive at a global technology company (Interview no. 2).
125Section 10 (2) of the Digital Personal Data Protection Act, 2023, which says that a “Significant Data Fiduciary” shall appoint a Data Protection Officer who shall be based in India.
126Interview with the founder of an Indian AI startup (Interview no. 8).
127Aditi Agrawal, “Union Budget: ₹2 Crore for Data Protection Board Establishment, Salary Expenses,” Hindustan Times, July 23, 2024, https://www.hindustantimes.com/india-news/union-budget-2-crore-for-data-protection-board-establishment-salary-expenses-101721733699772.html.
128Interview with former government of India official (Interview no. 7).
129Interview with an executive at an Indian industry body (Interview no. 5).
130Amlan Mohanty and Tejas Bharadwaj, “The Importance of AI Safety Institutes,” Carnegie India, June 28, 2024, https://carnegieindia.org/posts/2024/06/the-importance-of-ai-safety-institutes?lang=en; and Rudra Chaudhuri, “Disrupting AI Safety Institutes: The India Way,” Carnegie India, September 17, 2024, https://carnegieindia.org/posts/2024/09/disrupting-ai-safety-institutes-the-india-way?lang=en.
131Aditi Agrawal, “Govt Mulls Setting Up Artificial Intelligence Safety Institute,” Hindustan Times, October 13, 2024, https://www.hindustantimes.com/india-news/govt-mulls-setting-up-artificial-intelligence-safety-institute-101728833433153.html; Interview with an independent lawyer and researcher (Interview no. 14).
132Interview with multiple experts (Interview no. 5, 7, 11, 12, 13, 14).
133See “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
134See PIB Delhi, “Competition Commission of India (CCI) invites proposal for launching Market Study on Artificial Intelligence and Competition in India,” Press Information Bureau, April 22, 2024, https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2018466.
135See “Proposed Digital India Act, 2023,” Ministry of Electronics and Information Technology, Government of India, March 9, 2023.