This essay shares insights on key elements of India’s AI strategy and outlines some of the trade-offs involved in balancing risks with opportunities.
This essay is part of a series that highlights the main takeaways from discussions that took place at Carnegie India’s eighth Global Technology Summit, co-hosted with the Ministry of External Affairs, Government of India.
India’s national strategy on artificial intelligence (AI) seeks a delicate balance between fostering innovation and mitigating risk.
At the Global Technology Summit (GTS) 2023, India’s AI strategy was a key topic of discussion—India’s ministerial representative spoke about the need for policy enablers and guardrails, industry leaders presented a use-case-led AI strategy, and global policymakers emphasized the value of India’s governance model for the rest of the world.
In this essay, we dig deeper into these themes and share insights on the key elements of India’s national AI strategy and the trade-offs involved in balancing risks and opportunities.
Over the years, the Indian government has actively encouraged AI applications for social welfare. Some examples include applications to detect diseases, increase agricultural productivity, and promote linguistic diversity.
India’s proposed model to deliver impact at scale using technology is compelling. For example, its recent efforts to leverage digital public infrastructure (DPI) to promote financial inclusion have been endorsed by the World Bank in glowing terms.
Now, with the world’s attention quickly moving to the transformative potential of AI, India’s national strategy will be influential. In particular, India’s pro-innovation and welfare-based approach to AI holds immense value for developing countries in the Global South.
On the global stage, India has delivered this message emphatically. The recent G20 leaders’ declaration, signed in New Delhi, advocates a “pro-innovation governance approach” to AI. Subsequently, at the Global Partnership on Artificial Intelligence Summit, which was also hosted by India, the concept of “collaborative AI” was formulated, wherein member states agreed to promote equitable access to AI resources for the developing world.
In this section, we outline three factors that policymakers should consider as they seek to translate these principles into policy as a part of India’s national AI strategy.
Despite the attention on AI-led innovation, India’s policymakers remain sensitive to the risks at hand. At the AI Safety Summit in November 2023, India’s ministerial representative stated that innovation should not get ahead of regulation and signed the Bletchley Declaration, which outlines the safety risks of “frontier models.” In addition, India has endorsed the need for further engagement on fairness, accountability, transparency, privacy, intellectual property, and the development of trustworthy and responsible AI.
With that said, India’s domestic approach to regulating AI remains underdeveloped. While the principles of openness, safety, trust, and accountability have been a core part of the government’s regulatory agenda, there does not appear to be a coherent strategy to regulate AI at present. For example, the current strategy to combat deepfakes has been to issue ad hoc advisories and legal threats even as the problem persists. Some have criticized this approach as “surface-level, rushed, and lacking deep research.”
Instead, the government should adopt a holistic approach to AI governance through the prism of risk and safety. It could issue technical guidance on how the responsible AI principles published in 2021 may be implemented to address the issue of misinformation generally. This may prove more helpful than issuing responses in the wake of specific incidents involving public figures. By doing so, the government would establish thresholds for transparency and accountability that can be transposed to different contexts.
India should also have a clear strategy to address emerging regulatory issues involving AI systems. This will entail developing a risk-based taxonomy, an updated platform classification framework for different actors in the AI value chain, and appropriate liability frameworks, including safe harbor protections for AI systems.
Governments around the world are looking for a model that strikes the right balance between innovation and safety. As India looks to formalize its strategy with a national AI program that is expected to cost more than a billion dollars, there is significant global interest.
India’s success in deploying technology at scale through innovative uses of DPI has captured the world’s attention. At the same time, its proposed light-touch approach to AI regulation is likely to resonate with countries in the Global South, which do not want the developed world’s obsession with “existential risk” to hinder their march to progress.
There is a lot riding on India’s AI strategy, both for India and the rest of the world.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.