Nidhi Singh, Tejas Bharadwaj, Shruti Mittal, …
Outlooks on Open-Source Innovation at the India AI Impact Summit 2026
Drawing on ten public discussions from the India AI Impact Summit 2026, this article highlights key outlooks on open source in AI that are likely to shape policy and governance conversations going forward.
At the recently concluded India AI Impact Summit, several discussions brought renewed focus to the future of global open-source artificial intelligence (AI) innovation, positioning it as an approach for inclusive development in the Global South, a tool for sovereignty, and a lever to catch up in the global AI competition. Drawing on ten public discussions from the main summit, this article highlights key outlooks on open source in AI that are likely to shape policy and governance conversations going forward.1
The Open vs. Closed Source AI Debate
Open source is deeply embedded within our global digital infrastructure. Over the last three decades, open-source software development has been widely embraced for its net-positive role in democratizing innovation and access to technology, with both public and private actors heavily dependent on it. In the context of AI, however, there has been much debate on the proliferation of open-source frontier AI models, given competing considerations like innovation, competition, inclusive development, national sovereignty, public safety, and trustworthiness.
At its core, the debate is centered on whether the benefits of openness—democratizing access, collaborative innovation, and reduced barriers to entry—outweigh the risks of misuse and the removal of safety guardrails by malicious actors. This open versus closed AI debate has played out most visibly in the context of the U.S.–China tech rivalry, where the release of China’s open-source DeepSeek models in 2025 challenged U.S. leadership in frontier AI innovation, both in performance parity and in cost-competitiveness. In open-sourcing its large language models (LLMs), the Chinese startup DeepSeek opened its artifacts, such as model weights, parameters, tooling, and even its scientific research, to public inspection and modification, leading to mass adoption. This, however, also prompted global communities to reflect on the perceived double-edged nature of open-source AI: the very transparency and modifiability that enable independent safety audits and broad adoption also expose a distinct set of risks and vulnerabilities. For example, developers can locally download and modify open-source models to create their own versions, stripped of security guardrails and safety mechanisms.
Another key aspect of the open versus closed AI debate has centered on determining what qualifies as truly open-source AI models, a question prompted by the release of open-weight models, which provide access to the final trained parameters for running and fine-tuning models but keep the training data, code, and methodologies closed, and are marketed as “open-source.” This debate has been significant for the technical open-source ecosystem, since the ability to inspect, modify, and replicate are some of the core characteristics of open source as a model of development. In some jurisdictions like the EU, this discussion has also been important, given legal exemptions for open-source AI models under the EU AI Act.
India’s Position on Open-Source AI and the Delhi Declaration
India’s position on open-source innovation has consistently favored open-source approaches, viewing them as a means to reduce technology costs, preserve sovereignty, and enable domestic adaptation of foundational tools. Over the last three decades, Indian government policies have demonstrated a clear preference for open-source over closed, proprietary options, particularly for public procurement of infrastructure and applications, as well as in departmental operations.2 More recent examples of the government’s position are found in initiatives by the Ministry of Electronics and Information Technology (MeitY), such as the India AI Kosh, an open, national repository of AI models and datasets to support local AI application development, and the BHASHINI platform, which acts as a unified repository for multilingual datasets and a combination of open and proprietary AI models to facilitate large-scale language translation.
Adoption patterns reinforce this beyond policy. A recent survey by the Competition Commission of India (CCI) found that 76 percent of Indian startups build their application solutions on open-source technologies. This extends to protocols: in early 2026, India’s National Statistics Office (NSO) integrated the Model Context Protocol (MCP), an open-source standard for connecting AI systems to external data sources, into its e-Sankhyiki portal, enabling AI agents to directly query official government datasets. At the India AI Summit in March 2026, the government-partnered startup Sarvam AI released two foundational LLMs under the permissive Apache 2.0 open-source license, trained on government-subsidized compute, cementing this alignment at the level of model development itself.
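The mechanics behind an MCP integration are simpler than the policy significance might suggest: MCP messages follow the JSON-RPC 2.0 format, and a client invokes a server-exposed tool with a `tools/call` request. The sketch below builds such a request in Python; the tool name and arguments are hypothetical illustrations, not the actual schema of the NSO’s e-Sankhyiki integration.

```python
import json


def build_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request body of the kind MCP uses for tool calls.

    "tools/call" is the method an MCP client sends to invoke a tool that
    a server has exposed (for example, a dataset-query tool).
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)


# Hypothetical example: an AI agent asking a statistics server for a dataset.
# The tool name "query_dataset" and its arguments are invented for illustration.
body = build_mcp_tool_call(1, "query_dataset", {"dataset": "cpi", "year": 2025})
print(body)
```

Because the wire format is an open standard, any compliant agent can issue the same request shape against any compliant server, which is what makes a government data portal directly queryable by third-party AI systems.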
India’s open-source posture has not been without friction. Government restrictions on DeepSeek’s Chinese-hosted interfaces drew significant attention, but the concern was primarily about data residency on Chinese servers, not the open-source model architecture itself. In closed-door discussions, however, policymakers have treated DeepSeek’s release as a technical asset—a blueprint for model distillation and evidence that sovereign AI development is achievable with constrained resources.3
In the Delhi Declaration, adopted by eighty-eight countries and international organizations, including China, France, Japan, Russia, the United Kingdom, and the United States, India has sought to anchor this position multilaterally. The declaration explicitly acknowledges open-source AI as a driver of scalability and adaptability across sectors, while framing openness as appropriate where contextually warranted rather than unconditionally beneficial. Its focus on access and inclusion, and its implicit treatment of openness as a spectrum, also marks a step beyond the Paris Summit, where the United States and the UK had declined to sign a comparable declaration on inclusive and sustainable AI.
The broad buy-in reflected the multi-dimensional appeal of openness: different delegations at the summit found common ground in the concept not because they shared identical interests, but because openness could simultaneously serve equity, sovereignty, and coordination goals. The four sections below trace how these distinct framings played out across the summit’s substantive discussions.
Openness Across Different Organizing Principles
Openness surfaced at the summit across three distinct framings for cross-border AI collaboration. First, openness was framed as a driver for scaling AI adoption for socio-economic good across the Global South. Some at the summit, however, also cautioned against this approach, given the risk of distracting from the structural needs of the Global South, such as talent, infrastructure, and governance.
Second, openness was referred to as a lever for middle powers challenging the U.S.–China AI duopoly, enabling them to catch up by leveraging the knowledge and findings of others. In doing so, middle powers could form an ad hoc coalition to collectively challenge the J.D. Vance view of the world, in which other nations are merely customers of American AI technologies.
Third, openness was framed as the organizing principle for global coordination of complex systems. With AI becoming the “coordination layer” of the global economy by automating how complex systems interact with each other, fostering openness in the form of interoperability standards, cross-border validation, and mutual safety principles becomes essential to ensure alignment across different AI environments. It was also highlighted that without such coordination, mismatched systems would create instability and systemic risk.
India, straddling multiple identities, is uniquely positioned to advance openness as a driver for redistributing value by taking the lead in reusing knowledge and reducing duplication of effort, while also ensuring that participation translates into agency by shaping some of the standards underpinning global stability. However, India would do well to lean into the Global South framing of openness, allowing it to move beyond being seen as a market and instead act as a convenor of solidarity by shaping how openness delivers equitable outcomes.
Democratizing AI Trustworthiness
One of the most significant ways in which the summit furthered the conversation on open source was a clear recognition of the role played by such ecosystems beyond AI model development. A core argument that emerged was that open-source development is preferable to proprietary development precisely because it disrupts the concentration of knowledge and power that proprietary AI creates. One such discussion highlighted that the closed or proprietary approach to frontier AI model development adopted by “AI empires” has enabled a few players to monopolize knowledge production. This, in turn, has enabled them to control who can investigate frontier models, positioning them as the only ones to advise policymakers or inform the public on these technologies. Openness in AI development is one way to challenge this monopoly.
Despite differences in wording, there was a general agreement on the need for open-source tools for evaluating AI models and systems in order to democratize the ability to establish AI trustworthiness and, in turn, AI governance. For the global majority, a shared, open-source “evaluation stack” is the most efficient way to share and reuse knowledge so that limited resources are not fractured across multiple organizations. Some of the tools discussed included benchmarking, content detection, and safety evaluation.
Open-source benchmarking tools, that is, standardized tests that assess AI models across different metrics of performance, safety, fairness, and bias, have become the industry favorite for carrying out AI evaluation. While benchmarks have enabled researchers to evaluate models in a reproducible way, many at the summit cautioned that they often hyper-focus on general performance rather than on specific failure points, such as model biases specific to local contexts or vulnerabilities in low-resource languages. In this context, it was highlighted that the open-source AI tooling ecosystem currently lacks a Global South focus. Given these limitations, summit discussions emphasized the importance of open-sourcing more contextual tooling, such as AI red-teaming software that would allow researchers to evaluate model performance in structured, real-world scenarios and proactively identify different points of failure. Similar importance was given to open-sourcing content moderation tools, such as those for detecting Child Sexual Abuse Material (CSAM) in AI models, which were previously affordable only to major AI companies.
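The gap the panelists identified, between aggregate scores and specific failure points, can be made concrete with a minimal evaluation harness. The sketch below is illustrative only: the toy model and test cases are invented, and real benchmarks are far larger. Its point is that a report surfacing individual failures, not just an accuracy number, is what makes locally contextual evaluation possible.

```python
from typing import Callable


def run_benchmark(model: Callable[[str], str],
                  cases: list[tuple[str, str]]) -> dict:
    """Score a model on (prompt, expected) pairs.

    Returns both an aggregate accuracy and every failing case, so that
    specific failure points stay visible rather than being hidden
    behind a single headline number.
    """
    failures = [(prompt, expected, model(prompt))
                for prompt, expected in cases
                if model(prompt).strip() != expected]
    accuracy = 1 - len(failures) / len(cases)
    return {"accuracy": accuracy, "failures": failures}


def toy_model(prompt: str) -> str:
    # Stand-in for a real model; answers are hard-coded for illustration.
    return {"2+2": "4", "capital of India": "New Delhi"}.get(prompt, "?")


cases = [("2+2", "4"), ("capital of India", "New Delhi"), ("3*3", "9")]
report = run_benchmark(toy_model, cases)
print(report["accuracy"])   # two of three cases pass
print(report["failures"])   # the one failing prompt, with expected vs. actual
```

Swapping in prompts drawn from low-resource languages or local contexts, rather than generic performance items, is the kind of contextual tooling the summit discussions called for.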
Finally, a key misconception highlighted in this context was the conflation of open-source “tools” for AI safety with open-source “findings” or data: the presence of the former does not guarantee the latter. Emphasizing the value of making such findings open, summit discussions raised concerns about using open-source AI safety tools to generate closed findings. Using an open tool to find a model’s flaws, while keeping those flaws secret, would limit social progress.
National Sovereignty and Adaptability
AI sovereignty was a recurrent theme across panel discussions on openness. One panel in particular dealt with this subject most directly, addressing how to define AI sovereignty, what value an open-source approach offers, and what challenges are involved. It was highlighted that governments need to make decisions about what constitutes AI sovereignty across the AI stack. This ranges from access to hard infrastructure, which is subject to supply chain constraints and geopolitics, to agency over server operations, so as not to depend on foreign server providers who may be subject to export controls, to control over sensitive private and public data. Moreover, governments are also grappling with cultural or linguistic sovereignty, given the cultural limitations of frontier models developed in the Global North for the Global North.
Pragmatically, however, achieving full-stack sovereignty is not feasible for most governments in the world. Building end-to-end AI capacity, from semiconductor fabrication and data center infrastructure to frontier model training, demands extraordinary capital investment, specialized talent at scale, and access to global supply chains that are currently concentrated in a handful of countries. Replicating this entire stack independently is therefore neither financially viable nor strategically efficient.
Further, even across specific layers of the AI stack, moving away from foreign platform dependency toward greater sovereignty need not mean that a government must operate in isolation or invest in blank-slate, duplicative efforts. In this context, open-source proponents at the summit, including the Mozilla Foundation, OpenUK, and Red Hat, emphasized the history of open infrastructure in contributing to greater private innovation and reduced costs, citing examples such as the Internet and Linux, among others.
Given the range of capital-intensive inputs required in AI, there was also clear acknowledgement of the limits beyond which open-source software economics do not apply. Still, the key merits of open source in AI are threefold. First, it distributes costs across shared commons, allowing countries to build on collectively developed foundations, such as open datasets, model weights, and tooling, rather than independently bearing the full expense of building from scratch. Second, it provides optionality to developers and builders. Third, and most significantly, it allows countries to reuse shared innovation and invest in domestic capacity to adapt such foundational technologies to their local contexts.
Risks and Challenges
At the same time, discussions underscored that the potential for nefarious use of open-source AI continues to be a major concern across different stakeholders, despite the open-source ecosystem’s decades of experience in addressing security vulnerabilities transparently. Moreover, the growing adoption of agentic AI could further shift the risk calculus for open-source AI.
Technical discussions also touched upon the challenges associated with AI-generated code and automated agentic contributions to open-source AI projects. Functionally, the ease of creating AI-generated code has led to a flood of low-quality contributions, or “AI slop,” which in turn is creating an overhead problem for maintainers. Given that open source is fundamentally a human-led social process, open-source advocates emphasized the need for hard, clear community standards around automated contributions to prevent the erosion of the social capital and trust accumulated by the open-source ecosystem.
Future of Open Source at the AI Summit
Switzerland’s chairmanship at the next AI Summit in 2027 will provide a venue to test whether the “middle power” framing discussed in New Delhi can be institutionalized. Swiss policy has signaled an interest in using open-source architecture as a pragmatic tool for digital sovereignty and global competition. A key aspect of this trajectory is the growing preference for open-source in public procurement; just as Indian policies have long preferred open-source for public infrastructure, the Swiss chairmanship may further the dialogue on making open-source the default for publicly funded code to ensure transparency and reuse. By exploring initiatives such as the Swiss open-source Apertus model, future summit discussions are also likely to bring focus on the feasibility of treating open-source foundation models as shared infrastructure to suit local contexts and reduce duplicative costs.
Conclusion
Discussions from the India AI Impact Summit 2026, therefore, reflect a broader shift toward treating openness as a foundational organizing principle for global AI coordination. While the tension between innovation and public safety that often surrounds discussions on open-source AI will likely remain a core debate, the more pressing question is no longer whether to embrace open-source AI, but how countries can use its tools and standards to adapt foundational technologies to their own contexts and needs.
About the Author
Shruti Mittal is a research analyst in the Technology and Society Program at Carnegie India. Her work focuses on semiconductor industrial policy, AI governance, and open approaches to development for the Global South.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.