The integration of artificial intelligence (AI) into critical infrastructure across sectors—from healthcare and transportation to defense and financial services—has heightened cybersecurity challenges. AI systems expand the attack surface of critical infrastructure, making it more susceptible to supply chain and dataset attacks such as data poisoning. These threats can disrupt critical services, impacting national security and economic stability. Given the global scale of AI advancement and adoption, it is imperative that cooperation on safeguards against these threats remains global and built on shared values.
Carnegie India’s roundtable on “Cyber Securing AI Infrastructure” brought together senior policymakers, cybersecurity experts, and industry leaders to examine these emerging threats and develop frameworks for international cooperation in securing AI infrastructure. The objectives of the session were twofold: understanding the challenges to securing AI infrastructure in cyberspace and exploring ways to foster international cooperation.
The discussion identified five critical areas requiring immediate attention:
- the expanding attack surface created by AI integration;
- fragmented global regulatory standards;
- the need for capacity building in developing nations;
- data governance as the foundation of AI security;
- and the delicate balance between national security imperatives and international humanitarian obligations.
Takeaways from these key themes are as follows:
Integration of AI into Cyber Infrastructure
AI systems are being rapidly embedded into various digital and public infrastructures across the Indo-Pacific. Examples include Singapore’s AI strategy for healthcare and transport, Australia’s cybersecurity strategy for energy and emergency services, Indonesia’s smart city program for waste and traffic management, and India’s digital public infrastructure (DPI) initiatives like UPI and FASTag. While these efforts improve service delivery, the accompanying cyber risks must not be overlooked, especially where AI is adopted without robust safeguards. As AI begins to control physical processes through integration with operational technologies, risks to system outputs and citizen safety grow. The discussion emphasized the need for better procurement practices, enhanced transparency in black-box AI systems, and greater public sector oversight. Participants also questioned whether India’s existing DPI has built-in cybersecurity frameworks for AI incorporation or relies on ad hoc fixes, a distinction that would shape its overall risk profile.
Issue of Fragmented Regulations and Standards
The components that make up cyber infrastructure, such as undersea cables and data centres, are globally interconnected, meaning risks posed by AI integration can quickly transcend borders. While several global cybersecurity standards exist, their lack of harmonization presents challenges, and the integration of AI into cyber processes has complicated harmonization further. Participants underscored the need for aligned cybersecurity standards and frameworks (including on the use of AI in cyber processes) to facilitate international cooperation. Participants suggested using AI safety institutes to test AI models for cybersecurity risks and to develop common standards. However, there was limited consensus on what such global alignment could look like.
Importance of Reducing Compliance Cost and Facilitating Capacity Building
Participants noted that countries should first build capabilities and identify risks, and only then create regulations, in order to ensure regulatory clarity and foster innovation. It was emphasized that regulations cannot be created in a vacuum and must align with existing capacities. Participants also emphasized that countries must retain the flexibility to build AI capabilities suited to their own contexts. In practice, this means a country would rely on its domestic sector and micro, small, and medium enterprises (MSMEs) to build and incorporate AI models and applications. Participants noted that MSMEs incorporating AI often lack the resources of large enterprises or Big Tech to ensure compliance and thus require simplified compliance models. Reducing the cost and complexity of cybersecurity requirements, especially those arising from the integration of AI, is crucial. The discussion emphasized the need to facilitate capacity building that reduces compliance costs while addressing the cybersecurity risks of AI infrastructure.
Securing Data as a Foundation for Cybersecurity
Participants identified data protection as the bedrock of a secure AI infrastructure. India, with its expansive market and comprehensive inventory of AI use cases, is well-positioned to become a proving ground, but it must first ensure strong data security frameworks. Data governance is key, as AI relies on vast amounts of structured and sensitive data, and weak data protection can lead to compromised AI systems. Concerns were also raised over the use of foreign datasets from the United States or China in Indian AI models, especially in sensitive sectors like healthcare. Data centres were highlighted as critical vulnerabilities due to their central role in the AI stack, requiring integrated strategies covering their data, network, and physical security.
The need to develop a sovereign data infrastructure tailored to India’s multilingual and multicultural context was emphasized. Developing sovereign, contextualized datasets for AI could also serve as an example for other developing and emerging countries. Strong privacy principles and data protection rules were noted to be helpful in anchoring national efforts on AI security. Some participants argued that India’s Digital Personal Data Protection (DPDP) Act, passed in 2023, is inadequate for addressing AI-related issues. Others added that the DPDP Act was never intended to govern AI, and that India needs a new regulatory framework for governing data for AI infrastructure.
Balancing National Security and International Humanitarian Law
Participants highlighted the importance of AI infrastructure for defense and national security. There is some hesitancy around fully integrating AI into military and defense systems, owing to cyber vulnerabilities. AI-enabled cyber threats become harder to trace and pose attribution challenges. However, not incorporating AI to its fullest potential for defense would be a missed opportunity.
It was noted that countries like China are investing in AI-driven cyber and electronic capabilities, seen as key to future conflicts. With adversaries advancing rapidly, India must build AI-driven cyber-offensive capabilities while also using AI for cyber resilience. Participants called for urgent upgrades to India’s cyber command functions, currently carried out by the Defence Cyber Agency. While emphasizing the need to develop AI for the military, participants also stressed the importance of maintaining compliance with International Humanitarian Law (IHL). The principle of distinction, which requires that technology can differentiate between military and civilian targets, must be preserved. Initiatives like the International Committee of the Red Cross’s Digital Emblem project, which aims to identify non-target civilian servers and afford them distinctive emblem status for protection under IHL, were discussed.
It was noted that tech companies need to support such efforts, as their platforms will be central in both peace and conflict. While IHL offers a balanced framework, allowing for military maneuverability without unnecessary harm, some argued that adversaries may not follow these norms, putting law-abiding militaries at a disadvantage. This prompted calls for an objective reassessment of the concept of dual-use technologies, distinguishing cyber capabilities and platforms used for military purposes from those used purely for civilian purposes.
Cybersecurity in the Space Sector
The cyber vulnerability of space infrastructure also emerged as a critical concern. Participants noted that satellites have become targets of cyberattacks, citing the Viasat hack at the onset of the Russia-Ukraine conflict in 2022. India’s space reforms in 2020 boosted private participation, but companies currently lack incentives to protect satellites from cyberattacks. Satellites also increasingly use AI-enabled edge computing for faster data processing and transfer. Given satellites’ essential role in remote sensing, navigation, and communication, their cybersecurity is central to a country’s national security. In this strategic area, the government needs to assume the primary role in building cyber defences for satellites and addressing the vulnerabilities posed by AI in the space sector. Currently, India’s Ministry of Defence, through its iDEX program, is working with the private sector to bolster cybersecurity in the space sector. Greater private sector participation is crucial to ensuring access to cutting-edge innovation in cybersecurity.
Role of the Private Sector and the Importance of Multilateral Dialogue
Private sector representatives at the discussion described how their enterprises have shifted from broad normative frameworks to practical, technical cybersecurity solutions while incorporating AI. The rise of generative AI has intensified cyber risks, especially around credential security. It was also highlighted that companies acting as both AI providers and hosts of third-party solutions face complex accountability and liability issues. Participants raised questions about how to “red team” effectively and implement data privacy tools while ensuring sustained, structured collaboration with governments and institutions. There was consensus that AI adoption and business transformation now require an understanding of regulatory environments, technology stacks, and skilled talent. These regulatory frameworks must be interoperable across borders, which can be achieved through continued multilateral dialogue. Multilateral processes similar to the UN’s Open-Ended Working Group (OEWG) on the security of ICTs (2021–2025) were cited as key to shaping cohesive international norms on the cybersecurity of AI infrastructure. With the OEWG mandate ending in 2025, participants suggested that new proposals for an OEWG at the UN focus on the nexus of AI and cybersecurity.
Mugdha Satpute and Charukeshi Bhatt provided valuable assistance in preparing this summary.