Donald Trump’s win in the recent presidential election has put the future of the United States’ policy on artificial intelligence (AI) into sharp focus. Many moving pieces, including new appointments, shifting geopolitical forces, and potential AI breakthroughs, will shape AI policy under the president-elect.
Some analysts have suggested that the new administration will adopt a hands-off approach to AI regulation, strengthen export controls, and promote the use of AI by the U.S. military and intelligence agencies.
This essay advances the discussion by exploring the Trump administration’s likely actions, and the forces driving them, on issues of deregulation, U.S. leadership in AI, national security, and global engagement on AI safety.
The Deregulation Agenda
All indications are that Trump will focus on deregulation to spur innovation and improve the United States’ global competitiveness. For example, the Trump administration is expected to repeal the Biden-era Executive Order on Safe, Secure, and Trustworthy Development and Use of AI (Biden EO) in its current form, a move the president-elect has publicly declared and his election manifesto explicitly promises. The Biden EO, issued in October 2023, is a comprehensive regulatory framework that mandates federal agencies to report on their use of AI, implement testing protocols, and enforce accountability measures.
A repeal of the Biden EO would influence AI governance globally. India, for instance, which has looked to the executive order as a model for its own AI governance framework, may be prompted to reevaluate its approach.
If the Biden EO is repealed, the Biden-era voluntary commitments to manage AI risks will assume greater significance. While leading companies have made some progress under these voluntary frameworks, experts have highlighted that industry-wide compliance has been only partial.
Companies are unlikely to walk back their voluntary commitments under the new administration, as some of these, such as security testing of unreleased AI systems, are considered both a business and a policy imperative. However, deregulation could further reduce the pressure on companies to be transparent about other issues, such as their governance structures and third-party audits of their AI systems.
AI Leadership Goals, National Security, and Competition With China
While Trump’s deregulation agenda differs from Biden’s in some respects, the two presidents share common ground in their desire for American leadership on AI and their focus on national security.
During his first presidential term in 2019, Trump issued the Executive Order on Maintaining American Leadership in AI (Trump EO), geared toward enhancing economic and national security. The Trump EO aligns with Biden’s recent National Security Memorandum of October 2024 (NSM), which prioritizes U.S. leadership on AI through diversifying and securing supply chains, developing next-generation supercomputers, and integrating AI within the national security apparatus. Given this alignment, the Trump administration is likely to support the NSM’s broad goals, which fit an “America First” approach.
Further, export controls targeting China are likely to gain momentum. These may include tighter controls on cutting-edge AI resources such as High Bandwidth Memory 3 (HBM3) chips. Reports of the Chinese military using Meta’s open-source AI model Llama 2 may further bolster this policy priority.
Carnegie India’s recent Global Technology Summit (GTS) Innovation Dialogue in Bengaluru hosted a closed-door discussion on “U.S.-India Tech Partnership: Opportunities,” which highlighted that U.S. export controls, though not targeted at India and the broader Global South, have unintended spillover effects on them. In particular, export control restrictions on open-source foundation models pose significant risks to Indian developers who rely on these tools to build “sovereign AI” solutions. Discussants highlighted the technical difficulty of implementing export controls, especially for AI models already in distribution, and suggested addressing the risks of open-source models through collaborative approaches to AI governance rather than export restrictions.
Multilateral Engagements on AI Safety
Trump’s approach to international cooperation suggests a reluctance toward multilateral engagement, as demonstrated by the United States’ withdrawal from the Paris Climate Agreement, the disruption of World Trade Organization processes, and Trump’s suggestion to “quiet quit” the NATO alliance.
However, the multilateral network of AI safety institutes (AISIs), focused on evaluating the risks of advanced AI models, may remain functional under the Trump administration. The network has gained momentum following its official launch meeting in San Francisco in November 2024. Participation from countries like India, which is looking to set up its own AISI, demonstrates the network’s growing influence and the U.S. AISI’s leadership role within it. Additionally, the U.S. AISI’s position appears secure, given its bipartisan support and strategic value. From an “America First” perspective, maintaining the U.S. AISI’s role is practical: if U.S.-based AI models are not evaluated domestically, they risk being assessed by AISIs in the UK, the EU, Singapore, and elsewhere.
Discussions at the GTS Innovation Dialogue highlighted how India and the United States could develop a shared technical understanding of AI and collaborate on AI safety in a distinctive way: the U.S. AISI could help evaluate the safety of “upstream” foundation models, while India assesses the safety of “downstream” applications in its local context. However, discussants cautioned that the definition of “safety” should align with India’s local needs and reflect its pro-innovation, development-focused agenda for AI.
Conclusion
With varying degrees of uncertainty across different aspects of Trump’s AI agenda, much remains to be seen during the new administration on the issues analyzed in this essay. Additionally, the unpredictable pace and nature of AI innovation will make risk mitigation a critical challenge. These uncertainties will shape not only AI development in the United States but also technological trajectories across the globe.