Commentary
Carnegie India

What’s Next for U.S. AI Policy?

This commentary explores the likely actions of the Trump administration, and the forces driving them, on deregulation, U.S. leadership in AI, national security, and global engagement on AI safety.

By Shatakratu Sahu and Amlan Mohanty
Published on Dec 18, 2024

Donald Trump’s win in the recent presidential election has put the future of the United States’ policy on artificial intelligence (AI) into sharp focus. There are many moving pieces, including new appointments, shifting geopolitical forces, and potential AI breakthroughs that will shape AI policy under the president-elect.

Some analysts have suggested that the new administration will adopt a hands-off approach to AI regulation, strengthen export controls, and promote the use of AI by the U.S. military and intelligence agencies.

This essay advances the discussion by exploring the likely actions of the Trump administration, and the forces driving them, on deregulation, U.S. leadership in AI, national security, and global engagement on AI safety.

The Deregulation Agenda

All indications are that Trump will focus on deregulation to spur innovation and improve the United States’ global competitiveness. For example, the Trump administration is expected to repeal the Biden-era Executive Order on Safe, Secure, and Trustworthy Development and Use of AI (Biden EO) in its current form, as publicly declared by the president-elect and explicitly stated in his election manifesto. The Biden EO, issued in October 2023, is a comprehensive regulatory framework requiring federal agencies to report on their use of AI, implement testing protocols, and enforce accountability measures.

The repeal of the Biden EO will influence AI governance globally. India, for instance, which has looked to the executive order as a model for its own AI governance framework, may be prompted to reevaluate its approach.

With a possible repeal of the Biden EO, the Biden-era voluntary commitments to manage AI risks will assume greater significance. While leading companies have made some progress under these voluntary frameworks, experts have highlighted that industry-wide compliance has been partial.

Companies are unlikely to walk back their voluntary commitments under the new administration, as some of these, such as security testing of unreleased AI systems, are considered both a business and a policy imperative. However, deregulation could further reduce the pressure on companies to be transparent on other issues, such as their governance structures and third-party audits of their AI systems.

AI Leadership Goals, National Security, and Competition With China

While Trump’s deregulation agenda differs from Biden’s in some respects, there is common ground between the two presidents on the desire for American leadership on AI and a focus on national security.

During his first presidential term in 2019, Trump issued the Executive Order on Maintaining American Leadership in AI (Trump EO) geared toward enhancing economic and national security. The Trump EO aligns with Biden’s recent National Security Memorandum of October 2024 (NSM), which prioritizes U.S. leadership on AI through supply chain diversity and security, developing next-generation supercomputers, and integrating AI within the national security apparatus. For these reasons, the Trump administration is likely to support the broad goals of the NSM and its “America First” approach.

Further, export controls targeting China are likely to gain momentum. This may include tighter controls on the latest AI resources, such as High Bandwidth Memory 3 (HBM3) chips. Reports of the Chinese military using Meta’s open-source AI model Llama 2 may further bolster this policy priority.

Carnegie India’s recent Global Technology Summit (GTS) Innovation Dialogue in Bengaluru hosted a closed-door discussion on “U.S.-India Tech Partnership: Opportunities,” which highlighted how U.S. export controls, though not targeted at India or the broader Global South, produce unintended spillover effects there. In particular, export control restrictions on open-source foundation models pose significant risks to Indian developers who rely on these tools to build “sovereign AI” solutions. Discussants highlighted the technical difficulty of implementing export control measures, especially for AI models already in distribution, and suggested collaborative approaches to AI governance to address the risks of open-source models instead of imposing export control restrictions.

Multilateral Engagements on AI Safety

Trump’s approach to international cooperation indicates a reluctance toward multilateral engagement, as displayed in the withdrawal of the U.S. from the Paris Climate Agreement, disruption of the World Trade Organization process, and Trump’s suggestion to “quiet quit” the NATO alliance.

However, the multilateral network of AI safety institutes (AISIs), focused on evaluating the risks of advanced AI models, may remain functional under the Trump administration. The network has gained momentum following its official launch meeting in San Francisco in November 2024. Participation from countries like India, which is looking to set up its own AISI, demonstrates the network’s growing influence and the U.S. AISI’s leadership role in it. Additionally, the U.S. AISI’s position appears secure, given its bipartisan support and strategic value. From an “America First” perspective, maintaining the U.S. AISI’s role is practical: if U.S.-based AI models are not evaluated domestically, they risk assessment by AISIs in the UK, EU, Singapore, and elsewhere.

Discussions at the GTS Innovation Dialogue highlighted how India and the United States could develop a shared technical understanding of AI and uniquely collaborate on AI safety—the U.S. AISI could help evaluate the safety of “upstream” foundational models while India assesses the safety of “downstream” applications in its local context. However, discussants cautioned that the definition of “safety” should align with India’s local needs and reflect its pro-innovation and development agenda using AI.

Conclusion

With varying degrees of uncertainty across different aspects of Trump’s AI agenda, much remains to be seen during the new administration on the issues analyzed in this essay. Additionally, the unpredictable pace and nature of AI innovation will make risk mitigation a critical challenge. These uncertainties will shape not only AI development in the United States but also technological trajectories across the globe.

Authors

Shatakratu Sahu
Former Senior Research Analyst and Senior Program Manager, Technology and Society Program

Amlan Mohanty
Fellow, Technology and Society Program

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
