Commentary
Carnegie India

What’s Next for U.S. AI Policy?

This commentary explores the likely actions of the Trump administration and driving forces on issues of deregulation, the United States’ leadership in AI, national security, and global engagements on AI safety.

By Shatakratu Sahu and Amlan Mohanty
Published on Dec 18, 2024

Project

Technology and Society

This program focuses on five sets of imperatives: data, strategic technologies, emerging technologies, digital public infrastructure, and strategic partnerships.


Donald Trump’s win in the recent presidential election has put the future of the United States’ policy on artificial intelligence (AI) into sharp focus. There are many moving pieces, including new appointments, shifting geopolitical forces, and potential AI breakthroughs that will shape AI policy under the president-elect.

Some analysts have suggested that the new administration will adopt a hands-off approach to AI regulation, strengthen export controls, and promote the use of AI by the U.S. military and intelligence agencies.

This essay advances the discussion by exploring the likely actions of the Trump administration and driving forces on issues of deregulation, the United States’ leadership in AI, national security, and global engagements on AI safety.

The Deregulation Agenda

All indications are that Trump will focus on deregulation to spur innovation and improve the United States’ global competitiveness. For example, the Trump administration is expected to repeal the Biden-era Executive Order on Safe, Secure, and Trustworthy Development and Use of AI (Biden EO) in its current form, as the president-elect has publicly declared and his election manifesto explicitly states. The Biden EO, issued in October 2023, is a comprehensive regulatory framework that mandates federal agencies to report on their use of AI, implement testing protocols, and enforce accountability measures.

The repeal of the Biden EO will influence AI governance globally. India, for instance, which has looked to the executive order as a model for its own AI governance framework, may be prompted to reevaluate its approach.

If the Biden EO is repealed, the Biden-era voluntary commitments to manage AI risks will assume greater significance. While leading companies have made some progress under these voluntary frameworks, experts have highlighted that industry-wide compliance has been partial.

Companies are unlikely to walk back their voluntary commitments under the new administration, as some of these, such as security testing of unreleased AI systems, are considered both a business and a policy imperative. However, deregulation could further reduce the pressure on companies to be transparent about other issues, such as their governance structures and third-party audits of their AI systems.

AI Leadership Goals, National Security, and Competition With China

While Trump’s deregulation agenda differs from Biden’s in some respects, there is common ground between the two presidents on the desire for American leadership on AI and a focus on national security.

During his first presidential term in 2019, Trump issued the Executive Order on Maintaining American Leadership in AI (Trump EO) geared toward enhancing economic and national security. The Trump EO aligns with Biden’s recent National Security Memorandum of October 2024 (NSM), which prioritizes U.S. leadership on AI through supply chain diversity and security, developing next-generation supercomputers, and integrating AI within the national security apparatus. For these reasons, the Trump administration is likely to support the broad goals of the NSM and its “America First” approach.

Further, export controls targeting China are likely to gain momentum. These may include tighter controls on the latest AI resources, such as High Bandwidth Memory 3 (HBM3) chips. Reports of the Chinese military using Meta’s open-source AI model Llama 2 may further bolster this policy priority.

Carnegie India’s recent Global Technology Summit (GTS) Innovation Dialogue in Bengaluru hosted a closed-door discussion on “U.S.-India Tech Partnership: Opportunities,” which highlighted how U.S. export controls, though not targeted at India or the broader Global South, have unintended spillover effects on them. In particular, export control restrictions on open-source foundation models pose significant risks to Indian developers who rely on these tools to build “sovereign AI” solutions. Discussants pointed to the technical difficulty of implementing such export controls, especially for AI models already in distribution, and suggested that risks from open-source models be addressed through collaborative approaches to AI governance rather than export control restrictions.

Multilateral Engagements on AI Safety

Trump’s approach to international cooperation indicates a reluctance toward multilateral engagement, as displayed by the U.S. withdrawal from the Paris Climate Agreement, the disruption of the World Trade Organization process, and Trump’s suggestion to “quiet quit” the NATO alliance.

However, the multilateral network of AI safety institutes (AISIs), focused on evaluating the risks of advanced AI models, may remain functional under the Trump administration. The network has gained momentum following its official launch meeting in San Francisco in November 2024. Participation from countries like India, which is looking to set up its own AISI, demonstrates the network’s growing influence and the U.S. AISI’s leadership role in it. Additionally, the U.S. AISI’s position appears secure, given its bipartisan support and strategic value. From an “America First” perspective, maintaining the U.S. AISI’s role is practical: if U.S.-based AI models are not evaluated domestically, they risk assessment by AISIs in the UK, EU, Singapore, and elsewhere.

Discussions at the GTS Innovation Dialogue highlighted how India and the United States could develop a shared technical understanding of AI and uniquely collaborate on AI safety—the U.S. AISI could help evaluate the safety of “upstream” foundational models while India assesses the safety of “downstream” applications in its local context. However, discussants cautioned that the definition of “safety” should align with India’s local needs and reflect its pro-innovation and development agenda using AI.

Conclusion

With varying degrees of uncertainty on different aspects of Trump’s AI agenda, much remains to be seen from the new administration on the issues analyzed in this essay. Additionally, the unpredictable pace and nature of AI innovation will make risk mitigation a critical challenge. These uncertainties will shape not only AI development in the United States but also technological trajectories across the globe.

About the Authors

Shatakratu Sahu

Former Senior Research Analyst and Senior Program Manager, Technology and Society Program

Shatakratu Sahu was a senior research analyst and senior program manager with the Technology and Society program at Carnegie India.

Amlan Mohanty

Fellow, Technology and Society Program

Amlan Mohanty was a fellow with Carnegie India. His areas of expertise include privacy, content policy, platform regulation, competition, and AI.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
