

Commentary

Biden’s AI Order Is Much-Needed Assurance for the EU

It shows that Washington is an active partner in regulating advanced AI systems.

By Hadrien Pouget
Published on Nov 1, 2023

On Monday, President Joe Biden’s administration released a first-of-its-kind executive order (EO) tackling the risks of advanced AI systems: it requires those developing such systems to share information with the government and initiates the development of standards.

The EO is not just important domestically—it is also a signal to the international community that the United States intends to take action on AI governance. This signal is especially loud for the EU, which is in the final stages of negotiating its sweeping AI legislation, the AI Act. The regulation of advanced AI systems is one of the final controversies holding up the proposal, with some member states concerned that unilaterally imposing regulations in the EU would curtail the growth of the region’s own industry and favor U.S. companies. Confidence in the United States as a partner in regulation could help assuage those concerns, and details from the EO are a useful reference point, simplifying the EU’s negotiations and setting the scene for international cooperation on governance.

Since the first draft of the AI Act was introduced in April 2021, the EU has sought to lead the global conversation on AI governance. It was well placed to do so, regulating one of the world’s three largest markets and proposing legislation that would be the broadest regulation yet of this technology. When ChatGPT was released in November 2022 and brought the impressive capabilities of large, general models into the public eye, the EU responded by including these advanced AI systems in the scope of the act. Dubbed “foundation models,” a category intended to cover OpenAI’s GPT-4 and Anthropic’s Claude, these systems would have a unique set of requirements to adhere to.

However, while the EU was generally happy to move ahead of the world on other AI issues, regulating foundation models proved more controversial. Hoping to cultivate their own powerful AI systems through startups such as France’s Mistral AI and Germany’s Aleph Alpha, some member states worry about the legislation’s impact on their nascent industries. Calls for stronger requirements were seen as led by dominant U.S. AI companies hoping to entrench their lead by building a regulatory moat. Skepticism that the United States would take any meaningful legislative or regulatory action against its own AI industry has further diluted some EU actors’ motivation to act. Nevertheless, many in the EU still want to impose requirements, and negotiators are now struggling to agree on how stringent the requirements for foundation models should be.

At the same time, the EU is under pressure to finalize the act soon. Without a final position, EU member states are left in limbo when operating on the international scene, where efforts are proliferating. The UK’s AI Safety Summit will start on Wednesday, the G7 has released a code of conduct for those developing advanced AI systems, and the UN has announced a high-level advisory body on AI to explore different courses of action. In addition, if the act is not finalized before the end of the year, it could be pushed back to late 2024 as the European Parliament’s elections take center stage.

Against this backdrop, the EO helps the EU in two ways. First, the order demonstrates to the EU that it will not be alone in imposing restrictions on advanced AI systems. The voluntary commitments the Biden administration secured from leading AI labs were cold comfort to Europeans. By contrast, the EO represents a move toward enforceable requirements from the U.S. government and shows that Washington is willing to restrict U.S. AI companies. It is still not perfect—the EO must work within the U.S. government’s existing authorities and is limited in the kinds of restrictions it can impose—but it’s an important step. New legislation from Congress remains crucial for the future.

Second, the details of the executive order will be useful. Like others, the EU is still grappling with this new and fast-moving technology and struggling to define which advanced AI systems should be regulated and how. The definitions and categories established in the EO could be a helpful reference (including a technical definition for “potential dual-use” AI), and the list of information the EO requires from AI companies under the Defense Production Act could also be mirrored by the EU’s legislation. More generally, the EO remains flexible in its definitions and sets up several processes to refine them and establish precise requirements in different sectors. The EU could act similarly by setting up the legislative structure for requirements while leaving the details to be filled out in the future via “delegated” or “implementing” acts—EU legislative tools designed to do exactly this. It could then work with the United States and other international partners to develop effective standards.

The EO is not a blueprint for EU action, and because it is not legislation, it is not the U.S. version of the AI Act. However, the message it sends and the approaches it takes are still important. Whatever requirements the EU ultimately puts in place won’t be in perfect alignment with the EO—they’ll have a unique EU spin—but this executive order at least signals to EU member states that they’re not acting alone.

About the Author

Hadrien Pouget

Former Associate Fellow, Technology and International Affairs Program

Hadrien Pouget was an associate fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.

