On Monday, President Joe Biden’s administration released one of its most ambitious acts of economic and technological policymaking. In an “interim final rule” whose wonky title—a “Framework for Artificial Intelligence Diffusion”—belies its importance, the Biden administration has sought to reshape the international AI landscape. The rule seeks to set the export and security terms for the AI market that will produce the world’s most powerful technological systems in the coming years.
The rule tightens control over sales of AI chips and turns them into a diplomatic tool. It seeks to enshrine and formalize the use of U.S. AI exports as leverage to extract geopolitical and technological concessions. And it is the Biden administration’s latest attempt to limit Chinese access to the high-end chips that are critical to training advanced AI models.
With a new administration taking office in a week, the rule’s ultimate impact is uncertain. President-elect Donald Trump and his staff will no doubt take a fresh look at how—or whether—to regulate the export of advanced U.S. AI technology. But as they do so, they too will have to reckon with the underlying national security pressures and economic incentives that drove the Biden administration’s development of this policy.
What’s in the Rule?
The rule creates a global licensing regime for the export of advanced AI chips and the parameters that encode a frontier AI system’s core intelligence, known as its model weights. It seeks to encourage AI development in friendly nations and incentivize businesses around the world to adopt U.S. standards. To do so, it creates three tiers of semiconductor and model weight restrictions to govern the sale of AI chips used in data centers.
In tier one, a small group of eighteen allies will maintain essentially unrestricted access to U.S. chips. That group includes the other four countries in the Five Eyes intelligence partnership (Australia, Canada, New Zealand, and the United Kingdom), major partners with key roles in the AI value chain (such as Japan, the Netherlands, South Korea, and Taiwan), and close NATO allies. The vast majority of the world will fall in a middle tier and will face limits on the total computing power they can import, unless that computing power is hosted in trusted and secure environments. In tier three, a group of adversaries will be effectively blocked from importing chips, in essentially no change from the status quo.
Companies can deploy as much computing power as they want in tier one countries. If they are headquartered in tier one countries and wish to expand elsewhere, they can apply for a so-called universal validated end user designation. That status gives those companies—think of the cloud-service hyperscalers such as Amazon, Google, and Microsoft—blanket permission to deploy chips to data centers in most other countries, so long as they follow relatively straightforward security requirements. They must also keep at least half of their total computing power on U.S. soil, deploy no more than a quarter of it outside of tier one countries, and place no more than 7 percent in any single tier two country. These restrictions allow U.S. tech companies to continue to make large AI infrastructure investments in most of the world, without requiring case-by-case licensing authorizations, while ensuring that the world’s computing power and its most critical data centers largely remain within the United States and its closest partners.
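To make those ceilings concrete, here is a minimal illustrative sketch, not drawn from the rule’s text: it checks an invented deployment plan against the three limits described above (at least half of computing power in the United States, no more than a quarter outside tier one, no more than 7 percent in any single tier two country). The country names, the shares, and the simplification of treating “computing power” as simple percentages are all assumptions made for illustration.

```python
# Illustrative sketch only: checks an invented deployment plan against the three
# allocation ceilings described above for universal validated end users.
# Country names, shares, and the percentage-based simplification are assumptions.

TIER_ONE = {"United States", "Japan", "United Kingdom", "Netherlands"}  # partial, illustrative list

def check_universal_veu_plan(plan: dict[str, float]) -> list[str]:
    """plan maps country -> share of the company's total computing power (shares sum to 1.0)."""
    problems = []
    us_share = plan.get("United States", 0.0)
    outside_tier_one = sum(share for country, share in plan.items() if country not in TIER_ONE)

    if us_share < 0.50:
        problems.append(f"only {us_share:.0%} in the U.S.; at least 50% must stay on U.S. soil")
    if outside_tier_one > 0.25:
        problems.append(f"{outside_tier_one:.0%} outside tier one exceeds the 25% ceiling")
    for country, share in plan.items():
        if country not in TIER_ONE and share > 0.07:
            problems.append(f"{share:.0%} in {country} exceeds the 7% single-country ceiling")
    return problems

# Invented plan: 55% in the U.S., 20% elsewhere in tier one, 25% spread across tier two.
plan = {"United States": 0.55, "Japan": 0.12, "United Kingdom": 0.08,
        "India": 0.09, "United Arab Emirates": 0.09, "Brazil": 0.07}
print(check_universal_veu_plan(plan))  # flags India and the UAE, each above the 7% ceiling
```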
The vast majority of countries fall into the second tier. This group faces caps on the total computing power that can go to any one state: roughly 50,000 advanced AI chips through 2027, although that cap can double if the state reaches an agreement with the United States. Individual companies headquartered in tier two countries, such as the Emirati tech giant G42, can access significantly higher limits if they apply for their own national validated end user status. That process involves making verifiable security commitments, both physical and cyber, and providing assurances that they will not use those chips in ways that violate human rights (for example, by deploying them for large-scale surveillance purposes). If a company obtains this status, its chip imports won’t count toward the country’s overall cap—a move designed to create incentives for foreign firms to adopt U.S. AI standards.
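A similarly rough sketch of the country-level arithmetic, under the rule as described above: a 50,000-chip baseline through 2027, a doubling for states that reach an agreement with Washington, and an exemption for imports by validated end users. The shipment figures and the simplified tallying below are invented for illustration.

```python
# Illustrative sketch only: tallies a tier two country's remaining headroom under
# the cap described above. The 50,000-chip baseline and the doubling with a
# government-to-government agreement come from the rule as described here; the
# shipment figures and data layout are invented.

BASE_CAP = 50_000  # advanced AI chips through 2027, per tier two country

def remaining_country_headroom(shipments: list[dict], has_us_agreement: bool) -> int:
    """Each shipment: {"chips": int, "to_validated_end_user": bool}.
    Imports by national validated end users do not count toward the national cap."""
    cap = BASE_CAP * 2 if has_us_agreement else BASE_CAP
    counted = sum(s["chips"] for s in shipments if not s["to_validated_end_user"])
    return cap - counted

# Invented shipments to a single tier two country:
shipments = [
    {"chips": 30_000, "to_validated_end_user": False},  # counts toward the national cap
    {"chips": 80_000, "to_validated_end_user": True},   # exempt: buyer holds validated end user status
]
print(remaining_country_headroom(shipments, has_us_agreement=False))  # 20000
```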
The new rule also limits the export and overseas training of proprietary AI model weights above a certain threshold, which no existing model yet meets. After a year to adjust, companies will have to abide by security standards to host the model weights of powerful AI systems in tier one or tier two countries. But no open weight models—models whose weights are freely available for anyone to download and use—are affected by these restrictions, and the thresholds for controlled models automatically adjust upward as open weight models advance. Overall, the requirements for model weights are less burdensome than leaked versions of the regulation suggested they might be.
The rule is a complex and ambitious piece of economic policymaking. But elements of the framework may appear more dramatic at first glance than they truly are. After all, the majority of global AI compute capacity is already concentrated in the United States, and U.S. compute providers already dominate the global market for cloud infrastructure. Nonetheless, the framework will undoubtedly affect future infrastructure decisions, restricting industry’s ability to plan large-scale clusters of computing power in tier two countries and incentivizing the construction of additional data centers within the United States.
How Did We Get Here?
The rule is the Biden administration’s attempt to answer a question that will soon become central to U.S. foreign policy and economic strategy: How widely should the United States share its AI technologies? It is an attempt to thread the needle between two competing priorities.
On the one hand, the rapidly growing global appetite for U.S. AI technology can generate vast revenues for U.S. tech companies and help lure states that have been drifting toward a Chinese economic ecosystem back into a U.S. technological sphere of influence. This consideration creates powerful incentives for U.S. companies to export products and governance standards overseas as rapidly as possible—to “flood the zone,” as the software giant Oracle has put it. And it encourages U.S. officials to greenlight those exports to places like the Gulf states, which are increasingly interested in acquiring advanced U.S. AI technology, or to Southeast Asia, where governments are making major investments in data centers filled with Nvidia chips.
But on the other hand, U.S. policymakers are worried about the proliferation of powerful AI systems that might have critical national security implications. If AI becomes the central strategic technology of the coming years—a technology that might unlock profound economic and military breakthroughs—then U.S. officials want to ensure that the United States and its closest allies retain physical control of the ability to develop and deploy the most capable systems. This imperative cuts in favor of preventing the broad diffusion of cutting-edge AI technology to anyone that isn’t a trusted U.S. ally or partner.
The rule tries to find a compromise between these two considerations. It has its origins in export controls the Biden administration introduced in 2023 that expanded restrictions on the sale of top-end AI chips beyond China to several other countries. These countries included some in the Middle East, such as the UAE and Saudi Arabia, that are hungry for access to U.S. computing power to fuel their AI ambitions but also have close ties with China and other U.S. competitors. U.S. officials were worried that Chinese institutions might be accessing AI chips remotely, circumventing U.S. controls by using cloud computing services overseas or building data centers under shell companies in countries that could still import chips. To mitigate those risks, Washington required countries like the UAE to obtain a license before they could purchase chips. It ultimately granted some of those licenses, but only after months of negotiations that culminated in G42 divesting from Chinese firms, stripping out its Huawei technology, and partnering with Microsoft in exchange for access to Nvidia chips.
In the view of Microsoft and the Biden administration, that deal represented an important achievement: It brought a key regional swing state closer to the United States, luring it away from China’s tech champions, and it encouraged the UAE to adopt a variety of security protocols and practices necessary for responsible AI development—all under the supervision of a major U.S. tech company. But it also took months to negotiate, to the frustration of most parties. Doing this on a country-by-country, deal-by-deal basis seemed far too cumbersome and difficult to scale.
U.S. officials wanted to introduce a framework that would replace elements of this ad hoc licensing process with a bulk, standardized approval system. To limit the risks of diversion to or remote access by China, U.S. policymakers sought to use American cloud providers as gatekeepers for AI access in these countries. Through the expansion of the validated end user program, they wanted to allow a broader group of trusted local entities with a track record of safeguarding against theft and misuse to play a similar gatekeeping role. They tried to replace a laborious case-by-case licensing approach to acquiring chips in places like the Middle East with a single authorization that, once obtained, would allow a local company to move forward efficiently with its investments while facing fewer requirements. And they wanted to do all this while ensuring that U.S. tech companies would not enable the offshoring of American computing power to nondemocratic, nonallied states.
Much of the rule’s conceptual basis is sound. But it’s also a highly complex and lengthy piece of policymaking, filled with compromises and carve-outs that accrued during a long and at times contentious interagency process. It will require competent bureaucratic administration to avoid excessive licensing delays. And the rule makes sense only in conjunction with reforms to the U.S. permitting and energy policy landscape that will enable the domestic AI infrastructure buildout envisaged by the rule to proceed.
What Is the Reaction?
Published only today, the rule represents a complex intervention in a global market, and more reactions will follow as experts, lawyers, and companies dig into the details. But some of the contours are clear already.
For starters, the rule has faced strong—at times vitriolic—opposition from some U.S. companies that warn of serious unintended consequences. Nvidia and Oracle, in particular, have lobbied intensely against the rule. Oracle described it as “the most destructive to ever hit the U.S. technology industry” and as “by far history’s worst government technology idea.”
Some of this opposition is self-serving. The rule restricts those companies’ ability to sell their products, creates new compliance requirements, and forces them to rethink some of the locations for their international data center expansions. But some industry opposition is grounded in a broader strategic logic: These companies argue that the rule’s primary effect will be to hand most of the global AI market to Chinese competitors such as Alibaba, Huawei, and Tencent. By imposing too many restrictions and procedural hurdles on tier two countries—the vast majority of the world—the rule, they contend, will simply encourage those countries to turn to China, the only viable alternative in the AI marketplace, for access to AI chips.
Worries about China’s ability to “backfill” AI chip orders—to step in as an alternative supplier for tier two countries reluctant to jump through U.S. hoops—played a prominent role in the administration’s deliberations over the past year. But in the administration’s view, for all the impressive progress Chinese companies have made in AI model development, there is no evidence they have the capacity to export large numbers of AI chips now or in the near future. Huawei, for example, does not appear to have built any data centers equipped with its own advanced AI chips outside of China, and China continues to expend enormous effort stockpiling, smuggling, and importing lower-quality Western AI chips for its own domestic purposes. So long as its domestic needs remain unmet, U.S. officials argue, China will not be exporting many cutting-edge AI chips—giving the United States substantial leverage to demand the adoption of U.S. standards and security assurances in exchange for access to U.S. computing power. That leverage may erode as China builds the capacity to manufacture advanced chips at scale, but for now, U.S. firms remain by far the dominant players when it comes to the export of AI-capable hardware.
The U.S. government will need to monitor the international reaction carefully. Almost two dozen NATO states find themselves in tier two, as does India—a key Asian partner and a country with major technological ambitions. What U.S. policymakers might view as responsible efforts to ensure the secure development and deployment of powerful AI systems might look to other nations like the coercive imposition of U.S. values. The rule might reinforce the narrative that the United States is determined to entrench the long-term dominance of U.S. tech firms while riding roughshod over the autonomy of much of the world. Some diplomatic tensions are inevitable in any attempt to balance responses to legitimate national security risks with support for global business expansion, though the rule’s impact on most nations will be limited: It primarily constrains the few countries with the ambition and capability to build major AI data centers, and it does not affect their ability to engage in and reap the benefits from most elements of the AI value chain.
The domestic political reaction, meanwhile, has been mixed. Some Republican voices have attacked the rule, but it has also attracted bipartisan support. In the Wall Street Journal, Anthropic CEO Dario Amodei and former Trump deputy national security advisor Matt Pottinger hailed the rule’s approach. And in a letter released on January 2, the House Select Committee on the Chinese Communist Party “strongly encourage[d]” the administration to move forward. The rule enjoys support from a broad coalition of China hawks, and its framework echoes elements of Trump’s transactional approach to foreign policy. But as with so much about the incoming administration, the rule’s ultimate longevity is hard to predict.
If the Trump administration does scrap the rule, it will have to come up with its own approach. It will have to figure out how it wants to prevent Chinese circumvention of U.S. export controls through chip smuggling and remote access. It will have to develop its own plan for sharing sensitive U.S. technologies with swing states that also have expanding ties with China. It will have to design its own way to ensure that the United States and its close allies set the norms and standards by which AI is employed while retaining control of critical AI data centers. The framework outlined in this rule is not the only way to achieve these goals. But it is an approach that the next administration should study carefully.