staff
Hadrien Pouget
Associate Fellow, Technology and International Affairs Program

about


Hadrien Pouget is an associate fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. With a background in technical AI research, he studies practical aspects of AI policy and regulation, focusing on the EU's AI Act and the role technical standards will play in AI governance. His writing has appeared in Lawfare, the Ada Lovelace Institute Blog, and Bandwidth, and he has been a guest on the ChinaTalk podcast.

Previously, he worked as a research assistant in the computer science department at the University of Oxford, where he published several papers on the testing and evaluation of machine learning systems.


areas of expertise
education
MPhil, Technology Policy, University of Cambridge
MCompSci + BS, Computer Science, University of Oxford
languages
English, French

All work from Hadrien Pouget

event
Advancing a Transatlantic AI Agenda
April 16, 2024

Great powers around the world have entered a race for AI supremacy. In the EU, the United States, China, and India, policymakers are putting forward competing frameworks to regulate AI globally while trying to achieve technological superiority.

article
AI and Product Safety Standards Under the EU AI Act

For the EU’s Artificial Intelligence Act to set a global benchmark for AI regulation, the resulting standards need to balance detail and legal clarity with the flexibility to adapt to emerging technologies.

March 5, 2024
commentary
Biden’s AI Order Is Much-Needed Assurance for the EU

It shows that Washington is an active partner in regulating advanced AI systems.

November 1, 2023
article
A Letter to the EU’s Future AI Office

Once the EU’s AI Act becomes law, the EU faces a long journey to successfully implementing it. We have a message for the artificial intelligence office that will likely be created to help along the way, as well as for others involved in the implementation process.

October 3, 2023
article
Reconciling the U.S. Approach to AI

America’s AI policy has been—and likely will remain—a mosaic of individual agency approaches and narrow legislation rather than a centralized strategy.

in the media
Europe’s AI Act Nears Finishing Line—Worrying Washington

It is understandable that the potential broadening of the EU AI Act's scope makes the United States nervous. As its domestic conversation around AI risks matures, Washington should come to the EU with targeted suggestions.

May 1, 2023
CEPA
in the media
What Will the Role of Standards Be In AI Governance?

Why standards are at the centre of AI regulation conversations, and the challenges they raise

April 5, 2023
Ada Lovelace Institute
in the media
The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist

Efforts to regulate artificial intelligence must aim to balance protecting the health, safety, and fundamental rights of individuals while reaping the benefits of innovation.

January 12, 2023
Lawfare
in the media
Standard Setting: The EU AI Act

“Harmonised standards” play an important role in EU legislation by turning essential requirements, which are at times vague, into concrete technical requirements.

December 20, 2022
Future of Life Institute
in the media
Institutional Context: The EU AI Act

As unprecedented as the EU AI Act is, it remains fundamentally a piece of EU legislation. Much of it is borrowed from common EU frameworks, to the extent that it cannot be properly understood without this broader context.

December 20, 2022
Future of Life Institute