staff
Hadrien Pouget
Associate Fellow, Technology and International Affairs Program

about


Hadrien Pouget is an associate fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. With a background in technical AI research, he studies practical aspects of AI policy and regulation, with a focus on the EU's AI Act and the role technical standards will play in AI governance. His writing has appeared in Lawfare, the Ada Lovelace Institute Blog, and Bandwidth, and he has been a guest on the ChinaTalk podcast.

Previously, he worked as a research assistant in the computer science department at the University of Oxford, where he published several papers on the testing and evaluation of machine learning systems.


areas of expertise
education
MPhil, Technology Policy, University of Cambridge
MCompSci + BS, Computer Science, University of Oxford
languages
English, French

All work from Hadrien Pouget

paper
The Future of International Scientific Assessments of AI’s Risks

Managing the risks of artificial intelligence will require international coordination among many actors with different interests, values, and perceptions.

  • Hadrien Pouget
  • Claire Dennis
  • Jon Bateman
  • Robert Trager
  • Renan Araujo
  • Haydn Belfield
  • Belinda Cleeland
  • Malou Estier
  • Gideon Futerman
  • Oliver Guest
  • Carlos Ignacio Gutierrez
  • Vishnu Kannan
  • Casey Mahoney
  • Matthijs Maas
  • Charles Martinet
  • Jakob Mökander
  • Kwan Yee Ng
  • Seán Ó hÉigeartaigh
  • Aidan Peppin
  • Konrad Seifert
  • Scott Singer
  • Maxime Stauffer
  • Caleb Withers
  • Marta Ziosi
· August 27, 2024
commentary
France’s AI Summit Is a Chance to Reshape Global Narratives on AI

But Paris first must hone its alternative vision.

· July 24, 2024
event
Advancing a Transatlantic AI Agenda
April 16, 2024

Great powers around the world have entered a race for AI supremacy. In the EU, the United States, China, and India, policymakers are putting forward competing frameworks to regulate AI globally while trying to achieve technological superiority.

article
AI and Product Safety Standards Under the EU AI Act

For the EU’s Artificial Intelligence Act to set a global benchmark for AI regulation, the resulting standards need to balance detail and legal clarity with the flexibility to adapt to emerging technologies.

· March 5, 2024
commentary
Biden’s AI Order Is Much-Needed Assurance for the EU

It shows that Washington is an active partner in regulating advanced AI systems.

· November 1, 2023
article
A Letter to the EU’s Future AI Office

Once the EU’s AI Act becomes law, the EU faces a long journey to successfully implementing it. We have a message for the artificial intelligence office that will likely be created to help along the way, as well as for others involved in the implementation process.

· October 3, 2023
article
Reconciling the U.S. Approach to AI

America’s AI policy has been—and likely will remain—a mosaic of individual agency approaches and narrow legislation rather than a centralized strategy.

in the media
Europe’s AI Act Nears Finishing Line—Worrying Washington

It is understandable that the potential broadening of the scope of the EU’s AI Act makes the United States nervous. Washington should come to the EU with targeted suggestions as its domestic conversation around AI risks matures.

· May 1, 2023
CEPA
in the media
What Will the Role of Standards Be In AI Governance?

Why standards are at the centre of AI regulation conversations, and the challenges they raise.

· April 5, 2023
Ada Lovelace Institute
in the media
The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist

Efforts to regulate artificial intelligence must aim to balance protecting the health, safety, and fundamental rights of individuals while reaping the benefits of innovation.

· January 12, 2023
Lawfare