Matt O'Shaughnessy
Visiting Fellow, Technology and International Affairs Program

about


Matt O’Shaughnessy is a visiting fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, where he applies his technical background in machine learning to research on the geopolitics and global governance of technology. His work examines the impacts of artificial intelligence and other emerging technologies on democracy, inequality, and human rights.

Before joining the Carnegie Endowment, Matt received his PhD from Georgia Tech’s Center for Machine Learning. His technical research developed mathematical tools that use structure hidden in data to help scientists understand complex systems, including applications in the efficient collection of high-dimensional data, explaining black-box machine learning systems, and causal inference.


areas of expertise
education
PhD, Electrical & Computer Engineering, Georgia Tech
MS, Mathematics, Georgia Tech
BS, Electrical Engineering, Georgia Tech
languages
English

All work from Matt O'Shaughnessy

8 Results
commentary
How Hype Over AI Superintelligence Could Lead Policy Astray

AI’s risks—and its policy solutions—are often more evolutionary than revolutionary.

· September 14, 2023
commentary
What a Chinese Regulation Proposal Reveals About AI and Democratic Values

Despite its authoritarian origins, the draft offers lessons for building a truly democratic framework.

· May 16, 2023
article
Reconciling the U.S. Approach to AI

America’s AI policy has been—and likely will remain—a mosaic of individual agency approaches and narrow legislation rather than a centralized strategy.

in the media
Challenges of Implementing AI With “Democratic Values”: Lessons From Algorithmic Transparency

A closer look at one of the most accepted norms for AI systems—algorithmic transparency—demonstrates the challenges inherent in incorporating democratic values into technology.

· April 26, 2023
Lawfare
commentary
Lessons From the World’s Two Experiments in AI Governance

Policymakers can study the measures’ successes and failures to guide their own regulatory approaches.

· February 14, 2023
In the Media
Five Policy Uses of Algorithmic Explainability

The notion that algorithmic systems should be "explainable" is common in the many statements of consensus principles developed by governments, companies, and advocacy organizations.

· February 6, 2023
arxiv.org
in the media
Building AI With Democratic Values Starts With Defining Our Own

The challenges of meaningfully defining and implementing a democratic vision for AI are significant, requiring financial, technical, and political capital. Policymakers must make real investments to address them if “democratic values” are to be more than the brand name for an economic alliance.

· January 24, 2023
The Hill
commentary
One of the Biggest Problems in Regulating AI Is Agreeing on a Definition

Subtle differences in wording can have major impacts on some of the most important problems facing policymakers.

· October 6, 2022