
Joining the Partnership on AI

By Charlotte Stanton
Published on May 15, 2019

The rapid advances in the field of artificial intelligence (AI) offer extraordinary opportunities and risks. The opportunities span almost every societal domain, from improving the accuracy and speed of medical diagnoses to reducing the energy consumption of data centers. But the risks are equally ubiquitous and significant. AI accidents, AI-enabled synthetic media (e.g., deepfakes), mass unemployment, and algorithmic bias are just some examples of how AI, if hastily developed or deployed, could undermine long-standing social, economic, and political institutions.

Consider AI safety—the collection of research and regulatory efforts seeking to ensure that AI systems reliably perform as desired. Cooperation among technical experts, civil society, and governments is essential for creating the technical standards and governance mechanisms needed to reduce the risk of AI accidents, particularly as AI is increasingly integrated into critical military and energy systems. The Partnership on AI convenes a powerful community of technical experts and civil society organizations working on AI safety, to which Carnegie brings its experience navigating the intergovernmental landscape.

Another arena where multi-stakeholder partnerships are essential is synthetic media—images, audio, or video created with AI. Synthetic media has several beneficial applications; for instance, it can recreate the voices of people with ALS who have lost their ability to speak. But synthetic media can also cause harm: a seemingly realistic but false depiction of a political leader, for example, could incite civil unrest. Perhaps more worrisome, a proliferation of synthetic media could increase public skepticism of authentic media, leading to what some have called a "post-truth" society. Maintaining the public's trust in authentic media against threats from AI-enabled forgeries requires cooperation among journalists, civil society organizations, and social media platforms—cooperation the Partnership on AI is forging.

Across these and other areas, we look forward to working with the Partnership’s diverse community of experts to help build the policy infrastructure required for AI’s safe and beneficial use.

Charlotte Stanton
Former Director, Silicon Valley Office

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
