
In The Media

Get Ready for Deepfakes to Be Used in Financial Scams

By Jon Bateman
Published on Aug 10, 2020


Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?

The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the most well-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company’s stock price to fall, while the criminals profit from short sales.

At first blush, this scenario is not much different from the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can profit from rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash.

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals have already come true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son's voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.

What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

This article was originally published by Techdirt.

About the Author

Jon Bateman

Senior Fellow and Co-Director, Technology and International Affairs Program

Jon Bateman is a senior fellow and co-director of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.


Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.
