
In The Media

Get Ready for Deepfakes to Be Used in Financial Scams

By Jon Bateman
Published on Aug 10, 2020

Source: Techdirt

Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?

The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the most well-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company’s stock price to fall, while the criminals profit from short sales.

At first blush, this scenario is not much different than the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash.

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals have already come true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.

What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

This article was originally published by Techdirt.

About the Author

Jon Bateman is a senior fellow and co-director of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.
