
In The Media

Is There a Difference Between Good and Bad Online Election Targeting?

With the elections heating up and news feeds brimming with ads for this candidate and that cause, voters need to be adept at distinguishing persuasive from manipulative microtargeting.

By Charlotte Stanton
Published on Oct 15, 2018


Setting personal goals such as saving for retirement, running a marathon, or learning to meditate has never been easier. Four thousand years ago, the Babylonians, the first people to make New Year's resolutions, had only willpower to push them forward. Today, anyone with a smartphone has apps to guide them on a path toward their aspirations.

Underpinning those apps is behavioral microtargeting, the union of behavioral science and machine learning, which uses a person's data to predict, and refine through experimentation, the message most likely to persuade that person to perform some action. Microtargeting can be a powerful force for good. Imagine if everyone saved for retirement. Like any tool, however, it can also be used unethically to manipulate.

With the elections heating up and news feeds brimming with ads for this candidate and that cause, voters need to be adept at distinguishing persuasive from manipulative microtargeting. But what is the difference? Persuasion involves convincing your audience that your position advances their agenda. Manipulation involves convincing your audience that your position advances their agenda when, in reality, it advances your own. Persuasion, in short, relies on integrity, whereas manipulation relies on deception.

Instances of deception in the 2012 and 2016 elections illustrate how microtargeting can be used ethically to persuade and unethically to manipulate. In 2012, the Obama campaign created an app for supporters to donate money and find houses to canvass. By asking the people who downloaded the app for permission to scan their Facebook news feeds and friends lists, the campaign also collected data on the friends of supporters, which it used to determine who might be persuadable. The campaign then encouraged supporters to contact their most persuadable friends.

Importantly, the campaign complied with Facebook’s terms of service and federal election law. The campaign had the consent of its supporters to access their data and the supporters knew the campaign would use their data for political purposes. The campaign also only directly messaged those who downloaded the app. The transgression, albeit legal, was that although its supporters gave consent, their friends did not and so were unaware that a political campaign obtained and used their data.

In 2016, the deceptions by Cambridge Analytica on behalf of candidate Donald Trump were numerous and more egregious. Cambridge Analytica, the American commercial subsidiary of a British company, purchased Facebook data from a developer who duped people into relinquishing their data and friends lists under the guise of a personality quiz for academic research. Cambridge Analytica then sent targeted ads to those people and to anyone with a similar profile.

These activities violated not only Facebook's terms of service, which bar developers from selling user data to businesses, but also federal election law, which bars foreign nationals from participating in decisions that affect American elections. Worse, none of the people targeted by Cambridge Analytica, neither the people who took the personality quiz nor their friends, knew that a political campaign had their data.

Probably most disturbing, however, was the content of Cambridge Analytica's microtargeting. According to former employee turned whistleblower Christopher Wylie, Cambridge Analytica “sought to identify mental vulnerabilities in voters and worked to exploit them by targeting information designed to activate some of the worst characteristics in people such as neuroticism, paranoia, and racial biases” that were “making them believe things that are not necessarily true.”

How do voters avoid becoming victims of manipulative microtargeting? Two bipartisan bills before Congress would significantly raise online transparency standards. Introduced by Senators John Kennedy (R-La.) and Amy Klobuchar (D-Minn.), the Social Media Privacy Protection and Consumer Rights Act of 2018 would give people the right to opt out of microtargeting and keep their information private. The Honest Ads Act, also introduced by Klobuchar, with Senators Mark Warner (D-Va.) and John McCain (R-Ariz.), would ensure that online ads are subject to the same rules that apply to television, radio, and print ads.

In the meantime, technology companies have started introducing features that make it easier for people to ascertain the identities and agendas behind the ads they see. Facebook introduced an online archive of all of its political ads, showing who paid for them and the demographics of those targeted. Twitter launched a similar policy to help users identify political ads and who paid for them. These features should boost online transparency, but they do not give users enough control. Until privacy and election laws catch up with microtargeting, voters will have to judge the integrity of the ads they see on their own. My advice going into this election is to ask yourself whether the ads trigger your “inner demons” or your aspirations.

This article was originally published by The Hill.

About the Author

Charlotte Stanton

Former Director, Silicon Valley Office

Charlotte Stanton was the inaugural director of the Silicon Valley office of the Carnegie Endowment for International Peace as well as a fellow in Carnegie’s Technology and International Affairs Program.


