In the Media

Is There a Difference Between Good and Bad Online Election Targeting?

With the elections heating up and news feeds brimming with ads for this candidate and that cause, voters need to be adept at distinguishing persuasive from manipulative microtargeting.

Published by The Hill on October 15, 2018


Setting personal goals such as saving for retirement, running a marathon, or learning to meditate has never been easier. Four thousand years ago, the Babylonians, the first people to make New Year's resolutions, had only willpower to push them forward. Today, anyone with a smartphone has apps to guide them toward their aspirations.

Underpinning those apps is behavioral microtargeting, the union of behavioral science and machine learning, which uses a person's data to predict, and refine through experimentation, the message most likely to persuade that person to take some action. Microtargeting can be a powerful force for good. Imagine if everyone saved for retirement. Like any tool, however, it can also be used unethically, to manipulate.

With the elections heating up and news feeds brimming with ads for this candidate and that cause, voters need to be adept at distinguishing persuasive from manipulative microtargeting. But what is the difference? Persuasion involves convincing your audience that your position advances their agenda. Manipulation involves convincing your audience of the same thing when, in reality, your position advances only your own agenda. Persuasion, in short, relies on integrity, whereas manipulation relies on deception.

The 2012 and 2016 elections illustrate how microtargeting can be used ethically to persuade and unethically to manipulate. In 2012, the Obama campaign created an app for supporters to donate money and find houses to canvass. By asking the people who downloaded the app for permission to scan their Facebook news feeds and friends lists, the campaign also collected data on the friends of supporters, which it used to determine who might be persuadable. The campaign then encouraged supporters to contact their most persuadable friends.

Importantly, the campaign complied with Facebook’s terms of service and federal election law. The campaign had the consent of its supporters to access their data, and the supporters knew the campaign would use their data for political purposes. The campaign also directly messaged only those who downloaded the app. The transgression, albeit legal, was that although its supporters gave consent, their friends did not and so were unaware that a political campaign had obtained and used their data.

In 2016, the deceptions by Cambridge Analytica on behalf of candidate Donald Trump were numerous and more egregious. Cambridge Analytica was the American commercial subsidiary of a British company, which purchased Facebook data from a developer who duped people into relinquishing their data and friends lists under the guise of a personality quiz purportedly for academic research. Cambridge Analytica then sent targeted ads to those people and to anyone with a similar profile.

These activities violated not only Facebook’s terms of service, which bar developers from selling user data to businesses, but also federal election law, which bans foreign nationals from participating in decisions that affect American elections. Worse, none of the people targeted by Cambridge Analytica, neither those who took the personality quiz nor their friends, knew that a political campaign had their data.

Probably most disturbing, however, was the content of Cambridge Analytica’s microtargeting. According to former employee turned whistleblower Christopher Wylie, Cambridge Analytica “sought to identify mental vulnerabilities in voters and worked to exploit them by targeting information designed to activate some of the worst characteristics in people such as neuroticism, paranoia, and racial biases” that were “making them believe things that are not necessarily true.”

How do voters avoid becoming victims of manipulative microtargeting? Two bipartisan bills before Congress would significantly raise online transparency standards. Introduced by Senators John Kennedy (R-La.) and Amy Klobuchar (D-Minn.), the Social Media Privacy Protection and Consumer Rights Act of 2018 would give people the right to opt out of microtargeting and keep their information private. The Honest Ads Act, also introduced by Klobuchar, with Senators Mark Warner (D-Va.) and John McCain (R-Ariz.), would ensure that online ads are subject to the same rules that apply to television, radio, and print ads.

In the meantime, technology companies have started introducing features that make it easier for people to ascertain the identities and agendas behind the ads they see. Facebook introduced an online archive of its political ads that shows who paid for each ad and the demographics of the people it targeted. Twitter launched a similar policy to help users identify political ads and who paid for them. These features should boost online transparency, but they do not give users enough control. Until privacy and election laws catch up with microtargeting, voters will have to judge for themselves whether the ads they see rely on integrity or deception. My advice going into this election is to ask yourself whether the ads trigger your “inner demons” or speak to your aspirations.

This article was originally published by The Hill.