Tackling Online Abuse and Disinformation Targeting Women in Politics

Worldwide, women in politics are frequent targets of abuse and threats online, but social media companies and governments are not doing nearly enough to combat it.

by Lucina Di Meco and Saskia Brechenmacher
Published on November 30, 2020

In 2017, soon after then Ukrainian member of parliament Svitlana Zalishchuk gave a speech to the United Nations on the impact of the Russian-Ukrainian conflict on women, a fake tweet began to circulate on social media claiming that she had promised to run naked through the streets of Kiev if Russia-backed separatists won a critical battle. Zalishchuk said, “The story kept circulating on the Internet for a year,” casting a shadow over her political accomplishments.

Zalishchuk is not alone in her experience. Around the world, women in politics receive an overwhelming amount of online abuse, harassment, and gendered defamation via social media platforms. For example, a recent analysis of the 2020 U.S. congressional races found that female candidates were significantly more likely to receive online abuse than their male counterparts. On Facebook, female Democrats running for office received ten times more abusive comments than male Democratic candidates. Similar trends have been documented in India, the UK, Ukraine, and Zimbabwe.

Social media companies have come under increasing pressure to take a tougher stance against all forms of hate speech and harassment on their platforms, including against women, racial minorities, and other marginalized groups. Yet their patchwork approach to date has proven insufficient. Governments and international institutions need to press for more action and develop new standards for platform transparency and accountability that can help address the widespread toxicity that is currently undermining online political debate. If effectively designed and implemented, the EU’s Digital Services Act and U.S. President-elect Joe Biden’s proposed National Task Force on Online Harassment and Abuse will represent steps in the right direction.

The Global Challenge

Online abuse against politicians is often misunderstood as inevitable: after all, most public figures occasionally find themselves on the receiving end of vitriolic attacks. Yet over the past several years, the gendered and racialized nature of the phenomenon has received increasing policy attention, as women appear to be disproportionately targeted by online abuse and disinformation attacks.

This pattern tends to be even more pronounced for female political leaders from racial, ethnic, religious, or other minority groups; for those who are highly visible in the media; and for those who speak out on feminist issues. In India, for example, an Amnesty International investigation found that one in every seven tweets that mentioned women politicians was problematic or abusive—and that both Muslim women politicians and women politicians belonging to marginalized castes received substantially more abuse than those from other social groups.

Female politicians are not only targeted disproportionately but also subjected to different forms of harassment and abuse. Attacks targeting male politicians mostly relate to their professional duties, whereas online harassment directed at female politicians is more likely to focus on their physical appearance and sexuality and include threats of sexual violence and humiliating or sexualized imagery. Women in politics are also frequent targets of gendered disinformation campaigns, defined as the spreading of deceptive or inaccurate information and images. Such campaigns often create story lines that draw on misogyny and gender stereotypes. For example, a recent analysis shows that immediately following Kamala Harris’s nomination as the 2020 U.S. vice presidential candidate, false claims about Harris were being shared at least 3,000 times per hour on Twitter, in what appeared to be a coordinated effort. Similar tactics have been used throughout Europe and in Brazil.

The disproportionate and often strategic targeting of women politicians and activists has direct implications for the democratic process: it can discourage women from running for office, push women out of politics, or lead them to disengage from online political discourse in ways that harm their political effectiveness. For those women who persevere, the abuse can cause psychological harm and waste significant energy and time, particularly if politicians struggle to verify whether or when online threats pose real-life dangers to their safety.

What’s Driving Gendered Online Abuse

Some political scientists and social psychologists point to gender role theory to explain harassment and threats targeting female politicians. In many societies, the characteristics traditionally associated with politicians—such as ambition and assertiveness—tend to be coded “male,” which means that women who display these traits may be perceived as transgressing traditional social norms. Online harassment of women seeking political power could thus be understood as a form of gender role enforcement, facilitated by anonymity.

However, online abuse and sexist narratives targeting politically active women are not just the product of everyday misogyny: they are reinforced by political actors and deployed as a political strategy. Illiberal political actors often encourage online abuse against female political leaders and activists as a deliberate tactic to silence oppositional voices and push feminist politicians out of the political arena.

Laura Boldrini, an Italian politician and former UN official who served as president of the country’s Chamber of Deputies, experienced this situation firsthand: following sexist attacks by Matteo Salvini, leader of the far-right Northern League party, and other male politicians, she was targeted by a wave of threatening and misogynistic abuse both online and offline. “Today, in my country, threats of rape are used to intimidate women politicians and push them out of the public sphere—even by public figures,” notes Boldrini. “Political leaders themselves unleash this type of reaction.”1

What Can Be Done

In recent years, women politicians and activists have launched campaigns to raise awareness of the problem and its impact on democratic processes. Last August, the U.S. Democratic Women’s Caucus sent a letter to Facebook urging the company to protect women from rampant online attacks on the platform and to revise algorithms that reward extremist content. Similar advocacy initiatives have proliferated in different parts of the world, from the global #NotTheCost campaign to Reclaim the Internet in the UK, #WebWithoutViolence in Germany, and the #BetterThanThis campaign in Kenya.

Civil society organizations that support women running for office are also spearheading new strategies to respond to gendered online abuse. Some are offering specialized training and toolkits to help women political leaders protect themselves and counter sexualized and racialized disinformation. In Canada, a social enterprise created ParityBOT, a bot that detects problematic tweets about women candidates and responds with positive messages, thus serving both as a monitoring mechanism and a counterbalancing tool.
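To illustrate the basic logic of such a tool, the sketch below pairs a simple toxicity check with a bank of pre-written positive messages. It is a minimal, hypothetical example: ParityBOT’s actual implementation is not described here, the `score_toxicity` heuristic merely stands in for a real trained classifier, the message text is invented, and posting to Twitter is omitted.

```python
import random

# Illustrative only: a real system would rely on a trained toxicity model
# rather than this crude keyword heuristic.
ABUSIVE_MARKERS = {"ugly", "shrill", "hysterical"}

POSITIVE_MESSAGES = [
    "Women who run for office strengthen our democracy. Thank you for stepping up!",
    "More women in politics means better representation for everyone.",
]

def score_toxicity(text: str) -> float:
    """Crude stand-in for a toxicity classifier: share of abusive markers found."""
    text = text.lower()
    return sum(marker in text for marker in ABUSIVE_MARKERS) / len(ABUSIVE_MARKERS)

def handle_tweet(tweet_text: str, threshold: float = 0.3) -> str | None:
    """If a tweet about a woman candidate looks abusive, return a positive
    message to post as a counterweight; otherwise do nothing."""
    if score_toxicity(tweet_text) >= threshold:
        return random.choice(POSITIVE_MESSAGES)
    return None

if __name__ == "__main__":
    print(handle_tweet("Why is she even running? She is so shrill."))
```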

Yet despite rising external pressure from politicians and civil society, social media companies’ responses have so far been inadequate to tackle a problem as vast and complex as gendered disinformation and online abuse—whether it targets female politicians, activists, or ordinary citizens. For example, Facebook recently created an Oversight Board tasked with improving the platform’s decisionmaking around content moderation—yet many experts are highly skeptical of the board’s ability to drive change given its limited scope and goals. Twitter reportedly increased enforcement of its hate speech and abuse policies in the second half of 2019, as well as expanded its definition of dehumanizing speech. However, its policies to date lack a clear focus on the safety of women and other marginalized groups. Broader reforms are urgently needed.

Increase Platform Transparency and Accountability

Major social media platforms should do more to ensure transparency, accountability, and gender sensitivity in their mechanisms for content moderation, complaints, and redress. They should also take steps to proactively prevent the spread of hateful speech online, including through changes in risk assessment practices and product design.

To date, most tech companies still have inadequate and unclear content moderation systems. For example, social media companies currently do not disclose their exact guidelines on what constitutes hate speech and harassment or how they implement those guidelines. To address this problem, nonprofits such as Glitch and ISD have suggested that social media platforms allow civil society organizations and independent researchers to access and analyze their data on the number and nature of complaints received, disaggregated by gender, country, and the redress actions taken. According to Amnesty International, tech companies should also be more transparent about their language detection mechanisms, the number of content moderators employed by region and language, the volume of reports handled, and how moderators are trained to recognize culturally specific and gendered forms of abuse. To this day, most tech companies focus on tackling online abuse primarily in Europe and the United States, resulting in an enforcement gap in the Global South. Greater transparency about companies’ current content moderation capacity would enable governments and civil society to better identify shortcomings and push for targeted resource investments.
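As a rough illustration of what this kind of disaggregated disclosure could look like, the sketch below defines a hypothetical complaint record and aggregates it by country, target gender, and redress action. The field names and categories are assumptions chosen for illustration, not an existing reporting standard.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ComplaintRecord:
    """One moderation complaint, in the disaggregated form advocates have
    asked platforms to publish (fields are illustrative, not a real schema)."""
    country: str
    reported_target_gender: str   # e.g., "woman", "man", "nonbinary", "unknown"
    category: str                 # e.g., "gendered_harassment", "hate_speech"
    redress_action: str           # e.g., "removed", "label_applied", "no_action"

def transparency_summary(records: list[ComplaintRecord]) -> Counter:
    """Count complaints by country, target gender, and action taken."""
    return Counter(
        (r.country, r.reported_target_gender, r.redress_action) for r in records
    )

if __name__ == "__main__":
    sample = [
        ComplaintRecord("IN", "woman", "gendered_harassment", "no_action"),
        ComplaintRecord("IN", "woman", "gendered_harassment", "removed"),
        ComplaintRecord("UK", "man", "hate_speech", "removed"),
    ]
    for key, count in transparency_summary(sample).items():
        print(key, count)
```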

The move to more automated content moderation is unlikely to solve the problem of widespread and culturally specific gendered and racialized online abuse. Until now, social media companies have used automated tools primarily for content that is easier to identify computationally. Yet these tools are blunt and often biased. So far during the coronavirus pandemic, Facebook, Twitter, and Google have all relied more heavily on automation to remove harmful content. As a result, significantly more accounts have been suspended and more content has been flagged and removed than in the months leading up to the pandemic. But some of this content was posted by human rights activists who had no mechanism for appealing those decisions, and some clearly hateful content—such as racist and anti-Semitic hate speech in France—remained online. “Machine learning will always be a limited tool, given that context plays an enormous part of how harassment and gendered disinformation work online,” notes Chloe Colliver, the head of digital policy and strategy at ISD. “We need some combination of greater human resources and expertise along with a focus on developing AI systems that are more accurate in detecting gendered disinformation.”2
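The limitation Colliver describes can be seen in a toy, context-blind filter: because it only looks for listed terms, it flags counter-speech that quotes abuse while missing coded attacks that use no listed keyword. The filter below is purely hypothetical and does not describe any platform’s actual moderation system.

```python
# Hypothetical, context-blind keyword filter, for illustration only.
SLUR_LIST = {"witch", "harpy"}  # placeholder terms

def flag_for_removal(post: str) -> bool:
    """Flags any post containing a listed term anywhere in the text,
    with no awareness of who is speaking or why."""
    lowered = post.lower()
    return any(term in lowered for term in SLUR_LIST)

# False positive: an activist quoting abuse in order to condemn it is flagged.
print(flag_for_removal("Calling a candidate a 'witch' is unacceptable."))  # True

# False negative: coded abuse with no listed keyword sails through.
print(flag_for_removal("We all know what women like her really deserve."))  # False
```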

The proliferation of online harassment, hate speech, and disinformation is not only driven by gaps in content moderation but also by a business model that monetizes user engagement with little regard for risk. At the moment, Twitter and other platforms rely on deep learning algorithms that prioritize disseminating content with greater engagement. Inflammatory posts often quickly generate comments and retweets, which means that newsfeed algorithms will show them to more users. Online abuse that relies on sensational language and images targeting female politicians thus tends to spread rapidly. Higher levels of engagement generate more user behavior data that brings in advertising revenue, which means social media companies currently have few financial incentives to change the status quo.
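A simplified, invented ranking function makes the mechanism concrete: when the score is driven by raw engagement, a smear that provokes hundreds of replies and reshares outranks a substantive post with more likes. Real newsfeed rankers are far more complex and proprietary; the weights here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    reshares: int

def engagement_score(post: Post) -> float:
    """Simplified engagement-weighted ranking: signals of strong reactions
    (replies, reshares) are weighted more heavily than likes.
    Weights are invented for illustration."""
    return 1.0 * post.likes + 3.0 * post.replies + 5.0 * post.reshares

posts = [
    Post("Thread on the candidate's housing plan", likes=120, replies=10, reshares=15),
    Post("Sexualized smear about the candidate", likes=80, replies=400, reshares=250),
]

# The inflammatory post wins the ranking purely on engagement volume.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(p)), p.text)
```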

Advocates and experts have put forward different proposals to tackle this problem. For example, social media companies could proactively tweak their recommendation systems to prevent users from being nudged toward hateful content. They also could improve their mechanism for detecting and suspending algorithms that amplify gendered and racialized hate speech—a step that some organizations have suggested to help address pandemic-related mis/disinformation. As part of this process, companies could disclose and explain their content-shaping algorithms and ad-targeting systems, which currently operate almost entirely beyond public scrutiny.
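One way to picture the down-ranking proposals above is as a penalty applied to that same engagement score once a post is flagged as likely gendered abuse. This is a sketch of the idea under assumed inputs (a classifier probability and a penalty factor), not a description of how any platform actually implements such adjustments.

```python
def base_engagement_score(likes: int, replies: int, reshares: int) -> float:
    """Same engagement-weighted score as in the previous sketch."""
    return 1.0 * likes + 3.0 * replies + 5.0 * reshares

def adjusted_score(likes: int, replies: int, reshares: int,
                   abuse_probability: float, penalty: float = 0.9) -> float:
    """Down-rank content in proportion to a classifier's estimate that it is
    gendered abuse. Both the classifier and the penalty factor are assumed."""
    return base_engagement_score(likes, replies, reshares) * (1 - penalty * abuse_probability)

# A post flagged with high abuse probability loses most of its reach,
# even though its raw engagement is high.
print(adjusted_score(80, 400, 250, abuse_probability=0.95))  # heavily down-ranked
print(adjusted_score(120, 10, 15, abuse_probability=0.02))   # barely affected
```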

In addition, they could improve their risk assessment practices prior to launching new products or tools or before expanding into a new political and cultural context. At the moment, content moderation is often siloed from product design and engineering, which means that social media companies are permanently focused on investigating and redressing complaints instead of building mechanisms that “increase friction” for users and make it harder for gendered hate speech and disinformation to spread in the first place. Moreover, decisions around risk are often taken by predominantly male, white senior staffers: this type of homogeneity frequently leads to gender and race blindness in product development and rollout. Across all of these domains, experts call for greater transparency and collaboration with outside expertise, including researchers working on humane technology and ethical design.

Step Up Government Action

Given tech companies’ limited action to date, democratic governments also have a responsibility to do more. Rather than asking social media companies to become the final arbiters of online speech, they should advance broader regulatory frameworks that require platforms to become more transparent about their moderation practices and algorithmic decisionmaking, as well as ensure compliance through independent monitoring and accountability mechanisms. Governments also have an important role to play in supporting civil society advocacy, research, and public education on gendered and racialized patterns of online abuse, including against political figures.

The first wave of legislation aimed at mitigating abuse, harassment, and hate speech on social media platforms focused primarily on criminalizing and removing different types of harmful online content. Some efforts have targeted individual perpetrators. For example, in the UK, legal guidelines issued in 2016 and in 2018 enable the Crown Prosecution Service to prosecute internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others), or circulate doctored images. In 2019, Mexico passed a new law that specifically targets gendered online abuse: it punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks. The law also includes the concept of “digital violence” in the Mexican penal code.

Such legal reforms are important steps, particularly if they are paired with targeted resources and training for law enforcement. Female politicians often report that law enforcement officials do not take their experiences with online threats and abuse seriously enough; legal reforms and prosecution guidelines can help change this pattern. However, efforts to go after individual perpetrators are insufficient to tackle the current scale of misogynistic online harassment and abuse targeting women politicians and women and girls more generally: even if applicable legal frameworks exist, thresholds for prosecution are often set very high and not all victims want to press charges. Moreover, anonymous perpetrators can be difficult to trace, and the caseload easily exceeds current policing capacity. In the UK, for example, fewer than 1 percent of cases taken up by the police unit charged with tackling online hate crimes have resulted in charges.

Other countries have passed laws that make social media companies responsible for the removal of illegal material. For example, in 2017, Germany introduced a new law that requires platforms to remove hate speech or illegal content within twenty-four hours or risk millions of dollars in fines. However, this approach has raised strong concerns among human rights activists, who argue that this measure shifts the responsibility to social media companies to determine what constitutes legal speech without providing adequate mechanisms for judicial oversight or judicial remedy. In June 2020, the French constitutional court struck down a similar law due to concerns about overreach and censorship. French feminist and antiracist organizations had previously criticized the measure, noting that it could restrict the speech of those advocating against hate and extremism online and that victims would benefit more from sustained investments in existing legal remedies.

In light of these challenges, many researchers and advocates have started pushing for a shift away from content-based rules toward a more comprehensive regulatory framework for social media companies. One example of this approach is the UK’s 2019 Online Harms White Paper, which “proposes establishing in law a new duty of care towards users” to deal proactively with possible risks that platform users might encounter, under the oversight of an independent regulator. The proposed regulatory framework—which is set to result in a new UK law in early 2021—would “outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.” It would also set strict standards for transparency and require companies to ensure that their algorithms do not amplify extreme and unreliable material for the sake of user engagement. The EU’s Digital Services Act, currently in development, is another opportunity to advance a regulatory approach focused on harm prevention. The act should demand greater transparency from social media platforms about content moderation practices and algorithmic systems, as well as require better risk assessment practices. It also should incentivize companies to move away from a business model that values user engagement above everything else.

Of course, governments can take action beyond passing and enforcing platform regulations. They can promote digital citizenship education in school curricula to ensure that teenagers and young adults develop the skills to recognize and report inappropriate online conduct and to communicate respectfully online. In Europe, as part of negotiations around the Digital Services Act, activists are demanding that governments dedicate part of the Digital Services Tax to fund broader efforts to tackle online abuse, including additional research on patterns of gendered and racialized online harassment. In the United States, Biden’s proposal to set up a national task force—bringing together federal and state agencies, advocates, law enforcement, and tech companies—to tackle online harassment and abuse and understand its connection to violence against women and extremism represents a welcome and important step toward developing longer-term solutions. Equally welcome are his proposals to allocate new funding for law enforcement training on online harassment and threats and to support legislation that establishes a civil and criminal cause of action for unauthorized disclosure of intimate images.

Who Is Responsible

The problem of gendered and racialized harassment and abuse targeting women political leaders extends far beyond the online realm: traditional media outlets, political parties, and civil society all have crucial roles to play in committing to and modeling a more respectful and humane political discourse.

However, social media companies have the primary responsibility to prevent the amplification of online abuse and disinformation—a responsibility that they are currently failing to meet. As the coronavirus pandemic has further accelerated the global shift to online campaigning and mobilization, there is now an even greater need for governments to hold these companies accountable for addressing all forms of hate speech, harassment, and disinformation on their platforms. Both Biden’s proposed national task force and the EU’s Digital Services Act represent key opportunities for developing new regulatory approaches mandating greater transparency and accountability in content moderation, algorithmic decisionmaking, and risk assessment.

These reform efforts need to include a gender lens. As Boldrini emphasizes, “It is extremely important to speak out against sexism and misogyny in our societies, particularly in light of the global movement against women’s rights inspired by the far right. The time has come to start a new feminist revolution to defend the rights we already have—as well as to acquire new rights.” Ensuring that all women political leaders and activists can engage in democratic processes online without fear of harassment, threats, and abuse will be a central piece of this struggle.3

Notes

1 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.

2 Authors’ interview with Chloe Colliver, video call, October 28, 2020.

3 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.