Q&A

Cut Loose by Tech Giants, Will Far-Right Extremists Be Adrift?

As far-right extremists search for their next online home, the deluge of political misinformation might be waning. But will any potential drop in rhetoric and conspiracy theories be permanent?

Published on January 19, 2021

Has Banning Trump From Facebook and Twitter Triggered a Platform Migration?

There are signs of cross-platform movement. Following the news that Twitter and Facebook were suspending U.S. President Donald Trump’s accounts, there was a big uptick in downloads of alternative messaging apps. This could certainly reflect a migration of far-right supporters caught up in the sweeps around Trump and the Capitol Hill insurrection, but it’s difficult to pinpoint given today’s complex and fluid information environment. For example, WhatsApp’s recent and confusing policy update might have also driven many users to other platforms, including Telegram—shortly after the update, the latter saw a 500-percent increase in new users, adding more than 25 million worldwide in seventy-two hours.

That said, anecdotal evidence shows that banned users are jumping ship. Concordia University’s Marc-André Argentino noted that so-called evasion accounts on Twitter are being used to drive traffic to new communities off platform. Likewise, according to the Shorenstein Center’s Joan Donovan, Trump supporters are teaching others via YouTube how to start Telegram channels and how to prevent updates and the removal of pulled apps on mobile devices. Those banned from one platform will continue to seek out creative alternatives, including chat groups inside gaming services.

Will Social Media’s Recent Actions Take the Far-Right’s Megaphone Away?

For the moment, yes. Banning far-right, radicalized Trump supporters from major social media platforms will undoubtedly shrink their audience sizes and reach. Zignal Labs reported a 73 percent drop in election misinformation on Twitter after the accounts of Trump and some 70,000 of his supporters were taken down. But as the University of Washington’s Kate Starbird points out, it is too soon to make such claims of efficacy: doing so requires a solid pre-takedown baseline of misinformation, and counting keywords, as this study did, may not be a reliable measure.

Moreover, some evidence shows that groups pushed to the internet’s fringes can become even more extreme. Shutting down accounts will help prevent average unsuspecting users from being exposed to dangerous content, but it won’t necessarily stop those who already endorse that content. The bigger question here is whether the recent actions of major tech companies will cause a parallel Web to emerge—infrastructure and all—that offers refuge to these Trump supporters.

Alternative platforms like Parler were poised to benefit from the migration had Google and Apple not removed the app from their stores and Amazon not refused to provide its cloud-hosting services. And although the pulpit has shrunk, some say Parler is courting a new Web host, Epik, which currently supports other far-right sites. Others report that Parler is already resurfacing thanks to a Russian-owned tech company, DDoS-Guard. Further, Trump has made noises about building his own digital media empire. But this would be a massive undertaking requiring not just messaging and distribution platforms but also internet infrastructure, including hosting. Far-right extremists don’t need to be tech-savvy themselves, but they do need massive resources to hire people who are.

Is There a Danger That Far-Right Platforms Will Become Mainstream?

It’s unlikely, at least in the short or medium term. Parler’s user numbers have fluctuated in recent months, dropping from 2.9 million in November to 2.3 million in December before rising to 15 million in January 2021. By comparison, Twitter reported 187 million monetizable daily active users in the third quarter of 2020. Even before Parler was dropped by Amazon, Apple, and Google, its user base likely wasn’t big enough to be a financial success.

It’s anyone’s guess how many of the more than 70 million people who voted for Trump will jump to a fringe platform—but without mass audiences, neither a new social network nor a digital media outlet seems financially viable, especially without the distribution power of a Facebook, for example. Yet these communities, even with a so-called alt-tech ecosystem, are not operating in a void. Mainstream media coverage, such as via cable TV, is a double-edged sword, informing the public while also providing considerable free promotion of far-right narratives.

Will Recent Events Fundamentally Change How Social Media Platforms Manage Harmful Narratives?

It’s already happening, but the Capitol siege may speed up the trend. Deplatforming—the shutting down of controversial speakers or speech—is the bluntest tool in the toolkit for countering influence operations, and it isn’t new. Reddit has been deplatforming for hate speech since 2015. And there was a noticeable increase in tech companies deplatforming in 2018, most famously when Facebook, Twitter, Apple, Spotify, and YouTube all removed Alex Jones and InfoWars from their platforms.

It’s difficult to say whether deplatforming works, as there are few studies on the effectiveness of countering influence operations. Facilitating the sort of collaborative research needed between tech companies and academics is an ongoing challenge. But those who support deplatforming often point to the economic repercussions for those booted off. Many far-right personalities, like Milo Yiannopoulos, made their livelihoods off of social media platforms and, once deplatformed, struggled to find income.

What is new is that a sitting president was banned, a move widely avoided in the past due to public interest in a politician’s views and commentary—including the good, the bad, and the ugly. But while many Americans applauded the move to deplatform Trump, some European leaders raised alarm that it foreshadows the rise of a “digital oligarchy.” A growing number are instead calling for tech giants to be held legally responsible by “treating social media providers not as owners of neutral platforms connecting consumers with digital content creators but as publishers in their own right.”

One thing is for certain—even if some circles in the United States are relieved about a change in presidents, pressure on social media platforms to do more about misinformation, conspiracy theories, and hate speech will only mount. Moreover, if the bipartisan support for antitrust action against big tech companies is any indication, few will be sympathetic to the effects of that pressure on industry.

Carnegie’s Partnership for Countering Influence Operations (PCIO) is grateful for funding provided by the William and Flora Hewlett Foundation, Craig Newmark Philanthropies, Facebook, Twitter, and WhatsApp. PCIO is wholly and solely responsible for the contents of its products, written or otherwise. We welcome conversations with new donors. All donations are subject to Carnegie’s donor policy review. We do not allow donors prior approval of drafts, influence on selection of project participants, or any influence over the findings and recommendations of work they may support.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.