PCIO Policy Proposal

Securing the United States from Online Disinformation—A Whole-of-Society Approach

Disinformation is disrupting democracies. Yet responses across social media platforms lack formal coordination, and investments in counter-disinformation approaches remain scarce.

by Steven Bradley
Published on August 24, 2020

The 2020 National Defense Authorization Act (NDAA) calls for the establishment of a Social Media Data and Threat Analysis Center to address the rising threat of disinformation.1 Indeed, since the 2016 U.S. election, false and misleading narratives have been increasingly distributed with the effect of dividing the population and eroding public trust in democratic institutions. The Cognitive Security Intelligence Center (CSIC), which was created in the spirit of the NDAA’s call, was a step forward, but it needs additional support to realize its full potential to address disinformation. There are still major gaps in the community working to counter disinformation that could be best addressed through two new organizations.

One gap is a lack of coordination and guidelines among social media platforms for content moderation. Too often defensive measures from social platforms—acting on their own—have been insufficient to counter the growing threat of disinformation. Pursuant to their own policies, they reactively remove or flag what they find objectionable, but these platform-specific policies are not consistent. What is flagged or removed from one site may be left to stand by another.

This is where a new organization, the National Commission for Countering Influence Operations (NCCIO), could come in. The NCCIO would be charged with creating and maintaining a set of concrete guidelines to govern content moderation efforts by social platform operators, the scope of which would be constrained strictly to threats that impact national security matters, such as military operations, disaster response, human health, and election security. The guidelines would strive to include scenarios that illustrate concrete examples of how disinformation could pose a threat to these security domains. As coordination and consistency are critical, the NCCIO would advise the CSIC—which enables private industry to share threat indicators and collaborate to enable a more rapid, complete response to disinformation threats2—on the creation of operational procedures to ensure effective cross-sector counter-disinformation response.

Some have suggested that the U.S. government create an organization like the National Counterterrorism Center (NCTC), which was established after the September 11 terrorist attacks, to target disinformation. But such an organization would suffer from a lack of involvement from nongovernment stakeholders.3 On the other hand, one of the key features of the CSIC and the NCCIO is their engagement of multiple stakeholders. Adhering to a whole-of-society approach, the NCCIO would be staffed with representatives from U.S. government stakeholders (such as the DOD, State Department, DHS, HHS, and FDA) and disinformation experts. Civil society organizations like the American Civil Liberties Union, the Electronic Frontier Foundation, and PEN America should also be included in the commission to address free speech concerns.

Private sector engagement could pose a challenge, and for that reason new policies may be needed alongside the commission to incentivize platforms to participate with the CSIC and adopt NCCIO guidelines. These could include Congress’s creating a narrow “carve-out” amendment to Section 230 of the Communications Decency Act, similar to the way it incentivized social platforms in 2018 to defend against online sex trafficking. (Section 230 largely protects the platforms from legal liability arising from third-party content published on their sites; Congress’s 2018 exemption stated that it does not prohibit the enforcement of state and federal civil and criminal laws relating to sex trafficking.)4

While there is reasonable objection to the use of Section 230 carve-outs on the basis that governments should not be the arbiters of acceptable speech, national security poses a clear exception; the assurance of U.S. security cannot be delegated to a collection of private entities that have demonstrated an inability, and at times unwillingness, to defend it. And a recent groundswell of calls to completely repeal Section 230—on the basis that it was established in 1996 and may no longer be applicable in an age of ubiquitous social media—creates more problems than it solves. Holding social platforms accountable for all third-party content on their sites is simply not a workable solution; the platforms would be mired in endless legal challenges and would likely overmoderate, causing grave harm to free speech. Clearly, some legislative change regarding online content is needed, and given their thoughtful analysis of previous Section 230 proposals, civil society organizations would be key to helping devise an appropriate solution.5

The creation of the NCCIO and its collaboration with the CSIC would excel in coordinating multiple stakeholder groups to target a problem that threatens all of society. A similar approach is needed to address the second major shortcoming of current counter-disinformation efforts: the lack of sufficient advancement in counter-disinformation technologies.

Counter-disinformation technology innovation is currently being held back by the size of the market, which is tiny in comparison to the $173 billion market for cybersecurity, a practice that deals with threats that are in some ways similar and are arguably of comparable technical complexity.6 While some boutique start-up companies are now joining the fight against disinformation—as are some universities through grants from nonprofit organizations like the Knight Foundation, which invested $50 million in 2019 to research technology’s impact on democracy with respect to how we receive and engage with online information7—these levels of investment are not enough to counter the serious national security threat that online disinformation poses. Investment from the U.S. government is also relatively slim; there is still a heavy reliance on the large social platforms to solve this problem themselves, and they do so with minimal transparency.

This lack of investment is disproportionate to the importance of technological innovation in the fight against disinformation. While technology alone will not solve all disinformation challenges, technology is what dramatically increased the impact of disinformation through social media and bots, and advances will increasingly be used by adversaries to automate the generation of false or misleading content. Given the scale of online disinformation and the speed at which it spreads, technological advances to counter disinformation will be crucial.

To that end, the NCCIO should also establish a consortium of technology organizations, similar to the COVID-19 High Performance Computing Consortium, to address the unmet technological needs of the counter-disinformation community.8 This new consortium would be responsible for developing a research agenda and collaborating on R&D activities supported by funding and guidance from a suitable U.S. government sponsor, such as the Defense Advanced Research Projects Agency (DARPA), an organization that has already managed a variety of research programs around disinformation.9 Consortium participants should include the large social platforms, commercial technology companies, and academic centers focused on disinformation research.10

Not enough is being done today to collectively manage the disinformation threat. The key stakeholders have essential roles: private industry moderates content on social platforms, the federal government provides national security, academia drives technological advancement, and civil society organizations ensure free speech. But in this dynamic a collective approach is lacking. Taken together, the NCCIO, its technology consortium, and the CSIC would enable the collaboration and coordination that are essential to defend against the threat that online disinformation poses to national security.

Steven Bradley works at the intersection of technology, security operations, and policy to advance U.S. national cyber defense. He has over twenty years of experience managing the development and application of advanced data analytics solutions to support national security missions, and he currently serves as the director of the Cognitive Security Intelligence Center (CSIC).

Notes

1 “National Defense Authorization Act for Fiscal Year 2020: Conference Report,” Section 5323 on the “Encouragement of Cooperative Actions to Detect and Counter Foreign Influence Operations,” U.S. House of Representatives, https://docs.house.gov/billsthisweek/20191209/CRPT-116hrpt333.pdf.

2 “Executive Order -- Promoting Private Sector Cybersecurity Information Sharing,” White House, February 13, 2015, https://obamawhitehouse.archives.gov/the-press-office/2015/02/13/executive-order-promoting-private-sector-cybersecurity-information-shari; Deborah Kobza, “Community Update: Deborah Kobza Announces Formation Of Cognitive Security ISAO,” People-Centered Internet, January 15, 2020, https://peoplecentered.net/2020/01/15/community-update-deborah-kobza-announces-formation-of-cognitive-security-isao/; “International Assoc. Of Certified ISAOs Welcomes Steven Bradley, Director Cognitive Security Intelligence Center/CS-ISAO,” International Association of Certified ISAOs, August 20, 2020.

3 Global Internet Forum to Counter Terrorism, https://www.gifct.org/; Paul M. Barrett, Tara Wadhwa, and Dorothee Baumann-Pauly, “Combating Russian Disinformation: The Case for Stepping Up the Fight Online,” Center for Business and Human Rights, New York University, July 2018, https://issuu.com/nyusterncenterforbusinessandhumanri/docs/nyu_stern_cbhr_combating_russian_di.

4 “H.R. 1865: Allow States and Victims to Fight Online Sex Trafficking Act of 2017,” U.S. House of Representatives, April 11, 2018, https://www.congress.gov/115/plaws/publ164/PLAW-115publ164.pdf.

5 Karen Kornbluh, Ellen P. Goodman, and Eli Weiner, “Safeguarding Digital Democracy,” Digital Innovation and Democracy Initiative, German Marshall Fund, March 2020, https://www.gmfus.org/sites/default/files/Safeguarding%20Democracy%20against%20Disinformation_v7.pdf; Matt Bailey, “Three and a Half Ways Not to Fix the Internet,” PEN America, July 1, 2020, https://pen.org/three-and-a-half-ways-not-to-fix-the-internet/.

6 Louis Columbus, “2020 Roundup of Cybersecurity Forecasts and Market Estimates,” Forbes, April 5, 2020, https://www.forbes.com/sites/louiscolumbus/2020/04/05/2020-roundup-of-cybersecurity-forecasts-and-market-estimates.

7 Laura Dickinson, “Knight Invests $50M to Develop New Field of Research Around Technology’s Impact on Democracy,” Knight Foundation, July 22, 2019, https://knightfoundation.org/press/releases/knight-fifty-million-develop-new-research-technology-impact-democracy/.

8 COVID-19 High Performance Computing (HPC) Consortium, https://covid19-hpc-consortium.org/projects.

9 Media Forensics, Defense Advanced Research Projects Agency, https://www.darpa.mil/program/media-forensics; Semantic Forensics, Defense Advanced Research Projects Agency, https://www.darpa.mil/program/semantic-forensics; Computational Simulation of Online Social Behavior, Defense Advanced Research Projects Agency, https://www.darpa.mil/program/computational-simulation-of-online-social-behavior.

10 Observatory on Social Media, Indiana University, https://truthy.indiana.edu/; Center for Informed Democracy & Social-Cybersecurity, Carnegie Mellon University, https://www.cmu.edu/ideas-social-cybersecurity/; Center for an Informed Public, University of Washington, https://www.cip.uw.edu/; Internet Observatory, Stanford, https://cyber.fsi.stanford.edu/io.

Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.