Traditional approaches to governing have never gotten a handle on cyberspace. There are too many actors and there is too much information, from too many sources, moving too quickly across too many jurisdictions. Bad actors further compound the problem, and discord among the big powers prevents the international community from cooperating effectively.
Although most software and digital infrastructure is commercially owned and operated, technology companies lack the legitimacy, breadth of interests, and public policy impulse to make life online safer and more civil. It’s not enough to have Facebook rules, or Google rules, or Alibaba and Huawei rules. And it certainly won’t help if bureaucrats in Beijing, Brussels, or Washington try to divide and conquer the digital political economy, pushing the rest of the world into one bloc or another.
It’s clear a new approach is needed. Strengthening cyber civilization will require hybrid strategies that channel the inventiveness of market forces to further security, development, human rights, and rational discourse.
Because scaled technology can produce and deliver goods and services that people want, and profit from doing so, its inventors and distributors must be involved in setting rules and standards, lest regulation unduly or inadvertently stifle business. Conversely, governments must demand more of firms that might otherwise view social harms as irrelevant externalities on their paths to quashing competition and maximizing profit.
Government and business leaders have already begun to experiment with new hybrid models to enhance cybersecurity and counter disinformation. The latter challenge, in particular, has proved too complex, too novel, and too big for any single actor to tackle effectively; it requires collaboration between governments and industry.
Shoring Up Cybersecurity: Insurers and Investors
Cybersecurity was not built into the first versions of the internet or the hardware and software that connect people to it. Decades later, many inexpensive Internet of Things devices still lack adequate security. Too often, breaches and losses go unreported and culprits go unpunished. Yet the biggest incidents do make headlines: The NotPetya computer virus caused an estimated $10 billion in worldwide damage and disruption in 2017. In 2019, an Amazon Web Services employee exposed the personal records of more than 100 million Capital One customers. In 2016, the Mirai botnet took over more than 100,000 networked cameras and used them to knock offline Etsy, Spotify, Twitter, and dozens of other popular websites.
Some valuable work has already been done to increase trust in digital products and services. Governmental bodies like the United States’ National Institute of Standards and Technology and the UK’s National Cyber Security Centre have put together frameworks that can help businesses and consumers prevent, detect, and respond to cyber attacks. A number of sectors have seen greater cooperation among companies on improving cyber hygiene and shortening response times. More thorough reporting of incidents has heightened industry leaders’ awareness of the risks of underinvesting in cybersecurity.
Nevertheless, much more can be done to incentivize producers and users to strengthen cybersecurity. Governments can establish clearer and higher guidelines and harsher penalties—particularly for producers that make security claims and then fail to fulfill them. More transparency and more detailed postmortem reports, which reveal the exact cause and extent of an attack, should be the norm. Most recently, the reinsurance firm Swiss Re has called upon commercial organizations to produce reports on their preparedness for cyber attacks.
Unfortunately, some companies will still not act without outside pressure. Mutually reinforcing measures by insurers, investors, and independent policy experts could make a difference.
The need for life insurance at affordable rates motivates individuals to live healthier lives, just as the need for affordable liability insurance motivates businesses to produce safer products. When oil-fired heating furnaces dramatically increased the number of building fires early in the twentieth century, for example, insurers declined to cover unsafe models and promoted safer designs. A category of “highly protected risk” emerged to designate properties so well protected and engineered that they qualified for superior coverage and lower premiums.
Similarly, insurers and reinsurers can peg coverage and rates to companies’ practices. Those that deploy highly rated technology, carefully monitor their supply chains, and rigorously train and test their employees should receive better and cheaper coverage than those that do not. The insurance industry would thereby provide not only coverage but also insights into best practices.
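The pricing logic described above can be sketched as a toy calculation. This is not an actuarial model; the practice names and discount weights below are hypothetical, invented purely to illustrate how coverage costs could be pegged to the three practices the text names.

```python
def cyber_premium(base_premium: float, practices: dict) -> float:
    """Return an adjusted premium: each adopted security practice earns a discount.

    The practice names and discount rates are illustrative assumptions,
    not real underwriting criteria.
    """
    discounts = {
        "highly_rated_technology": 0.10,   # deploys highly rated technology
        "supply_chain_monitoring": 0.08,   # carefully monitors its supply chain
        "employee_training": 0.07,         # rigorously trains and tests employees
    }
    # Sum the discounts for every practice the company actually adopts.
    discount = sum(rate for name, rate in discounts.items() if practices.get(name))
    return round(base_premium * (1 - discount), 2)

# A company adopting all three practices pays less than one adopting none.
strong = cyber_premium(100_000, {"highly_rated_technology": True,
                                 "supply_chain_monitoring": True,
                                 "employee_training": True})
weak = cyber_premium(100_000, {})
```

Under these made-up weights, the diligent company's premium falls by a quarter; the point is only that the price signal, not a regulator, rewards better practice.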
Governments can speed this process by requiring companies to disclose security breaches so that insurers can build the data sets needed to model cyber risk. Because many businesses and insurers operate transnationally, cyber insurance can likely promote security best practices faster, more widely, and more efficiently than national governments.
Investors could promote cybersecurity in similar ways, advancing their own interests and the global good in the process. Companies less vulnerable to cyber theft, ransomware, lost-privacy liabilities, and other cyber risks make for more secure investments.
The world’s five largest asset management funds invest more than $20 trillion in companies, most of which buy and use digital goods and services. Those funds have leverage to lean on. They could consult security experts and insurers to identify the best security standards and user practices and then invest only in companies that apply them. Companies in need of funding would have to report annually on their compliance.
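A minimal sketch of the screening rule described above: a fund invests only in companies that report compliance with every standard its security experts have identified. The standard names here are invented placeholders, not real benchmarks.

```python
# Hypothetical set of expert-identified security standards (names invented).
REQUIRED_STANDARDS = {"mfa_everywhere", "patch_sla", "incident_reporting"}

def investable(company: dict) -> bool:
    """A company qualifies only if it reports compliance with every required standard."""
    return REQUIRED_STANDARDS <= set(company.get("compliant_with", []))

# Annual compliance reports from two hypothetical companies seeking funding.
candidates = [
    {"name": "Acme", "compliant_with": ["mfa_everywhere", "patch_sla", "incident_reporting"]},
    {"name": "LaxCorp", "compliant_with": ["patch_sla"]},
]
portfolio = [c["name"] for c in candidates if investable(c)]
```

Here only the fully compliant company makes the portfolio, mirroring how capital allocation could enforce standards that no single government has mandated.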
The power of capital in this case would promote security on a transnational scale, whether or not individual governments adopted, updated, and enforced adequate standards. Governments could reinforce private sector pressure by having their financial market regulators—the equivalents of the Securities and Exchange Commission in the United States—require publicly owned companies to report on their cybersecurity practices. Experts at think tanks and civil society organizations could invite investors into working groups, encouraging them to adopt a best practices approach and helping refine their policies.
Countering Influence Operations: Platforms and Civil Society
States, their proxies, and other actors increasingly employ influence operations to persuade audiences, often disrupting authentic, fact-based decisionmaking. In most cases, recipients and transmitters of influence operations, including disinformation, do not know they are being manipulated.
While Russia’s interference in the 2016 U.S. presidential election has become the best-known example, there have been many others just in the past few years. Ahead of the UK’s Brexit referendum, the Leave.EU campaign targeted voters using personal insurance data provided by Arron Banks, a financier with links to Russia. Chinese state actors and proxies use online communications to manipulate residents of Hong Kong and citizens of Taiwan. All sorts of state and nonstate actors have taken advantage of the coronavirus pandemic to pursue their interests by fomenting polarization and discord: The self-proclaimed Islamic State issued travel advisories to project legitimacy. Sunni-majority countries in the Middle East seized the opportunity to blame predominantly Shia Iran for the pandemic, exacerbating sectarian tensions in the region.
For years, platforms such as Facebook, Google, and Twitter resisted calls for regulation, but their positions are changing as backlash grows. “I think our role as a company should be that of an educator, helping regulators and legislators understand what’s happening with technology,” Twitter CEO Jack Dorsey said in an April 2019 interview. “The job of a regulator is to ensure protection of the individual and a level playing field. So, as long as we’re working together on that, it has good outcomes.”
Governments in open societies struggle to combat influence operations without infringing on free speech. Moreover, they often lack the data and sophisticated understanding of social media necessary to regulate platforms effectively. Paralyzed by unresolved conflicts and disagreements, some countries—including the United States—fail to take any action at all.
For their part, platforms understandably find it impractical to meet the uncoordinated demands of various national governments. Privacy standards and regulations limit what data they can share, adding an obstacle for publics and government officials concerned about how platforms’ algorithms and policies shape public discourse. The European Union’s General Data Protection Regulation, for example, has placed obligations on companies located outside of the EU—at times setting a standard more stringent than that of a company’s home jurisdiction. Inconsistent content regulations also shape platform behavior. Germany’s Network Enforcement Act requires platforms to remove hateful content that might be visible to German citizens, even if those platforms and their data are based and hosted outside of Germany. This has led Facebook, for example, to filter content availability based on user location. Given that many laws meant to apply to social media companies are intended to be extraterritorial, companies may be forced to silo content based on user location or adhere internationally to the strictest national standard to avoid fragmentation. In some cases, content regulation laws conflict with privacy rules.
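The two compliance strategies described above, siloing content by user location versus applying the strictest national standard everywhere, can be contrasted in a short sketch. The jurisdiction codes and removal rules below are illustrative assumptions, not any platform's actual policy.

```python
# Hypothetical removal rules per jurisdiction (illustrative only).
REMOVAL_RULES = {
    "DE": {"hate_speech"},          # e.g., Germany's Network Enforcement Act
    "EU": {"privacy_violation"},
    "US": set(),                    # no removal mandate in this toy model
}

def visible_in(jurisdiction: str, content_labels: set) -> bool:
    """Geo-siloed model: hide content only where local rules require removal."""
    return not (REMOVAL_RULES.get(jurisdiction, set()) & content_labels)

def visible_globally(content_labels: set) -> bool:
    """Strictest-standard model: apply the union of every jurisdiction's rules."""
    strictest = set().union(*REMOVAL_RULES.values())
    return not (strictest & content_labels)
```

In the geo-siloed model the same post is hidden from German users but shown to American ones; in the strictest-standard model it disappears everywhere, which is exactly the fragmentation-versus-overreach trade-off the paragraph describes.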
Moreover, fragmented regulatory environments can enable illiberal governments to enlist platforms in suppressing legitimate dissent and opposition movements online. In Hungary, Singapore, and Thailand, laws around disinformation have encroached on free speech. Such inconsistencies may pose the greatest challenge for platforms: What are the implications of taking down content in some jurisdictions but not others? What checks and balances exist to deter government abuse, particularly in democracies?
Civil society groups have proposed measures to protect users by promoting, demoting, or removing content. But they will likely need to partner with platforms to reconcile the multiple interests at stake, since it’s the social media companies that have the data and technological knowledge to assess whether and how those policies could be implemented. Outside researchers could then attempt to measure the effectiveness of policies put in place to counter influence operations, in the hope that other platforms replicate the most successful approaches.
Rather than write mandatory regulations that would be difficult to evaluate and update, governments should instead require platforms to report on their implementation of voluntary policies. Government disclosure requirements could add teeth to such policies, enabling civil society groups, consumers, and others to assess how well platforms execute them. The EU’s 2018 Code of Practice on Disinformation offered one starting point, but fostering trust and collaboration among government, industry, academia, and civil society remains challenging.
Bet on Hybrid Governance
Experimentation will be the watchword in developing new forms of public, corporate, and civil society governance, with different technologies and applications requiring different players and approaches. For instance, as machine learning and artificial intelligence are applied in new ways—from education to eldercare to employee screening—the public will demand safeguards against digital discrimination. Biotech firms will have to secure the privacy of people’s genetic and health data as well as the integrity of that data and the algorithms that utilize it. In complex cases like these, no single solution imposed by governments would work—whether because the technologies transcend national boundaries or because the pace of technological change would quickly render a tight regulation obsolete. When faced with a hard problem, trying multiple approaches allows civil society, industry, and governments to refine the ones that work and discard the others.
Competition will complement experimentation in shaping norms, standards, and rules. China’s government and companies will promote one approach, while U.S. and European entities may advance another. In India, businesses, government, and civil society organizations may rally others to join protocols tailored to less-wealthy markets. Facilitators outside government and for-profit industry could expand the benefits of such competition by scoring technologies and public-private governance protocols.
Hybrid experimental approaches won’t produce historic accords or neat treaties. In such a rapidly changing landscape, iteration and adaptation will drive the governance process. It will be messy—like life online—but it is the only realistic way to capture the best of what technological innovation can offer while reducing systemic risks and protecting citizens.