
How Hype Over AI Superintelligence Could Lead Policy Astray

AI’s risks—and its policy solutions—are often more evolutionary than revolutionary.

Published on September 14, 2023

The belief that advances in AI will soon lead to the creation of a sci-fi-esque “superintelligence” that would surpass human abilities across “virtually all domains” has long existed in corners of Silicon Valley. OpenAI advertises that its future AI systems could have “power beyond any technology yet created.” Employees at AI company Anthropic described themselves as “modern-day Robert Oppenheimers.”

These ideas are increasingly gaining prominence in high-level policy discourse. After a meeting with AI company CEOs, British Prime Minister Rishi Sunak referred to the threat of “superintelligent” AI. U.S. President Joe Biden recounted a recent Oval Office meeting that discussed whether AI could “overtake human thinking and planning.” In recent months, superintelligence has featured in congressional hearings and a meeting of the UN Security Council.

Talk of superintelligence has also begun to shape policy action. The UK’s upcoming global summit on AI is notably centered on “safety”—a broad term that is often associated with superintelligence fears—rather than democracy, privacy, labor impacts, human rights, or efforts to ensure AI’s benefits are shared around the globe.

But claims that revolutionary superintelligent AI systems are just around the corner are speculative and technically disputed. As leaders consider steps as dramatic as a new international agency for AI, they should be mindful of the many uncertain assumptions behind superintelligence claims. By focusing on the more evolutionary problems posed by AI, they can avoid diverting resources from more certain AI challenges, adopting risky policy proposals, and prematurely excluding important voices from policy debate.

Superintelligence’s Many Assumptions

The future existence of superintelligence can neither be conclusively proven nor disproven; instead, we rely on assumptions and inferences. This is a familiar problem for policymakers—not entirely unlike predicting whether China will invade Taiwan, for example. With emerging technologies such as AI, however, leaders are far less equipped to evaluate claims made in a media and investment environment that incentivizes hype over level-headed assessment. In particular, leaders don’t seem to fully appreciate that superintelligence theories rest on several questionable assumptions.

First, much of the current sense of urgency around superintelligence stems from extrapolating a recent trend in machine learning: increased data and computing power have delivered increasingly impressive AI systems. Indeed, the success of large language models like ChatGPT has stunned even experts, and incremental improvements are almost certain to continue. But the history of machine learning research is littered with short-lived trends, and nothing guarantees that this one will continue forever. OpenAI CEO Sam Altman’s recent suggestion that training of the company’s next-generation AI model might not begin for “some time” indicates that, at a minimum, new technical breakthroughs are needed before improvements stemming from data and computation can continue apace.

Even if it is technically possible for these trends in AI advancement to continue, they could become increasingly infeasible as computation and high-quality training data become more expensive and difficult to acquire. AI developers who depend on vacuuming up huge amounts of data might exhaust high-quality text available online. Similarly, companies have continually increased access to computing power by striking deals with the world’s largest cloud computing platforms. They’ve even changed their business models to allow them to pour hundreds of millions more dollars into training. But these ever-increasing costs will be hard to sustain, especially if current investor exuberance over generative AI dissipates.

Moreover, the modern paradigm of AI—reproducing statistical relationships found in massive amounts of data—continues to suffer from fundamental flaws. The incredible capabilities of today’s AI systems make them seem convincingly humanlike, but they fabricate basic facts, give bewilderingly unreliable answers to even simple prompts, and lack the ability to cope with unfamiliar events.

Research advances in the current mold of AI development seem likely to incrementally improve reliability. But many computer scientists believe that fundamental breakthroughs in new machine learning paradigms will be needed to build provably reliable systems that better approximate true understanding. These are uncertain endeavors that, even in the face of huge research investments, may never bear fruit.

It’s far from certain that incremental advances in current statistical methods could ever fully replicate the remarkable breadth of human intelligence. The phrase “artificial intelligence” invites us to project humanlike qualities onto algorithms. This can mislead us into thinking that breathtaking performance on some tasks is a prelude to more general humanlike intelligence. But reality continues to confound our understanding of what makes human intelligence unique. Even as recent generative AI systems have stunned experts with their performance on creative tasks, for instance, massive investments into self-driving vehicles have yet to pay off. Qualities like emotional intelligence and leadership, which require multifaceted social capabilities and the ability to quickly adapt to new circumstances, will be harder still to replicate. In ascribing humanlike intelligence to impressive but flawed AI systems, developers risk letting their own marketing hype color their perceptions of how the technology might evolve.

A Better Model for AI

AI doesn’t require superintelligence to pose important policy challenges. Even today’s AI has made it easier to generate disinformation, for example. Advances in AI that fall well short of superintelligence could still widen inequality (among many other problems). In short, the rise of AI will be disruptive, superintelligence or no. Yet such disruptions may not be as revolutionary as many seem to think. What we call AI is arguably a more incremental evolution in a larger, decades-long trend: the growing role of math and data in modern life.

Consider the increasing autonomy of AI systems—something widely seen as a qualitative shift leading toward superintelligence. Dig deeper into the most eye-popping examples, though, and you’ll usually still find humans playing a central role, with AI serving as an enabler. For example, much-hyped recent scientific discoveries supposedly made by AI are actually owed not to autonomous machines but to the ingenuity of scientists who recast longstanding problems into formats that could take advantage of AI advances.

To be sure, recent AI advances seem likely to increase productivity by making tasks like drafting text, finding information, or analyzing data more efficient. But the same could be said of previous digital advances such as word processing, search engines, and social media. While these tools have been incredibly useful (and disruptive), their impacts, like those of recent AI advances, are ultimately orchestrated by humans.

If we view generative AI as another mile marker in the longer journey of society’s digital transformation, seemingly novel challenges look much more familiar. Seemingly new problems about the values embedded in large language models echo unresolved questions about the politics of social media content selection algorithms. Similarly, the controversial use of AI to forecast crime hotspots reprises the well-known flaws of predictive policing techniques. Increasingly complex and opaque algorithms bring new twists, of course. For instance, they make errors and bias harder to recognize and remedy. But the underlying policy challenges—privacy, fairness, and civil rights impacts of the use of math and data—have been churning for decades, if not longer.

Avoiding Pitfalls of a Policy Focus on Superintelligence

As national and international policymakers begin to craft responses to AI advances, there is danger in devoting disproportionate effort to addressing hypothetical superintelligence concerns. To avoid this, leaders should ask three questions.

First, are superintelligence concerns diverting resources from concrete and longstanding AI policy challenges? In theory, policymakers would be able to address both evolutionary risks of AI and hypothetical superintelligence risks. But in reality, limited attention, resources, and political capital often force prioritization.

The voluntary commitments recently negotiated by the White House and leading AI developers illustrate how this might play out. Mirroring lobbying from companies like OpenAI, the commitments apply only to hypothetical “models that are overall more powerful than the current industry frontier.” The commitments largely let companies off the hook for the reliability and economic risks posed by the systems they are currently moving to market, even as many companies lay off the “responsible AI” teams that focused on these issues.

This danger was on vivid display during Altman’s recent congressional testimony. Even as the OpenAI CEO won bipartisan praise for his perceived forthrightness, he neatly avoided difficult questions about the real-world impacts of his company’s products by focusing on speculative impacts of hypothetical AI systems his company might never be able to build. Altman’s proposed licensing scheme was “not for what these models are capable of today.” Questions on privacy and copyright—evolutionary AI concerns of paramount importance to both society and AI developers’ bottom lines—were brushed off.

Second, are poorly understood superintelligence concerns being used to justify policy options with steep costs? Take the high-wire decisionmaking guiding the U.S.-led restrictions on China’s access to AI-related technology like semiconductors. The Biden administration has thus far carefully balanced the benefits of blunting China’s technological prowess with the risks of fanning the flames of great power conflict. But as decisions on the scope of these restrictions continue, a belief in the imminent arrival of a hypothetical superintelligence could tilt the scales in favor of more extreme policy options whose benefits might not justify their weighty consequences.

Or consider proposals to require companies to obtain licenses from the government before even training advanced AI systems. These requirements would make it difficult for open-source developers and small businesses to compete with large players, slowing innovation crucial to scientific progress and U.S. competitiveness. Of course, policymakers routinely balance uncertain costs and benefits. But policy measures that address only speculative superintelligence concerns (and not more evolutionary AI policy challenges) are especially likely to impose steep costs in exchange for minimal benefits.

Third, are superintelligence concerns excluding important perspectives and voices from AI policy debates? The perceived urgency of controlling a hypothetical future superintelligence, for example, positions AI as a national security issue, pushing it to forums such as the National Security Council and G7. These security-oriented forums tend to focus on a narrow set of issues, such as great power competition and the economic security of wealthy countries.

Leaders aren’t wrong to consider security issues raised by AI, but when the most influential decisions about AI are made in less accessible venues, the strongest advocates for evolutionary AI challenges like social and economic impacts are left out. These venues exclude civil society bodies, which have historically had the strongest expertise and advocacy regarding AI’s civil rights impacts. And when conversations become more exclusive, the first groups to be left out are often marginalized people and less powerful countries, reducing attention to topics such as economic inclusion and blunting the United States’ ability to avoid global regulatory fragmentation.

At home, too, public trust in institutions could be further threatened by policy seen as lacking concern for the concrete economic and social harms of AI. (Witness the media coverage of actors’ worries that they will not be compensated when AI is trained on their likeness.) Senate Majority Leader Chuck Schumer warned that failure to carefully address dislocations caused by AI could lead to “political backlash” like that created by globalization. Policy responses driven more by nebulous superintelligence concerns than concrete everyday economic issues would risk exactly that.

Perhaps in the future, evidence will emerge that superintelligence is more than just science fiction. But until it does, policy leaders should ensure that Silicon Valley’s latest hype cycle doesn’t cause collateral damage to policy issues that remain urgent and important no matter how AI evolves.