Susumu Hirano
Susumu Hirano is dean of the Graduate Institute of Policy Studies at Chuo University.

Introduction

Slide 1

Ladies and gentlemen, good afternoon. First of all, please allow me to express my appreciation for this honorable opportunity to speak on AI guiding principles and related matters at the Carnegie Endowment for International Peace in Washington, DC.

Slide 2

I am Susumu Hirano, dean and professor at Chuo University in Tokyo, Japan.

Slide 3

The university’s name, Chuo, means “center” or “central,” but the main campus of the university is actually located some distance from the center of Tokyo’s urban district. As a result, in winter the campus is sometimes covered in snow.

Slide 4

But in spring the campus becomes beautiful, as cherry blossoms cover it entirely. So if you have an opportunity to visit Tokyo, especially in springtime, please visit our campus.

Slide 5

By the way, for approximately one year I have participated in discussions at the Ministry of Internal Affairs and Communications of Japan (MIC) on AI’s impacts and possible countermeasures against AI’s risks. So, this afternoon, I would like to give you a summary of those discussions. And in the interest of maximizing the benefits and minimizing the risks of AI, I hope that you will agree that some kind of soft law or guiding principles would be desirable.

What You Can Do Is Different From What You May Do

Slide 6

Before I talk about the summary of MIC’s discussions, I first wish to clear up a misunderstanding about what we are doing now.

Slide 7

Currently, I preside over discussions on the so-called AI R&D Principles as the chairperson of a subcommittee under the Conference Toward AI Network Society, where I also serve as a core member. These discussions aim to collect opinions from major stakeholders, such as industry, academia, AI engineers, and users.

Slide 8

We give these stakeholders opportunities to voice their opinions at meetings periodically held in Kasumigaseki, Tokyo, where the MIC headquarters is located, as well as through a public comment procedure. At these meetings, AI engineers and representatives of leading global corporations often insist that MIC “should not” prevent the development of AI. This seems to stem from their concern that possible future laws and regulations might strictly and unreasonably regulate their AI development activities. And this concern is the misunderstanding that I would like to clear up now. What we are doing is not preventing AI development. That is absolutely not our intent.

Slide 6

This misunderstanding might have arisen from the distrust that engineers stereotypically harbor toward lawyers. As a popular Japanese saying puts it: “Engineers make the pie grow larger; lawyers only decide how to carve it up.”1

Whether or not the saying is true, what we are doing is not carving up the pie. Rather, we are helping the pie grow larger. In other words, what we are doing, and what we have done so far, is to enable AI development. What we wish is to let AI be developed in a manner that minimizes the risks. We believe fundamentally that the development of AI will benefit world society. But at the same time, we understand that AI poses some problems, including, but not limited to, uncontrollability, unpredictability, and opacity.

Slide 9

For example, many say that AI’s decisionmaking would sometimes be uncontrollable by, or beyond the expectations of, developers or manufacturers, due to “emergence” or the autonomous nature of AI.2

Also, many people say that it is difficult to trace the reason why an AI made a certain decision, because the decisionmaking process is too complicated to understand.

In addition, many people are concerned about a future society that would be transformed into something totally different from present-day society.3 For example, many jobs are expected to be replaced by AI, or by machines equipped with AI, which could cause mass unemployment. Thus, it is said that we will need to reconsider the distribution of wealth.

Also, in my field of expertise, torts and products liability, some scholars argue that the so-called responsibility vacuum,4 which I call the liability gap, would occur because accidents caused by AI’s decisions would be unforeseeable even for its developers, manufacturers, or service providers.5 Foreseeability is an indispensable element to prove negligence and proximate causation in torts.6 Therefore, as a general rule, if a plaintiff is unable to prove it, then he or she would lose the case.

In addition, some scholars even argue that the so-called Singularity would occur in or around the year 2045.7 They predict that a so-called Strong AI will emerge and take over not only employment but also human beings themselves.8 And nobody will be able to stop the self-development of AI at or after the Singularity. Given these concerns about AI, I believe that corporations and engineers will not be able to keep on developing AI, unless people’s concerns are reduced substantially—in other words, unless people around the world are willing to accept AI.9

To help you understand what I am talking about, let me introduce a statement made by a representative of a leading automotive manufacturer in Japan. As far as I recall, he said roughly the following: Social acceptability is very important. Unless people accept autonomous vehicles, we cannot disseminate them.

I think the same applies to AI. In order for corporations and AI engineers to keep developing AI, they first have to persuade people to accept AI by explaining, for example, that AI would not become unreasonably dangerous because its development is carried out under some sort of reasonable norms that could prevent a disaster for human beings. And I believe that our discussions under Japan’s MIC will contribute to persuading the people of the world to accept AI.

Studies on AI in Japan

Slide 10

Now, let me talk about the big picture in Japan’s policy toward AI.

Slide 11

Please look at the upper right-hand corner. This shows the Public-Private Council for the 4th Industrial Revolution. Under the prime minister and his office, the Council for Strategy on AI Technology has been established. The council consists of three major ministries and other public organizations.

Slide 12

The Cabinet Office and MIC study AI’s various social, ethical, and legal issues, with the Cabinet Office in particular taking a viewpoint centered on technology and innovation and MIC approaching from an AI networking perspective.

Overall History of MIC’s Studies

Slide 13

Slide 13 shows the history of MIC’s studies on AI’s social and economic impacts. These studies started in early 2015 with a study group comprising a small number of members.10

Slide 13

In early 2016, MIC convened a larger number of scholars and some AI-related legal practitioners under the name of the Conference on Networking among AIs. I will call this conference the First Conference for the sake of convenience.

Slide 15

As you can see, I took the role of acting chairperson in this conference.

Slide 13

The conference issued an interim report that included tentative AI R&D guidelines. I will explain the guidelines themselves later.

Slide 14

The purpose of the First Conference was as follows:

  • To predict the future society that we should aim for;
  • to evaluate AI’s social and economic impacts and risks; and
  • to list future challenges in a concrete way.

Slide 13

At the end of April last year, the G7 ICT Ministers’ Meeting was held in Japan. At the meeting, Mrs. Sanae Takaichi, Minister of Internal Affairs and Communications of Japan, proposed starting international discussions toward establishing AI R&D guidelines, distributing a tentative draft for discussion based on the achievements of the First Conference. I have heard that the member states agreed to her proposal.

Slide 13

Since last October, the First Conference has been reorganized and expanded, encompassing more members not only from academia but also from industry, including leading electronics corporations, computer-software corporations, and a megabank. The name of the reorganized conference, or the Second Conference, as I will call it herein for convenience, is the Conference toward AI Network Society, where I serve as a core member. The Second Conference has two subcommittees.

Slide 7

The Second Conference and its two subcommittees are still active. At the end of December, that is to say two weeks ago, the Second Conference published several important issues that could be used to formulate AI R&D guidelines—including the newly added Ninth Principle of Linkage—in order to receive comments from the public.

The Concept of AI Network Systems and AI Networking

Slide 14

Slide 14 and several of the following slides show the outline of the First Conference, or the Conference on Networking among AIs. First, let me explain the term “AI network systems.” The term “AI network systems” means information and communications network systems that include AI as a component. Secondly, the term “AI networking” means the establishment and advancement of AI network systems.

Slide 16

On Slide 16, entitled “Stages in Progress of the AI Networking,” the first paragraph indicates that in the first stage, stand-alone AIs would be used. The second paragraph indicates that in the second stage, AIs would become connected to one another. For example, autonomous vehicles with AIs would be connected to one another using so-called vehicle-to-vehicle (V2V) communications, or could obtain information from AIs located in infrastructure using so-called vehicle-to-infrastructure (V2I) communications. Meanwhile, Slide 17 indicates that in this second stage, AIs would be located in various layers of networks.

Slide 18

In the third stage, human beings would be connected to the AI network systems. In this stage, human abilities would be enhanced. For example, the capabilities of the five human senses could be enhanced by connecting with artificial sensors, while human bodies could be enhanced by using robotics or wearable robot suits. And in the fourth stage, the First Conference envisioned, a society of coexistence between human beings and AI network systems would be desirable. In this fourth stage, AI network systems would work as capable concierges and help human beings, while the latter would appreciate the benefits of the systems.

Slide 21

This shows the society that we should aim for. In summary, the First Conference predicted that the future society would become more creative and dynamic, due to synergistic effects among, inter alia, data, information, knowledge, people, things, and activities. As the first paragraph indicates, the First Conference referred to this future society as WINS, an abbreviation of the Wisdom Network Society.

Slide 22

The First Conference thought that the criteria of WINS are as follows:

(1) Everyone can enjoy the benefits of AI Network Systems;
(2) Dignity of humans and autonomy of individuals should be respected;
(3) Innovative research and development, as well as fair competition, should be maintained;
(4) Controllability and transparency should be maintained;
(5) Participation of stakeholders should be maintained;
(6) Harmony of physical space and cyberspace should be aimed for;
(7) Regional communities should become more vibrant by breaking through spatial barriers and cooperating beyond them; and
(8) Resolution of global issues under distributed cooperation should be aimed for.

Social Impacts Caused by AI Networking

Slide 23

The First Conference also published the results of its analysis of the social and economic impacts caused by AI networking. However, because of time limitations, I will omit my explanation of this matter. Slides 23 and 24 show examples of our study results.

Risks Caused by AI Networking

Slide 25

Now, let me speak about the risks caused by AI networking. The First Conference classified these risks into two main contexts:

  • First, the risks associated with functions; and
  • Second, risks related to legal systems, rights, or interests.

Of course, some risks may belong to both contexts. For example, an autonomous vehicle, which uses AI as a component, might become uncontrollable (a functional risk), and at the same time could cause an accident (a legal risk) due to its uncontrollability.

The First Conference proposed to consider specific scenarios or hypothetical cases, in order to tackle promptly and flexibly the risks which might occur in the future.

Slide 26

Examples of the risks associated with functions that the First Conference considered are listed here. Again, due to time limitations, let me explain only one of them:

  • Opacification (lack of transparency) risks. For example, the complex nature of an AI network system might make it a black box, making it difficult to trace why a certain decision was made. “Complex nature” means that the system becomes complicated because multiple AIs, located in various layers, are interconnected with one another.

Slide 27

As for the risks related to legal systems, rights, or interests, the First Conference proposed consideration of such risks as listed in Slide 27. For example:

  • Privacy risks. For example, AI network systems might compile an inaccurate profile of an individual, with the result that the individual might be treated in an unreasonable, discriminatory manner.

Future Challenges

Slides 28 and 29

The First Conference published the future challenges to which we should pay careful attention. Due to time limitations, I will focus on the AI R&D guidelines.

AI R&D Guidelines

Slide 30

The First Conference considered it necessary to begin discussions on formulating international guidelines. And the First Conference adopted the tentative AI R&D Principles, which had been published in the conference’s interim report in April last year, as I mentioned before.

Slide 31

The First Conference created the guidelines based upon the fundamental ideals listed in Slide 31. As you will recognize, these ideals were explained before as the principles of the Wisdom Network Society, or WINS, in Slide 22.

Slides 32 and 33

And the guidelines comprised the eight principles listed in Slides 32 and 33.

As for the First Principle of Transparency, it requires the ability to explain and verify AI network systems’ operations. In other words, decisions made by AI network systems should be traceable.

The Second Principle of User Assistance requires that AI network systems should assist users and, at the same time, give users reasonable opportunities to make intelligent decisions through the use of the so-called “nudge” or other means.

The Third Principle of Controllability requires a human’s ability to control AI network systems.

The Fourth Principle of Security requires that AI network systems be dependable and robust.

Slide 33

The Fifth Principle of Safety requires that safety be taken into consideration so that AI network systems would not harm lives or bodies of users or third parties.

The Sixth Principle of Privacy requires that privacy be taken into consideration so that AI network systems would not infringe the privacy rights of users or third parties.

The Seventh Principle of Ethics requires that human dignity and individual autonomy be respected in the R&D of AI network systems, especially when human brains and AIs are connected through, for example, a brain-machine interface.

The Eighth Principle of Accountability requires that researchers and developers of AI network systems be accountable to users and other stakeholders. For example, the researchers and developers should explain and disclose relevant information. Also, the researchers and developers should maintain adequate communications with stakeholders.

G7 ICT Ministers’ Meeting Last April

Slides 34 and 35

The G7 ICT Ministers’ Meeting was held last April in Japan. At the meeting, Mrs. Sanae Takaichi, Minister of Internal Affairs and Communications of Japan, proposed starting international discussions toward establishing the AI R&D guidelines by distributing material on a tentative draft for discussion, consisting of eight principles based on the achievements of the First Conference. Member states agreed to her proposal. (Please look at Slide 35.)

OECD Technology Foresight Forum

Slides 36 and 37

Slide 36 depicts the OECD’s building in Paris and the Technology Foresight Forum held there on November 17 last year. At the forum, my colleague Professor Kurosaka and I made a presentation regarding the AI R&D guidelines and the social and economic impacts of AI.11 The OECD’s head of digital economy wrapped up the forum and suggested that she would consider Japan’s proposal regarding the AI R&D guidelines.

Second Conference and Future Events

Slides 38 to 39

Slide 38 depicts a notice for the first meeting of the Second Conference, or the Conference toward AI Network Society, and also the official website of the Second Conference. The structure of the Second Conference is in Slide 39.

MIC assembled the Second Conference to contribute to the OECD’s discussions on and consideration of the social, economic, ethical, and legal issues caused by AI networking.

Slide 40

The purpose of the Second Conference is two-fold:

  • First, to draft the AI R&D guidelines for OECD’s and international discussions; and
  • Second, to analyze in detail the social and economic impacts and risks caused by AI networking.

Slides 41 to 42

The Second Conference and MIC will hold an international symposium in Tokyo in March of this year. The purpose of the symposium is to accelerate discussions on and formulation of AI R&D guidelines in the world’s communities.

Slide 42

At the symposium, the Second Conference plans to introduce the progress of the study in Japan.

Slide 43

During the meetings held at the Second Conference, we have found that an additional principle, the Ninth Principle of Linkage, should be added as shown in this slide. That is:

Smooth interconnection or interoperation between AIs or AI Network Systems should be ensured.

Slide 44

In addition, the important issues related to formulating AI R&D guidelines, including the Ninth Principle of Linkage, have been published for comments from the public.

A Hypothetical Case to Which the Guidelines Would Apply

Slides 45 to 47

Slides 45 to 47 show a hypothetical case named the Bridge, or the Bridge Problem.12 This is a variant of a thought experiment called the Trolley Problem.13 I borrow the Bridge Problem in order to explain the importance of the guiding principles. The Bridge Problem is as follows:

Slide 46

Suppose that a school bus suddenly enters the lane directly in front of an autonomous vehicle because, for example, the driver of the school bus suffers a heart attack.

And suppose that the autonomous vehicle has only two options:

(1) to continue straight on and kill thirty children and one driver in the school bus; or
(2) to turn right suddenly and kill the occupant of the autonomous vehicle.

According to some scholars who discuss the Bridge Problem or similar hypotheticals in the context of autonomous vehicles, manufacturers might prefer option 1 over option 2, because their vehicles would not sell well if they chose option 2.14 By contrast, utilitarians might prefer option 2, because it would minimize the casualties of the accident.15

Slide 48

Now let’s apply some of the guidelines to this Bridge Problem. For example, without any guideline, it is said that manufacturers might manipulate (or pre-program) the AI in a covert manner so that the autonomous vehicle would always choose to protect its occupant by sacrificing the thirty children in the bus. In addition, this manipulation would not be easily discovered, because of the opaque and complex nature of AI. Therefore, this kind of manipulation could continue in a hidden manner.16 However, such hidden manipulation seems to breach the First Principle of Transparency. Please recall that this transparency principle requires the ability to explain and verify an AI network system’s operations. In other words, it requires the traceability of decisions made by AI. Applying this transparency principle, we can say that manufacturers’ hidden manipulation of AI is not acceptable.
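
To make this concrete, here is a minimal, hypothetical sketch in Python. It is not taken from the guidelines or from any manufacturer’s actual system; the names, and the casualty-minimizing rule in the traceable version, are my own illustrative assumptions. The point is simply that a policy whose rule and inputs are logged can be explained and verified afterward, while a covert one cannot.

```python
# Hypothetical sketch only: contrasts a covert, hard-coded crash policy
# with a traceable one. Names and the decision rule are assumptions,
# not anything prescribed by the AI R&D guidelines.

from enum import Enum
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crash-policy")

class Option(Enum):
    STRAIGHT = 1  # continue straight: sacrifices the bus occupants
    SWERVE = 2    # swerve: sacrifices the vehicle's own occupant

def choose_covert() -> Option:
    # Hidden manipulation: always protect the occupant, recording nothing.
    # The choice can be neither explained nor verified afterward, which is
    # what the transparency principle objects to.
    return Option.STRAIGHT

def choose_traceable(straight_casualties: int, swerve_casualties: int) -> Option:
    # Traceable alternative: the decision rule and its inputs are logged,
    # so the decision can be explained and verified later.
    choice = (Option.STRAIGHT if straight_casualties < swerve_casualties
              else Option.SWERVE)
    log.info("rule=minimize-casualties straight=%d swerve=%d -> %s",
             straight_casualties, swerve_casualties, choice.name)
    return choice

print(choose_traceable(straight_casualties=31, swerve_casualties=1))
```

The log line is what makes the difference: with it, an investigator can reconstruct which rule was applied and why; without it, the hidden manipulation described above stays hidden.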

Slide 49

And please recall the Second Principle of User Assistance, which requires that AI network systems assist users and provide them with reasonable opportunities to make intelligent decisions. If manufacturers pre-program the AI in autonomous vehicles so that it would always choose option 1 rather than option 2, without taking the users’ intent into consideration, then that design might go against the Second Principle of User Assistance. This is because some users might prefer option 2 over option 1.
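
In the same illustrative spirit, here is a sketch of what honoring user intent might look like. Again, this is a hypothetical of my own rather than anything the guidelines prescribe; the ethics_setting parameter and its values are invented purely for illustration.

```python
# Hypothetical sketch only: the crash policy honors a preference that the
# user deliberately set in advance (for example, during vehicle setup)
# instead of a hidden factory default. All names are assumptions.

from enum import Enum

class Option(Enum):
    STRAIGHT = 1  # protect the occupant (option 1)
    SWERVE = 2    # minimize total casualties (option 2)

def choose(ethics_setting: str) -> Option:
    # An explicit, informed user choice, rather than option 1 silently
    # imposed on every user by the manufacturer.
    if ethics_setting == "protect_occupant":
        return Option.STRAIGHT
    if ethics_setting == "minimize_casualties":
        return Option.SWERVE
    raise ValueError(f"unknown ethics setting: {ethics_setting!r}")

print(choose("minimize_casualties"))  # -> Option.SWERVE
```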

Slide 46

The Third Principle of Controllability and Fourth Principle of Security would seem to be irrelevant to the Bridge Problem. As for the Fifth Principle of Safety, I will talk about it later at the end, because it contains a profound issue to be considered. The Sixth Principle of Privacy would seem to be irrelevant, too.

Slide 50

As for the Seventh Principle of Ethics, it requires that human dignity and individual autonomy be respected in the R&D of AI. Applying this principle, we have to consider whether option 1, rather than option 2, is really ethical. To tell the truth, I myself cannot find a single correct answer to this morally difficult question.17 Even moral philosophers may be unable to reach a consensus. But a design choice by researchers, developers, or manufacturers would not be socially acceptable if it totally ignored the moral perspective.

Slide 51

The Eighth Principle of Accountability seems to be applicable to the Bridge Problem. The principle requires that researchers and developers of AI network systems be accountable to users and other stakeholders; for example, they should explain and disclose relevant information and maintain adequate communications with stakeholders. Any decision by manufacturers to choose option 1 without disclosing information, to say nothing of explaining it, would be a plain breach of this requirement. In addition, manufacturers should consider whether choosing option 1, without disclosing or explaining that choice to the parents or families of the thirty children in the school bus, would be seen as a failure of accountability.

Slide 52

Finally, I would like to apply the Fifth Principle of Safety to the Bridge Problem. This principle requires that safety be taken into consideration so that AI network systems would not harm the lives or bodies of users or third parties. Theoretically, this principle might seem plausible, because it is naturally everyone’s wish that a product work safely. But by applying this Principle of Safety to the Bridge Problem, we come to realize that it does not work well. First, option 1 requires the sacrifice of third parties’ lives, that is, the lives of the thirty children and the bus driver. Second, option 2 requires the sacrifice of the life of the user. Therefore, in either case, the manufacturer is forced to breach this principle of safety. It is a true dilemma. And interestingly, this dilemma is very similar to the one found in Isaac Asimov’s 1942 short story “Runaround,” collected in I, Robot.

Conclusion

Slide 53

  • It seems that people around the world are somewhat worried about AI’s development and usage; they want some assurance that the world will not become worse off.
  • But strict and inflexible norms or “hard law,” such as statutes and regulations, would not be suitable, especially in a newly developing area like AI, because such hard law might prevent beneficial development.
  • Thus, some kind of guiding principles or “soft law” could provide the assurance people need to accept the development and usage of AI.
  • I hope that our AI R&D guidelines will contribute to the world’s discussions on building such soft law, which would promote the healthy development of AI.

Slide 54

Thank you for your attention!

Susumu Hirano is dean of the Graduate Institute of Policy Studies at Chuo University. He specializes in civil jurisprudence and conducts research on product liability, cyberspace law, robot law, and American law.

Notes

1 Derek C. Bok, “A Flawed System of Law Practice and Training,” Journal of Legal Education 33, no. 4 (December 1983): 570–585.

2 As for the terms “emergence” and “autonomy,” see, for example, Ryan Calo, “Robotics and the Lessons of Cyberlaw,” California Law Review 103, no. 3 (2015).

3 Robotics, which is a cousin of AI, is said to be “a socially and economically transformative technology. [And its] widespread deployment everywhere . . . requires rethinking a wide variety of philosophical and public policy issues . . . .” A. Michael Froomkin, “Introduction,” in Robot Law, eds. Ryan Calo, A. Michael Froomkin, and Ian Kerr (Cheltenham, UK: Edward Elgar Publishing, 2016).

4 The term “responsibility vacuum” is often found in articles on criminal liability. See, for example, Sabine Gless, Emily Silverman, and Thomas Weigend, “If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability,” New Criminal Law Review 19, no. 3 (Summer 2016): 412–436. But a similar problem is found in the context of civil liability. See, for example, Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law and Technology 29, no. 2 (Spring 2016): 353, 366.

5 Scherer, “Regulating Artificial Intelligence Systems,” 365–366.

6 See Curtis E. A. Karnow, “The Application of Traditional Tort Theory to Embodied Machine Intelligence,” in Robot Law, eds. Calo, Froomkin, and Kerr, 51–77.

7 Ray Kurzweil, The Singularity Is Near (New York: Penguin Group, 2006).

8 As for the term “Strong AI,” see, for example, John O. McGinnis, “Accelerating AI,” Northwestern University Law Review 104, no. 3 (2011).

9 It is said that, generally speaking, the American people have negative feelings about robots and AI (such as a Terminator-like dystopian image) while the Japanese people have positive ones (such as an Astro Boy-like hero image), due to cultural and religious differences, though I do not have time to address this issue. See P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (New York: Penguin Group, 2009), 164–168.

10 “Study Group Concerning the Vision of the Future Society Brought by Accelerated Advancement of Intelligence in ICT,” Ministry of Internal Affairs and Communications, June 30, 2015.

11 “Technology Foresight Forum 2016 on Artificial Intelligence (AI),” Organization for Economic Cooperation and Development, November 17, 2016, http://www.oecd.org/sti/ieconomy/technology-foresight-forum-2016.htm.

12 While many scholars mention the Bridge Problem only briefly, for an article that analyzes the problem in practical terms, see Noah J. Goodall, “Ethical Decision Making During Automated Vehicle Crashes,” Transportation Research Record 2,424 (2014).

13 Philippa Foot is said to be the originator of this thought experiment. But the following article is especially famous among lawyers: Judith Jarvis Thomson, “The Trolley Problem,” Yale Law Journal 94, no. 6 (May 1985): 1,395–1,415.

14 Goodall, “Ethical Decision Making,” 63.

15 See, for example, Lauren Cassani Davis, “Would You Pull the Trolley Switch? Does It Matter? The Lifespan of a Thought Experiment,” Atlantic, October 9, 2015, https://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732.

16 Goodall, “Ethical Decision Making,” 63.

17 Actually, many users might prefer option 1 to option 2. See, for example, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles,” Science 352 (June 24, 2016): 1,573–1,576.