Fumio Shimpo
Fumio Shimpo is a professor in the Faculty of Policy Management at Keio University.

Artificial intelligence (AI) technology presupposes that robots connected to the Internet will come into widespread daily use. Autonomous robots will operate without any form of human control, and they may well bring into being a totally new, and as yet unknown, set of social and legal systems. We must all overcome the disadvantages of utilizing AI as well as embrace its advantages, while considering the societal changes that the spread of AI and a society characterized by human-robot symbiosis will bring.

In the future, a new generation of robots, connected to networks and therefore ubiquitous as well as autonomous, will operate in completely different ways and may cause serious problems for people in their everyday lives. We therefore have to consider carefully all of the serious problems that could arise in order to secure safe procedures for their use in what may be a very different human environment.

AI will be applied to sectors such as manufacturing, distribution, information and communications, business, finance, medicine, and highly automated vehicles, as well as science and technology, with the aim of further improving productivity within the next few years. Such AI is expected to be incorporated into the fabric of our societies. However, because of its highly versatile technological nature, AI’s influence on society is immeasurable at present, and the possibility of AI causing seismic shifts in traditional human values cannot be denied.

When the Internet was in its earliest stages, its developers were not thinking about viruses, malware, hacking, or individuals using it for malicious purposes. As a result, security was an afterthought in its development. We should avoid repeating this mistake with AI and interconnected devices. Policymakers therefore have to recognize the importance of AI-related policy issues now, while AI researchers are still engaged in the first stage of improving AI design. If human beings do not directly control smart autonomous robots (SARs) and AI, accidents could arise from these autonomous systems misbehaving or acting in ways unforeseen and unintended by their creators.

Any consideration of how legal liability will be judged with respect to SARs and AI must include a legal conception of how SARs, as automatons, ought to act under optimal conditions. If this premise is accepted under a future law, then a SAR or AI malfunction that results in negative consequences of any kind creates a gap in legal responsibility. After all, tort liability itself rests on the foreseeability of accidents and on liability for negligence, which are difficult to apply when a SAR or AI acts autonomously rather than with the care and responsibility expected of human beings.

Whether developers and manufacturers should bear the legal responsibility in such cases is a real issue for them. The current legal system is unlikely to be able to resolve the conundrum of who is responsible for accidents caused by so-called runaway SARs or AI, since the absence of a direct human controller and the legal intangibility of the human programming behind the AI complicate the issue.

Furthermore, since AI incorporates deep learning (DL), any new law will consequently support the view that the AI itself is responsible for its judgments and actions. This is because DL assumes that AI can be trusted to learn and thus to act responsibly, as human beings do, when making final judgments based on the handling of “accumulated data,” thereby allaying fears of AI becoming an unrestrainable runaway system. Of course, this is the theory, and there are many problems still to consider and solve, such as who will take legal responsibility when theory does not match reality.

In light of the societal and legal issues raised by AI, the Japanese government announced a “New Robot Strategy” in 2015, which has strengthened collaboration in this new and vital area among industry, government, and academia.1 The Japanese government also wants to extend its strength in robot manufacturing into the service sector. As for the research and development of AI, the Ministry of Internal Affairs and Communications of Japan issued “The Conference on Networking among AIs Report (2016): Impacts and Risks of AI Networking―Issues for the Realisation of Wisdom Network Society (WINS).”2 This report was the first systematic review of AI networking issues in Japan. Drawing on the OECD guidelines governing privacy and security, it is now necessary to begin discussions toward formulating international guidelines and principles to govern the research and development of AI.

My main concern is to open a discussion of the legal issues that will underpin a future society characterized by human-robot symbiosis. When SARs are introduced into society, both government and industry will be required to ensure that future robot usage conforms to existing and future laws and regulations related to AI. Where such laws and regulations prove inappropriate in light of socioeconomic circumstances, they will need to be streamlined through amendments to existing laws and the establishment of new ones. I firmly believe that it is possible for us all to provide the legal framework necessary to underpin this new social system.

Finally, I would like to address the necessity of developing robot law—including issues related to AI and the Internet of Things—through an academic research lens. Ultimately, I would like to clarify and analyze the various problems related to AI’s specific technological developments and then conduct research in various connected interdisciplinary fields such as law, economics, and ethics.

Biography

Dr. Fumio Shimpo is a professor in the Faculty of Policy Management at Keio University. As a specialist in privacy and information security, he has served as a committee member of several councils within the Government of Japan; as a director of the Constitutional Law Society of Japan; as a director of the Law and Computer Society of Japan; and as a senior research fellow at the Institute for Information and Communications Policy of the Ministry of Internal Affairs and Communications. He was a vice-chair and the Japanese delegate to the OECD Working Party on Security and Privacy in the Digital Economy (SPDE) from 2009 to 2016.

Notes

1 Headquarters for Japan’s Economic Revitalization, “New Robot Strategy,” Japanese Ministry of Economy, Trade and Industry, October 2, 2015, http://www.meti.go.jp/english/press/2015/pdf/0123_01b.pdf.

2 Telecommunications Research Laboratory, “AI network kentōkai kaigi hōkokusho 2016: AI network no eikyō to risk—chiren shakai (WINS) no jitsugen ni muketa kadai” (AIネットワーク化検討会議 報告書2016 の公表-「AIネットワーク化の影響とリスク -智連社会(WINS(ウインズ))の実現に向けた課題-」) [The conference on networking among AIs report (2016): Impacts and risks of AI networking―issues for the realization of Wisdom Network Society (WINS)], Japanese Ministry of Internal Affairs and Communications, June 20, 2016, http://www.soumu.go.jp/menu_news/s-news/01iicp01_02000050.html.