Policy Analysis: Asilomar AI Principles, EDW 2017

Naina Yadav

Associate Editor

The Indian Learning

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”[1]

The “Asilomar AI Principles” are the result of a conference on “Beneficial AI” organized by the Future of Life Institute in Asilomar, California. The principles largely presume a techno-optimistic conception of the future: on the one hand, they accept AI as an inevitable development; on the other, they remain unsettled and carry the risk, in the absence of socio-economic analysis, that only a few will determine its course. The 23 principles, endorsed by numerous scientists and researchers, amount to a proposal for a voluntary commitment in the development, research, and application of AI. They are by no means definitive and remain open to differing interpretations by various researchers, scientists, and policy-makers. The Asilomar principles address ethical issues related to AI and seek to describe morally grounded best practices for AI research and development, while allowing broad scope for interpretation. They warn of the threats of an AI arms race and of the recursive self-improvement of AI, and they include the need for “shared benefit” and “shared prosperity” from AI.

The Asilomar principles are divided into three components: research issues, ethics and values, and long-term issues.

Research issues cover the research goal, funding, the science–policy link, research culture, and race avoidance. The first principle states: “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” In other words, AI research and development should produce only “useful” AI. The link drawn between “directed” and “beneficial” in this principle is questionable, since the two are divergent categories: something can be undirected yet still useful, while something directed can be harmful and of no use. Hence, the aim of AI research should be both to direct the intelligence and to ensure that it acts sustainably, in an ecological and social sense, wherever it is deployed. In addition to this principle, a sound research policy is needed that would define comprehensively what ethically responsible innovation in the field of AI means. The next principle, which deals with accompanying research, makes it necessary to include questions of irreversibility and risk assessment in research, so as to better understand all present and emerging challenges. A further principle calls for a “constructive and healthy exchange between AI researchers and policy-makers.” The subsequent principles assume the creation of a culture of cooperation, trust, and transparency among AI researchers and developers. Symptomatic of this is the principle that “teams developing AI systems should actively co-operate to avoid corner-cutting on safety standards.” The last research principle concerns race avoidance. Research projects are generally tied to deadlines, which often push developers to cut corners on safety standards in order to reduce time to market; this can become one of the major causes of AI casualties.

The principles surrounding ethics and values begin with safety: developers should concentrate on making systems safe throughout their operational lifetime, wherever applicable and feasible. Next, under the principle of failure transparency, AI developers should devote considerable effort to determining why a particular AI system might malfunction or cause harm; this would help derive better practices for AI. The principle of judicial transparency requires that any involvement of an autonomous system in judicial decision-making rest on reasoning that can be explained and audited by a competent human authority. Along broadly similar lines are the principles of liberty and privacy, shared benefit, shared prosperity, and human control.

The principles surrounding long-term issues include capability caution: one should avoid firm assumptions regarding upper limits on future AI capabilities. The next principle, importance, holds that advanced AI is not merely a disruptive technology but could represent a profound change in history; researchers are advised to make practical plans to manage it with commensurate care and resources. The remaining principles concern risks, recursive self-improvement, and the common good, and should be followed as part of an overall strategy for the development of AI.

The researchers and scientists behind the Asilomar Principles agree that AI will fundamentally change life on Earth and that the creation of strong AI must be anticipated. The principles are an excellent starting point for deliberations on how to develop AI further. In my opinion, however, they require a regulatory framework. Research funding must also support inter- and trans-disciplinary research in law, economics, and related fields, and promote public dissemination of the knowledge gained. These principles should furnish opportunities for people around the world as AI develops over the coming decades and centuries.



©2019 by Indian Society of Artificial Intelligence and Law.