
Artificial Intelligence Needs Legal Regulation

Updated: Dec 30, 2020


Artificial Intelligence is increasingly present in our lives, reflecting a growing propensity to turn to algorithms for advice, and even for decisions altogether. AI is the capability exhibited by machines (smartphones, tablets, laptops, drones, self-driving vehicles, robots, facial recognition systems, military systems, online tools, etc.) that can take on tasks ranging from household support and welfare to policing and defence. The role of AI in facilitating discrimination is well documented and is one of the key issues in today's ethics debate.

Artificial Intelligence is a tool that humanity is wielding with increasing recklessness. Even where it serves our common good, questions about codes of ethics, laws, government accountability, corporate transparency, and the capacity for monitoring remain unanswered. AI regulation is not just a complex environment; it is uncharted territory, spanning everything from human leadership to the emergence of machine learning, automation, and robotic manufacturing. AI is largely seen as a commercial tool, but it is quickly becoming an ethical dilemma across the vast expanse of the Internet.

Can International Human Rights law, or any law, help guide and govern the emerging technology of Artificial Intelligence? The Institute of Electrical and Electronics Engineers (IEEE), the largest organization of technical professionals, states in its latest report on ethically aligned design for Artificial Intelligence, as its first principle, that AI should not infringe upon International Human Rights. Last year, human rights investigators from the United Nations found that Facebook had exacerbated the circulation of hate speech and incitement to violence in Myanmar. At the second annual AI for Good Summit of the UN's International Telecommunication Union in Geneva, participants opined that for AI to benefit the common good, it should avoid harm to fundamental human values, and that International Human Rights provide a robust and global formulation of these values.

Artificial Intelligence can drive global GDP and productivity, but it will surely carry a social cost. The rising ubiquity of AI appears to coincide with accelerating wealth inequality that is disrupting the business world. AI creators are not employing best practices or effective management. When it comes to public trust, global institutions to protect humanity from the potential dangers of machine learning are noticeably absent. Ironically, regulating AI may not be achievable without better AI. Special emphasis must be laid on the prospect of treating AI as an autonomous legal personality, a separate subject of law and control.


Artificial Intelligence built on algorithmic decision-making is largely digital and employs statistical methods. Earlier algorithms were pre-programmed and unchanging. Modern systems raise new challenges, yet they are as prone to biased data and measurement error as their deterministic predecessors. Consider, too, the impact of error rates. US Customs and Border Protection photographs every person entering and exiting the US and cross-references the images against a database of photos of known criminals and terrorists. In 2018 alone, approximately 8 crore (80 million) people arrived in the US; even if the facial recognition system is 99% accurate, a 1% error rate would result in 8 lakh (800,000) people being misidentified. What would the impact be on their lives? Conversely, how many known criminals would slip through? Aggregated across all countries and years, the number would be far larger still.
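The error-rate arithmetic above can be sketched in a few lines. The traveller figure is this article's rough estimate, not an official statistic:

```python
# Illustrative error-rate arithmetic for a screening system.
# The traveller count is the article's rough estimate, not official data.

def misidentified(travellers: int, accuracy: float) -> int:
    """People misidentified given an overall accuracy rate."""
    error_rate = 1.0 - accuracy
    return round(travellers * error_rate)

travellers_2018 = 80_000_000  # ~8 crore arrivals (article's estimate)
print(misidentified(travellers_2018, 0.99))  # 800000
```

Even a seemingly high 99% accuracy leaves hundreds of thousands of false matches at this scale, which is the article's point.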

There are many documented cases of AI gone wrong in the criminal justice system. Machine-learning risk scoring of defendants is advertised as removing the known human bias of judges in sentencing and bail decisions. Predictive policing seeks to allocate often-limited police resources to prevent crime, though there is always a high risk of mission creep. Because AI can process and analyze multiple data streams in real time, it is no surprise that it is already being used to enable mass surveillance around the world. The most pervasive and dangerous example is facial recognition software, which is used not just to scrutinize and identify, but also to target and discriminate. AI can be used to create and disseminate targeted propaganda. Machine learning powers the data analysis that social media companies use to build profiles of users for targeted advertising, and it is also capable of creating realistic-sounding video and audio recordings of real people. Hiring processes have long been fraught with bias and discrimination. Algorithms have long been used to create credit scores and inform loan screening.

AI has impacted many human rights. The risks posed by AI's ability to track and analyze our digital lives are compounded by the sheer amount of data we produce today as we use the Internet. With the increased use of Internet of Things devices and the shift toward smart cities, people will soon be leaving a trail of data for nearly every aspect of their lives. In such a world there are not only huge risks to privacy; the situation raises the question of whether data protection will even be possible. GPS mapping apps risk violating freedom of movement. A looming direct threat to free expression is bot-enabled online harassment. Just as people can use AI-powered technology to facilitate the spread of disinformation or influence public debate, they can use it to create and propagate content designed to incite war, discrimination, hostility, terrorism, or violence. If automation shifts the labour market significantly, it may lead to a rise in unemployment. There is a danger that health insurance providers could use AI to profile people based on certain behaviours and histories. AI-powered DNA and genetic testing could be used in efforts to produce children with only desired qualities. And if AI is used to track and predict students' performance in a way that limits their eligibility to study certain subjects or to access certain educational opportunities, the right to education will be put at risk.

Human rights law cannot address all present and unforeseen concerns pertaining to AI. Technology companies and researchers should conduct Human Rights Impact Assessments throughout the life cycle of their AI systems. Governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights in national AI policies. Social institutions, professionals across sectors, and individuals should work together to operationalize human rights in all areas. UN leadership should also assume a central role in international technology forums by promoting shared global values based on fundamental rights and human dignity. Private-sector actors must identify potential adverse outcomes for human rights and assess the risk that an AI system may cause human rights violations. Taking effective action to prevent and mitigate harms, and tracking responses, is also necessary, as are providing transparency to the maximum extent and establishing appropriate mechanisms for accountability and remedy.

Microsoft completed the first Human Rights Impact Assessment on AI by a major tech company. Such an assessment is a methodology for the business sector used to examine the impact of a product or action from the viewpoint of the rights holders, whether they be direct consumers or external stakeholders. In April 2018, around four thousand Google employees sent a letter to their CEO demanding that the company end its contract to participate in Project Maven, an AI development project with the US Department of Defense (biased and weaponized AI). While some major international human rights organizations are starting to focus on AI, additional attention is needed from civil society on potential risks and harms. Dozens of countries have initiated national strategies on AI, yet human rights are not central to many of these efforts, in contrast to the European Union's General Data Protection Regulation, Global Affairs Canada's Digital Inclusion Lab, the Australian Human Rights Commission's project, a law passed by New York City, etc.

The UN has yet to sustain a focus on AI from a rights perspective, with some notable exceptions, particularly from UN independent investigators and special rapporteurs and from the Secretary-General's strategy on new technology. In September 2018, the UN Secretary-General released a strategy on new technologies that seeks to align the use of technologies like AI with the global values found in the UN Charter, the Universal Declaration of Human Rights, and international law. Intergovernmental organizations may also play an influential role, including the Organisation for Economic Co-operation and Development, which is preparing AI guidance for its 36 member countries. More work can be done to bridge academics in human rights law, social science, computer science, philosophy, and other disciplines, in order to connect research on the social impact of AI, norms and ethics, technical development, and policy.

Isaac Asimov's three laws of robotics are important touchstones in the development of laws for artificial intelligence. Adapted to AI, they read as follows:

  1. AI may not injure a human being or, through inaction, allow a human being to come to harm.

  2. AI must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although these laws were devised to motivate Asimov's short stories and novels, they have also influenced theories on the ethics of AI.
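The three laws form a strict priority ordering, which can be sketched as a toy rule evaluator. This is purely a hypothetical illustration of the ordering, not a real safety mechanism; the `Action` type and its fields are invented for the example:

```python
# Hypothetical sketch: Asimov's three laws as a strict priority ordering.
# A lower-numbered law always overrides a higher-numbered one.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # relevant to Law 1
    ordered_by_human: bool  # relevant to Law 2
    self_destructive: bool  # relevant to Law 3

def permitted(a: Action) -> bool:
    if a.harms_human:
        return False               # Law 1 overrides everything
    if a.ordered_by_human:
        return True                # Law 2: obey, since Law 1 is satisfied
    return not a.self_destructive  # Law 3: avoid self-harm absent orders

# An order to harm a human is refused: Law 1 outranks Law 2.
print(permitted(Action(harms_human=True, ordered_by_human=True,
                       self_destructive=False)))  # False
```

The point of the ordering is visible in the example: obedience (Law 2) never licenses harm, because Law 1 is checked first.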


A sociological, demographic survey was conducted with the aim of establishing the compulsion and necessity of legal regulation of Artificial Intelligence. The survey covered 460 respondents, of whom 260 were male and 200 female, all with a basic computer science background, reached through direct interaction, telephone conversation, mail, social media, etc., following a scientifically standard methodology. The respondents were from the US, UK, Germany, Singapore, Australia, South Africa, New Zealand, Saudi Arabia, Qatar, Malaysia, Sri Lanka, and India.

TABLE 1: Basic Knowledge of What AI is.

Table 1 illustrates that 85% of respondents have knowledge of AI.

TABLE 2: AI and its harmfulness to humans

Table 2 shows that 70% of respondents agree that AI can harm humans, 20% were unable to express an opinion, and below 10% disagree.

TABLE 3: AI-affected cases

Table 3 explains that 60% of male respondents have noticed AI-affected cases, whereas only 43% of female respondents have.

TABLE 4: AI threatens International Human Rights

Table 4 reveals that 75% of respondents agree with the argument that AI threatens International Human Rights.

TABLE 5: Necessity for enforcement of new laws for AI

Table 5 shows that 82% of respondents recommend new laws for AI.

TABLE 6: India's efforts on legal regulation of AI

Table 6 reveals that 80% of respondents are unhappy with the Indian Government's efforts towards legal regulation of AI.

TABLE 7: Groups that should look into the matter of greater AI governance

Table 7 states that 70% of respondents opined that governments, the private sector, and the judiciary should work together to bring in new laws for AI.

The survey concluded that the establishment of suitable laws for AI is most warranted and immediate, and that, otherwise, society will witness the dangers of AI.
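The headline survey figures can be converted back into approximate respondent counts with simple arithmetic; the percentages and the 460-respondent total are the article's own, and the resulting counts are approximations since the article reports only rounded percentages:

```python
# Approximate respondent counts reconstructed from the survey's reported
# percentages (460 respondents total; figures from the article's tables).

TOTAL = 460

findings = {
    "know what AI is (Table 1)":              0.85,
    "agree AI can harm humans (Table 2)":     0.70,
    "say AI threatens human rights (Table 4)": 0.75,
    "recommend new laws for AI (Table 5)":    0.82,
}

for claim, share in findings.items():
    print(f"{claim}: ~{round(TOTAL * share)} of {TOTAL}")
```

For example, the 82% in Table 5 corresponds to roughly 377 of the 460 respondents recommending new laws for AI.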


This article's findings identify that AI does not have the moral or ethical qualities that should be inherent in a civil servant. In equating AI with holders of human rights, we encounter problems with current legislation and the lack of effective regulatory mechanisms for this subject. Under current legislation, public authority functions could be implemented by AI, which raises issues of the legitimacy of its legal capacity and the assessment of legal and human risks. To date, the legal personality of a fully autonomous AI, its legal capacity, and its responsibility have not been resolved in current national legislation. The specific task is to explore and identify AI's legal nature with respect to the spirit of the law underlying the basic concepts of prospective legislation. Shaping the legal relations arising between AI and humans is therefore most warranted.


The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 1, July 31, 2020.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. As a non-profit industry body for the analytics and AI industry in India, our mission is to promote the responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
