
Summary and Analysis of NITI Aayog's Responsible AI Report on July 21, 2020

M Tanvi


National Law University, Odisha

Isha Prakash


Government Law College, Mumbai


The term ‘Artificial Intelligence’ (AI) was coined in 1956 at Dartmouth College; however, it was only in the 2000s that AI was practically implemented by IBM.[1] The rest is history! The global AI market is expected to grow by approximately 154% and reach a size of 22.6 billion U.S. dollars by the end of this year.[2] These figures will only rise in the future. Most of us have an AI system installed for personal use, be it a chatbot, Amazon’s Alexa, Apple’s Siri or another voice assistant. More complex AI systems are on the verge of development. AI will soon form an integral part of every business sector and is expected to boost India’s annual growth rate by 1.3 percentage points by 2035.[3]

This gives rise to some pertinent questions: How are AI systems governed in India? Who would be responsible for a default on the part of an AI system: the developer, the company using the AI, or the AI itself? Can an AI system infringe our privacy? To tackle these questions, NITI Aayog released a working paper on ‘Responsible Artificial Intelligence (AI)’ on 21st July 2020.[4] The following are the key points of discussion in the working paper:

Challenges in studying AI systems

1) Direct impact challenge: Arises because people are subject to a specific AI system; this is also known as systems consideration. For example, privacy concerns.

2) Indirect impact challenge: Arises from the overall deployment of AI solutions in society; this is also known as societal consideration. For example, loss of jobs due to the development of AI.

Objectives of the study

  1. Establish ‘Principles for Responsible AI’

  2. Identify possible policy recommendations on its regulation.

  3. Enforce guidelines and incentive mechanisms for Responsible AI.

Study of systems consideration

The paper examines several issues arising from systems considerations and tabulates the various laws that govern AI in different countries.

It observes that India does not have any guidelines or standards that could be applied to AI. Although some privacy laws are in place, there is a need for AI-specific laws to be formulated.

Study of societal consideration

  1. The impact of technology and innovation on the job landscape is not new; the manufacturing and IT sectors have been particularly affected by the growth of technology. With the rise of AI, several routine jobs may be taken over by AI systems.

  2. In the near future, job profiling could be driven by data collection and interpretation.

  3. Profiling by AI could also be subject to hidden propaganda and may result in social disharmony. For example, in Myanmar, online platforms were used to spread hate speech and fake news targeted against a particular community.

Identification of propaganda and hate speech is less advanced for posts in local languages, and research efforts must be dedicated to improving detection technology in these areas.


The paper lays down the following principles for developing an effective AI system:

  1. Principle of Safety and Reliability

  2. Principle of Equality

  3. Principle of Inclusivity and Non-discrimination

  4. Principle of Privacy and Security

  5. Principle of Transparency

  6. Principle of Accountability

  7. Principle of protection and reinforcement of positive human values

These principles have not been explained further by NITI Aayog.

Enforcement of principles

  1. These principles are to be managed by experts from the technology, sectoral, and legal/policy fields.

  2. Principles are to be updated as per emerging cases and challenges.

  3. Various bodies involved in setting standards and regulations for AI are to be guided by the Government.

  4. The Government should develop sector-specific guidelines, such as for healthcare, finance, and education. Existing guidelines are to be complied with.

  5. The Government should also establish institution-specific enforcement, covering public bodies, private companies, and research institutes.

Self-assessment guide

  1. Analysing the problem by engaging experts, identifying errors in the AI system and developing a plan.

  2. Collection of data by identifying relevant laws and documents that regulate AI.

  3. Labelling data that might have a human bias.

  4. Data should be processed in a manner that only relevant data is used and sensitive data is excluded.

  5. AI system should be trained to ensure fairness.

  6. The error rate and fairness of the AI system should be evaluated.

  7. A feedback mechanism and a grievance redressal mechanism should be made accessible to the users of the AI system.

  8. There should be constant risk assessment, fairness assessment and performance assessment of the AI system.

This working paper is under public consultation until 10th August 2020. Do give your opinions at <> or email them to <>.


[1] Olivia Folick, The Rise Of AI: How Did This Happen?, Ideal, available at <>, last seen on 01/08/2020.
[2] [3] Working Document: Towards Responsible #AIforAll, NITI Aayog, available at <>, last seen on 01/08/2020.
[4] Ibid.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
