
Position Statement on Ethical Guidelines for Trustworthy AI

Submitted by Vedant Sinha and Stuti Modi, Research Interns, on ‘Trustworthy AI’ as a framework to help manage unique risks.



  • In an era of rapid technological change, we believe it is essential that trust remains the bedrock of communities, economies, societies and sustainable development. This, we believe, is the path India should adopt to become a leader in ethical technology. It is through the path of Trustworthy AI that we can reap the benefits of AI in a way that is in sync with our foundational values of respect for democracy, human rights and the rule of law.

The Trustworthy AI Framework


  • Bias is intrinsic to the purpose of AI. In optimisation, equations are deliberately biased to deliver lower error rates; in complex financial transactions, capitalising on arbitrage requires biases in the algorithms. But which biases are useful, and which are not, is for us as humans to determine.

  • The question at hand is why AI is being used in roles that require it to take unilateral decisions, performing a logical operation on a source and acting on it, when, by the logic of human morality, such an action may be against the principles of natural justice and gender equality.

  • Some roles require differentiation based on established principles of philosophy, morality and law, such as those in hiring firms, NGOs and the judicial system. Can they use AI, or should they rely on human prudence and experience at a time when cognitive outsourcing will become commonplace? To answer this properly, the principles governing the outsourcing of such activities, their basis and the qualification bar should be clarified and settled for roles where a human has to exercise a bias, or take a decision, based on principles that cannot be logically quantified: roles whose choices are not governed by a "two plus two is four" rule.

  • If companies themselves are entrusted with determining fairness in AI, do the relativity and ambiguity not persist? A domain-specific ethical code, however consistent and forward-looking, can never substitute for ethical reasoning, because general guidelines can never be sensitive to the contextual details that form the core of ethical reasoning itself. To ensure trustworthy AI, it is imperative to build and maintain an ethical culture and mindset through public debate, education and practical learning. There is a dire need to adopt a more inclusive and holistic view of what we perceive to be fair, just and equitable, one that cannot be limited to the perception of the companies themselves.

Transparency and accountability

  • In an accident involving AI, who is responsible? Remoteness of damage should be instituted as the overarching principle: rights and their correlatives should lie in rem under criminal law and in personam for civil liabilities. Blaming the AI's coder or the company's CEO for its actions is equivalent to blaming a gun maker for the murders that follow. The basis for attaching liability to the progenitor of a specific AI lies in our understanding of AI as a tool. Instead of providing productive superiority, as electricity did during the industrial revolution, AI will provide cognitive superiority, and it is the duty of the entity exercising that advantage to exercise caution while utilising it.

  • Transparency should be ensured in commercial applications that involve any element of public law.

  • Biases cannot be detected outright by peering closely into the build of an algorithm. They must instead be inferred from its results, its process and the quality of the data it takes in, by viewing our objective aims against the biased dataset that generates biased results. It can similarly be argued that the dexterity of a neuron is not audited when a human produces wrong results in the workplace.

Robust and Reliable

  • With autonomous systems becoming more prevalent in society, it becomes imperative to ensure they robustly behave as intended. The advent of autonomous trading systems, autonomous vehicles and autonomous weapons also highlights the need to adopt credible means of ensuring their safety. To achieve AI that is robust and reliable, the human factor, though critical, is not the sole element to be considered; a combination and balance of verification, validity, security and control is required.

Respecting privacy

  • The human sharing the data should be fully cognizant of the benefits and repercussions expected from such sharing. Similarly, a corresponding liability should attach to the AI: the utilisation of data should be restricted to the avenues agreed between the human and the company utilising the AI.


Safety and security

  • Such security should be a top concern, as it is an extension of the duty that a commercial or public institution owes in personam: to maintain privacy by protecting the data from any breach.

Societal and Environmental Well-being

  • Throughout the AI system’s life-cycle, the environment, broader society and other sentient beings should also be considered as stakeholders, in line with the principles of fairness and prevention of harm.

  • The responsibility of the AI system should extend to facilitating sustainability and ecological development, and it must aim to provide solutions to global concerns. Ideally, the AI system should benefit all human beings, including future generations.

The Next Step

  • The goal is to create a culture of ‘Trustworthy AI for India’, in which the benefits of AI can be reaped by all. This needs to be adopted in a manner that ensures respect for our foundational values: democracy, fundamental rights and the rule of law. Further, ethical and trustworthy AI needs to become indisputable jus cogens, a peremptory norm.

For queries, mail us at


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
