
An AI Perspective on the Indian "Guardians of the Galaxy"

Updated: Dec 30, 2020

Aarya Pachisia,

Research Intern,

Indian Society of Artificial Intelligence and Law.


Recently, an article referred to Artificial Intelligence ('AI') as the Guardian of the Galaxy, shedding light on the utility and benefits of AI in space exploration. There is no doubt about AI's social benefits in outer space, but before overwhelming ourselves with the advantages alone, it is imperative, and only rational, to examine the legal issues that accompany them. The space industry is estimated to reach a valuation of USD 1 trillion by 2040. The European Space Agency (ESA) estimates that every euro invested yields social benefits equivalent to six euros. The space industry is a demand-driven market with no central regulatory body, and the private sector accounts for 70% of space activity. In this context, it is necessary to discuss the treaties that govern space exploration. These treaties do little to clear the existing ambiguity, and the presence of AI only compounds the uncertainty. It is impossible to discuss space law without analyzing the Outer Space Treaty and the Liability Convention: the former forms the basis of international space law, and the latter determines the international liability of the launching State[1]. It is important to note that the Liability Convention applies only to the launching State.

The first section of this piece analyzes the potential legal issues that may arise from the presence of AI in space exploration, and the second section deals with the lacunae in the governance of AI in space.

AI in Space: Legal Dilemmas

The impact of AI in space is not restricted to outer space but also extends to the population on Earth. The issues arising from AI's presence in space are two-fold: (a) potential breaches of privacy and related ethical issues; and (b) possible collisions with other space objects.

AI in Space and Privacy

Maintenance of privacy continues to be one of the most crucial concerns in almost every field, and outer space is no stranger to the issue. Satellites collect data on terrestrial, aerial, and spatial objects and events. For instance, geospatial intelligence interprets events transpiring in real time from data collected by satellites. In January 2020, the United States imposed immediate interim export controls regulating the dissemination of AI software capable of automatically scanning aerial images to recognize anomalies or identify objects of interest, such as vehicles, houses, and other structures.[2] The GDPR governs the protection of personal data within the EU's jurisdiction and defines personal data as any information relating to an identified or identifiable natural person. AI in space can be used by public authorities as well as private entities to breach the provisions of this regime if mass surveillance is initiated, owing to advances in satellite imagery, Very High-Resolution (VHR) imagery, and processing-intensive image analysis[3]. Facial recognition technology combined with GPS can also give rise to acute privacy concerns.

Recently, the European Commission ('the Commission') issued a White Paper on AI, seeking proposals from stakeholders on its regulation. Even so, the White Paper does not directly address the concerns raised by AI's presence in outer space. It is understood that the GDPR framework alone is not sufficient to regulate the functioning of AI; a separate set of regulations for AI is therefore necessary.

The GDPR protects only data subjects within the European Union; other jurisdictions have their own domestic privacy legislation. The principles that remain consistent across privacy regimes are: (i) purpose limitation, (ii) storage limitation, (iii) accuracy, and (iv) the right to be forgotten. It is also imperative to uphold transparency and to verify compliance with privacy regulations. This can be difficult given AI's partial autonomy, black-box effect, and unpredictability, which can make it impossible for enforcement agencies to ascertain whether compliance and transparency were upheld.

AI in Space and other space objects

The two treaties relevant to our discussion are the Outer Space Treaty and the Liability Convention. The former is the building block of international space law; the Liability Convention finds its genesis in Article VII of the Outer Space Treaty and ascertains the international liability of a launching State on the basis of where the damage occurs. Outer space does not belong to any single State, yet it is crowded with objects from different jurisdictions, and a State's presence in space is evidence of its scientific sophistication. The Liability Convention is extremely restrictive in nature and does not hold a private entity liable for any damage caused in outer space. The Convention determines the liability of a launching State in the following ways:

  1. Article II imposes absolute liability for damage caused by a space object on the surface of the Earth or to aircraft in flight.

  2. Article III imposes fault-based liability for damage caused by a space object elsewhere than on the surface of the Earth to a space object of another launching State.

It is now necessary to evaluate how liability should be assessed in the case of intelligent space objects. A space object is 'intelligent' when it is capable of making autonomous decisions after being trained on data provided to its algorithm by its developer.

Article I of the Liability Convention defines 'damage' as "loss of life, personal injury or other impairment of health; or loss of or damage to property of States or of persons, natural or juridical, or property of [the] international intergovernmental organization." This definition creates ambiguity with respect to intelligent space objects ('ISOs'), as it fails to clarify whether its scope extends to the non-kinetic harm such objects can cause[4]. It also fails to delimit the scope of 'other impairment of health'. For instance, intelligent space objects are being developed to accompany astronauts on space missions and provide them with companionship.

If the ISO fails to maintain the astronaut's mental health, or causes severe mental-health setbacks, will this fall within the scope of the definition of damage?

Fault-based liability under Article III is predicated on human fault[5]. In law, any entity that has certain rights and obligations is considered a person. Such a 'person' may be an artificial entity (for instance, a limited liability corporation, a joint venture, or a State with international legal personality) rather than a human being in the layman's sense. Yet the decisions taken by these entities are, in reality, taken by actual people: the decision of a legal person such as a State is ultimately premised on a human being's rationale, and is not devoid of human emotion and consciousness. This is not the case with intelligent objects capable of making autonomous decisions. Moreover, the developers of AI software often cannot explain how an AI arrives at a particular conclusion. Where there is no human oversight over an ISO, fault should not be sweepingly attributed to the launching State, as this would cripple the development and advancement of AI in space. Instead, liability in the absence of oversight should be determined by asking the following question: what conduct is necessary to attribute fault liability to a State for damage caused by an ISO when human oversight was not involved in the occurrence that caused the damage?[6]

Limitations of Space Treaties in governing the Guardian of the Galaxy

There are no treaties specifically governing ISOs. This gap exposes the problems that will surface from the lack of applicable substantive law. For instance, if a State claims compensation under the Liability Convention, which State's substantive law should apply in determining the compensation to be awarded? Questions about the standard of care, and about what constitutes fault with respect to an ISO, are left unanswered, and there are few, if any, policies on the regulation or governance of AI. Ambiguity also persists over whether compensation should be claimed under negligence or under a theory of product liability, and both grounds presuppose human conduct: negligence concerns the omission of human conduct where it was essential, whereas product liability concerns a defect in software design or a failure to warn of the defect's existence.[7] The Commission's White Paper favoured fault-based liability over product liability, reasoning that under the latter it would be difficult to prove a causal link between the defect in the product and the damage that occurred. There is thus no consensus among jurisdictions on the applicable regime, which creates further uncertainty about claiming compensation for damage caused by an ISO in outer space.

New perspectives on claiming compensation for damage caused by AI are also emerging: for instance, granting autonomous machines the status of legal persons and substituting the reasonable-man standard with 'robotic common sense'[8]. Another perspective treats the AI machine as an agent of its operator, thereby holding the operator liable.[9]

AI in Outer Space: Indian Perspective

Recently, the Finance Minister of India announced the privatization of the Indian space sector, which opens the floodgates to investment and participation by private entities. Although this is viewed as a step in the right direction, it also calls for the enactment of a robust legal framework in India to regulate and legislate on issues arising from the involvement of private entities in outer space. The Space Activities Bill, 2017 has been criticized extensively for being too vague and ill-equipped to regulate IN-SPACe, the independent regulatory body to be established to oversee the commercialization of the space sector.

The Bill is vague on liability, and even taking into account the participation of private entities in the space sector, its licensing scheme remains ambiguous. The presence of AI in outer space will only aggravate the issues the draft Bill poses. ISRO acknowledges the risk of sending humans into space and recently announced Vyommitra, a humanoid equipped with AI tools intended to precede crewed space missions. This gives rise to potential legal issues that could plague the Indian as well as the international space sector if not efficiently addressed. International space law will rely on domestic substantive law to determine liability, and the complexity of the legal issues will only increase with the involvement of private entities and the presence of AI in space. It is therefore imperative that Indian space legislation be free of vagueness, with these potential legal issues addressed, in order to facilitate the presence of AI in space.


The space law and AI regime can only develop through the advancement and crystallization of domestic law on AI; otherwise, these legal conundrums will persist. The volatile nature of the legal issues surrounding AI will also hinder the determination of a process for claiming compensation for damage caused by ISOs. We aim to open space for commercial purposes and hope to take vacations on the Moon, but to realize such dreams it is necessary, and only practical, to first establish a structure that makes them viable.


[1] A launching State is defined under the Liability Convention as a State which launches or procures the launch of a space object, or a State from whose territory or facility a space object is launched. A non-governmental actor bears no liability under the Liability Convention, irrespective of culpability.

[2] 85 Fed. Reg. 459 (January 6, 2020).

[3] Cristiana Santos, Lucien Rapp, Satellite Imagery, Very High Resolution and Processing-Intensive Image Analysis: Potential Risks under GDPR, Air and Space Law, 44(3), 275–295.

[4] George Anthony Long, Small Satellites and State Responsibility Associated with Space Traffic Situational Awareness at 3, 1st Annual Space Traffic Management Conference "Roadmap to Stars," Embry-Riddle Aeronautical University, Daytona Beach, November 6, 2014.

[5] George Anthony Long, Cristiana Santos, Lucien Rapp, Réka Markovich, Leendert van der Torre, Artificial Intelligence in Space, Legal Parallax, LLC, USA & University of Luxembourg.

[6] Id.

[7] Id.

[8] Iria Giuffrida, Liability for AI Decision-Making: Some Legal and Ethical Considerations, 88 Fordham L. Rev. 439 (2019).

[9] Long, supra note 5, at 22.

The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 2, January 31, 2021.



