
Commentary on Building Trust in Artificial Intelligence by Francesca Rossi

Updated: Dec 30, 2020

Falguni Singh


Rajiv Gandhi National University of Law, Punjab


Professor Rossi's article puts forward remarkable points and issues regarding "Artificial Intelligence" (hereinafter referred to as AI), and is both informative and thought-provoking. AI is, without a doubt, the mantra of our age. Although AI research has existed for over five decades, interest in the topic has surged over the past few years. This extraordinarily multifaceted and complex area emerged from the discipline of computer science.[1]

Interestingly, Professor Rossi defines AI as a “scientific discipline aimed at building machines that can perform many tasks that require human intelligence.”[2] This marks an essential shift, from humans instructing computers how to act to computers learning how to act.[3] AI achieves this chiefly through machine learning, including ‘deep learning’ methods.

Professor Rossi further explains that AI consists of "two main areas of research". One produces what we may call explainable AI, which seems less risky and largely trustworthy. The other produces less explainable AI, which relies on “examples”, “correlation” and “data analysis”, particularly where the problem cannot be fully specified in advance; this type of AI is quite capable of committing errors. The main question that arises at this juncture is: are we willing to depend on these machines and applications? The world is already reaching a point where a future without AI cannot be imagined.

All around the world, countries are investing millions of dollars in funding proposals and projects to advance the field. The global AI and robotics defence market was valued at $39.22 billion in 2018. With a projected compound annual growth rate (CAGR) of 5.04 per cent, the market is expected to reach $61 billion by 2027.[4] Market Forecast attributes this valuation and growth to investment in new systems by countries such as the United States, Russia, and Israel, as well as the procurement of systems by countries such as Saudi Arabia, India, Japan, and South Korea.[5]

Professor Rossi advocates that the two areas of AI research mentioned earlier be progressively merged to maximise the benefits of both and mitigate their downsides. This indeed suggests that we are on the right path, but are we there yet?

The artificial intelligence revolution is on the move. Remarkable gains in AI and machine learning are being used across various industries: medicine, finance, transportation, and others. These advancements will likely have a substantial impact on the global economy and the international security environment. Business leaders and politicians all over the world, from Elon Musk to Vladimir Putin, are debating whether AI will spark a new industrial revolution. Like the steam engine, electricity, and the internal combustion engine, AI is an enabling technology with a wide range of applications.[6]

What Rossi describes as “AI systems with human-level perception capabilities,” or AI whose purpose “is to augment humans’ capabilities,” points to the incentive that has long driven AI: recreating our intelligence in a machine, conceivably an improved version of us, free from our computational limits and from the restrictions on how much data we can process to reach decisions.[7] Despite our flaws, we make sound decisions as prudent persons, even in the face of considerable uncertainty. Remarkably, we can grasp concepts and draw conclusions that let us apply what we learn from one set of problems to entirely different problems in different domains. This is where AI may fall short, because such high-level reasoning is beyond the technologies at hand, irrespective of the amount of data available.

This is where AI systems have yet to make progress: learning and improving across domains. One may be able to apply the same system in a similar domain, but it is practically impossible to apply it to an entirely different one. Humans, by contrast, can carry out this sort of reasoning with only a few examples.

Now, let us get to the main crux of the Professor's article, namely "A Problem of Trust". The Professor rightly foresees that AI can become pervasive in our day-to-day lives. It could become a great asset to society, but it can also bring greater liabilities with it. The Professor also addresses the issue of the "black box". In a general sense, a black box is any artificial intelligence system whose operations and inputs are not discernible to the user or any interested party; it is an impenetrable and unfathomable system. A recurring concern about machine learning algorithms is that they operate as such "black boxes": because these algorithms repeatedly change the way they weigh inputs to improve the accuracy of their predictions, it can be difficult to determine how and why they reach the outcomes they do.[8]

This is frightening, to be honest, but the Professor goes on to discuss "high-level principles of AI" to tackle these concerns, and they are quite promising. Undeniably, explainable AI is a key to this problem: a system designed to explain how its algorithms reach their outcomes or predictions. There is even ongoing research on whether judges should demand explainable AI in criminal and civil cases; it is chiefly a theoretical debate about which algorithmic decisions require an explanation and what form those explanations should take.[9] Another trust-generating factor is transparency: without an iota of doubt, companies need to be transparent about their design choices and data usage policies while developing new products.

Don't you think humans are unique in possessing the faculty of reasoning? Sometimes our emotions take a toll on us, but they also guide us towards the right path and show us the difference between good and malice. Can AI do that?

As for consequences, there can be both intended and unintended ones. They include the risk of increased threats to cybersecurity and the spread of vulnerabilities across an expanding web of complex AI-dependent systems (such as cloud computing); the merging of AI with other technologies, including in the nuclear and biotech domains; weak transparency and accountability in AI decision-making processes; algorithmic discrimination and bias; limited investment in safety research and protocols; and overly narrow ways of conceptualising ethical problems.[10]

On moral grounds, the purpose of an AI system should be known beforehand, and it should be built while keeping in mind the influence it can have on society; otherwise, things can go south.

For instance, in 2018, a Canadian company was set to open a store in Houston, Texas, where customers could try out, rent, and buy sex robots.[11] But Houstonians were dismayed at the idea of having a “robot brothel” in their backyard, and lawyers working against sex trafficking circulated a petition that received thousands of signatures. By revising its ordinance regulating adult-oriented businesses to ban sexual contact with “an anthropomorphic device or object” on commercial premises, the Houston city council shut down the enterprise before it could even start. Locals’ objections included apprehensions that such a business would “reinforce the idea that women are just body objects or properties” or “open up doors for sexual desires and cause confusion and destruction to our younger generation”.[12] It is time for a reality check. People are indulging so much in such AI that they are losing their connection with the real world. AI is supposed to make our lives better, but is it really doing so? We are too dependent on these machines, and that dependence can backfire on both our mental and physical health. Developers and companies need to be very careful; otherwise, they may influence the youth immorally.

The foundational pillars of AI should be fairness, transparency, human rights, and ethics; above all, AI should be human-centric. The current hype has reached a damaging level, and the long-term consequences may be detrimental. If we do not dispel the confusion and manage expectations about the uses and abuses of AI, we risk plunging into another AI setback.


[1] Camino Kavanagh, "Artificial Intelligence", in New Tech, New Threats, and New Governance Challenges: An Opportunity to Craft Smarter Responses?, Carnegie Endowment for International Peace, 2019, pp. 13–23 (last accessed 14 May 2020).

[2] Francesca Rossi, "Building Trust in Artificial Intelligence", Journal of International Affairs, Vol. 72, No. 1, 2018, pp. 127–134, JSTOR (last accessed 17 May 2020).

[3] Ulrike Franke, "Harnessing Artificial Intelligence", European Council on Foreign Relations, 2019 (last accessed 14 May 2020).

[4] "Global Artificial Intelligence & Robotics for Defense, Market & Technology Forecast to 2027", Market Forecast, 18 January 2018 (last accessed 15 May 2020).

[5] Andrew P. Hunter et al., "International Activity in Artificial Intelligence", in Artificial Intelligence and National Security: The Importance of the AI Ecosystem, Center for Strategic and International Studies (CSIS), 2018, pp. 46–61 (last accessed 14 May 2020).

[6] Paul Scharre et al., "The Artificial Intelligence Revolution", in Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 2018, pp. 3–4 (last accessed 14 May 2020).

[7] Maria Fasli, "Commentary on Artificial Intelligence — The Revolution Hasn't Happened Yet by Michael I. Jordan", University of California, Berkeley, 1 July 2019 (last accessed 17 May 2020).

[8] Ashley Deeks, "The Judicial Demand for Explainable Artificial Intelligence", Columbia Law Review, Vol. 119, No. 7, 2019, pp. 1829–1850, JSTOR (last accessed 15 May 2020).

[9] Ibid.

[10] Ibid., 1.

[11] Olivia P. Tallet, "'Robot Brothel' Planned for Houston Draws Fast Opposition from Mayor, Advocacy Group", Hous. (last accessed 17 May 2020).

[12] Jeannie Suk Gersen, "Sex Lex Machina: Intimacy and Artificial Intelligence", Columbia Law Review, Vol. 119, No. 7, 2019, pp. 1793–1810, JSTOR (last accessed 17 May 2020).

The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 1, July 31, 2020.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
