
A Commentary on ‘How to Design AI for Social Good: 7 Essential Factors’ by Luciano Floridi et al.

Updated: Dec 30, 2020

Mehak Jain,

Research Intern,

Indian Society of Artificial Intelligence and Law.


 

Artificial Intelligence (hereinafter ‘AI’) has the potential to tackle and solve social problems, and this is being increasingly recognised by AI system developers and designers. Projects using AI for social good range from models predicting septic shock[1] to systems targeting HIV education at homeless youth[2]. However, there is limited understanding of what makes an AI system socially and ethically good.

The study analyses 27 case examples of artificial intelligence for social good (hereinafter ‘AI4SG’) to deduce seven factors which may serve as preliminary guidelines to ensure that AI achieves its goal of being socially good. Each factor relates to at least one of the five ethical principles of AI: autonomy, beneficence, non-maleficence, justice, and explicability. It is also pertinent to note that the seven factors are mutually dependent and intertwined.

Let us review and analyse each factor.

The first factor, Falsifiability and incremental deployment, is derived from the principle of non-maleficence. Introduced by Karl Popper, the concept of falsifiability holds that a hypothesis is scientific only if it can, in principle, be disproven. To put it in simpler words, a hypothesis is deemed scientific if at least one possible outcome would refute it.

The study treats falsifiability as an essential factor for the trustworthiness of an AI system: because falsifiability requires critical requirements (such as safety) to be testable, it acts as a booster for trustworthiness.

After identifying falsifiable indicators, these should be tested incrementally, i.e. moving from the lab, by way of simulations, to the real world. A classic example of incremental deployment is Germany’s approach[3] to regulating autonomous vehicles: manufacturers were first required to run tests in deregulated zones with constrained autonomy, and as trustworthiness increased, they were allowed to test vehicles with higher levels of autonomy.
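
To make the idea concrete, here is a minimal sketch, not taken from the study, of how a falsifiable safety requirement might gate incremental deployment in code; the scenario data, threshold, and function names are all hypothetical.

```python
# A minimal, hypothetical sketch: the safety claim is phrased so that a single
# simulated counterexample disproves it, and the system only advances from
# simulation to a constrained real-world trial if no counterexample is found.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    braking_distance_m: float  # observed in simulation

SAFETY_LIMIT_M = 30.0  # falsifiable claim: braking distance never exceeds this

def counterexamples(scenarios):
    """Return every scenario that disproves the safety claim."""
    return [s for s in scenarios if s.braking_distance_m > SAFETY_LIMIT_M]

def next_stage(scenarios):
    """Advance deployment only while the safety claim survives testing."""
    failed = counterexamples(scenarios)
    if failed:
        return f"stay in simulation: claim falsified by {failed[0].description!r}"
    return "advance to a constrained real-world trial (e.g. a deregulated test zone)"

if __name__ == "__main__":
    results = [Scenario("dry road, 50 km/h", 24.1),
               Scenario("wet road, 50 km/h", 31.7)]
    print(next_stage(results))  # claim falsified, so the system remains in simulation
```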

The next factor, again derived from the ethical principle of non-maleficence, is Safeguards against the manipulation of predictors.

Data manipulation has long been a persistent problem. A model that is easy to comprehend can also be easy to fool and manipulate, and other risks, such as excessive reliance on non-causal indicators, also pose a threat to AI4SG projects. This creates a pressing need for safeguards, such as limiting the number of indicators used in a project, to avoid unfavourable outcomes.
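
By way of illustration only, one simple safeguard of this kind is to whitelist a small, vetted set of indicators and reject anything outside it; the indicator names and the cap below are hypothetical and not taken from any cited project.

```python
# A minimal, hypothetical sketch of a safeguard against the manipulation of
# predictors: the project declares a small, vetted set of indicators, and any
# attempt to use indicators outside that set (or too many of them) is rejected.
APPROVED_INDICATORS = {"vital_signs", "lab_results", "medication_history"}
MAX_INDICATORS = 3  # limiting the number of indicators narrows the attack surface

def validate_indicators(requested):
    """Allow only vetted indicators, and no more than the agreed maximum."""
    requested = set(requested)
    unknown = requested - APPROVED_INDICATORS
    if unknown:
        raise ValueError(f"Non-vetted indicators rejected: {sorted(unknown)}")
    if len(requested) > MAX_INDICATORS:
        raise ValueError(f"Too many indicators: {len(requested)} > {MAX_INDICATORS}")
    return requested

if __name__ == "__main__":
    print(validate_indicators(["vital_signs", "lab_results"]))  # accepted
```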

The third essential factor for an AI4SG system is Receiver-Contextualised Intervention. As the term suggests, software should not intervene in a way that disrupts its user's autonomy. This is not to be confused with the software not intervening at all: total non-intervention limits effectiveness. A desirable level of intervention lies somewhere in between, neither unnecessarily intrusive nor entirely absent, achieving the right kind of disruption while respecting user autonomy.

To achieve this, the study suggests that designers building these decision-making systems should consult their users, understand their goals, preferences and characteristics, and respect their users' right to ignore or modify interventions.

The next factor, Receiver-Contextualised Explanation and Transparent Purposes, is derived from the ethical principle of explicability. Using examples, the study illustrates the importance of choosing the right conceptualisation when explaining AI decision-making, and underscores the need for AI operations to be explainable so that their purposes are transparent.

The right conceptualisation for an AI4SG system cannot be uniform; it varies with factors such as what is being explained and to whom. Any theory comprises five components: a system, a purpose, a Level of Abstraction, a model, and a structure of the system. The Level of Abstraction (hereinafter ‘LoA’), i.e. the conceptual framework, is a key component of the theory, and it is chosen for a specific purpose: for example, an LoA may be chosen to explain a decision to the designer who developed the system, or to a general user. Thus, AI4SG designers should opt for an LoA that fulfils the desired explanatory purpose.

Transparency about the goal an AI4SG system is meant to achieve is also necessary, since opaque goals can prompt misunderstandings that may lead to harm. The level of transparency needs to be considered at the design stage itself, and it should be ensured that the goal with which such a system is deployed is knowable to its receivers by default.

The fifth factor is Privacy Protection and Data Subject Consent. Privacy has been hard-hit by previous waves of digital technology.[4] Respect for privacy is crucial for people's safety and dignity.[5] It also maintains social cohesion by preserving social structures and allowing individuals to deviate from social norms without causing offence to a particular community.

Issues pertaining to consent are exacerbated in times of national emergencies and pandemics. West Africa faced a complex ethical dilemma with regard to user privacy during the 2014 Ebola outbreak: call-data records of mobile phone users could have been used to track the spread of the epidemic, but doing so would have compromised users' privacy, and the decision was held up.[6]

In other situations, where time is not of the essence, user consent can be sought before data is used. The study highlights different levels or types of consent that can be sought, such as an assumed-consent threshold and an informed-consent threshold. The challenge faced by the researchers in Haque et al.[7] is a good example: they resorted to “depth images” (which de-identify the subject) so as to respect and preserve user privacy. Likewise, other creative solutions to privacy problems should be formulated to respect the consent threshold established when dealing with personal data.

Derived from the ethical principle of justice, Situational Fairness is the sixth factor. To understand situational fairness, we first need to understand algorithmic bias. Algorithmic bias refers to the discriminatory behaviour of an AI system caused by faulty decision-making induced by biased data. Take the example of predictive policing software: the policing data used to train the AI system contains deeply ingrained prejudices on the basis of factors such as race, caste, and gender. This can lead to discriminatory warnings or arrests, perpetuating an inadequate and prejudiced policing system.

This underscores the importance of “sanitising the datasets” used to train AI. However, sanitisation should not be confused with removing all traces of the important contextual nuance that could improve ethical decision-making.

This is where situational fairness comes into play: depending on the circumstances, the AI should behave equitably so that everyone is treated fairly. The study presents an interesting example of how a word processor should ordinarily interact with all its human users identically, without taking into account factors such as ethnicity, gender, or caste. When used by a visually impaired person, however, it should be allowed to interact differently in order to ensure fairness. Variables and proxies irrelevant to outcomes should be removed, but the variables that are necessary for inclusivity should be retained.
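
A minimal sketch, assuming a simple tabular dataset with illustrative column names, of what this kind of sanitisation might look like in code: attributes that merely proxy protected characteristics are dropped, while a variable genuinely needed for inclusive behaviour (here, a screen-reader flag) is retained.

```python
# A minimal, hypothetical sketch of sanitising training data for situational
# fairness: proxy attributes irrelevant to the outcome are removed, while the
# contextual variable needed for equitable treatment is kept.
import pandas as pd

IRRELEVANT_PROXIES = ["ethnicity", "gender", "postcode"]  # proxies for protected traits
INCLUSIVITY_VARIABLES = ["uses_screen_reader"]            # needed to adapt behaviour fairly

def sanitise(dataset: pd.DataFrame) -> pd.DataFrame:
    """Drop proxy attributes that should not influence outcomes; keep inclusivity variables."""
    to_drop = [c for c in IRRELEVANT_PROXIES
               if c in dataset.columns and c not in INCLUSIVITY_VARIABLES]
    return dataset.drop(columns=to_drop)

if __name__ == "__main__":
    users = pd.DataFrame({
        "words_per_minute": [62, 48],
        "ethnicity": ["A", "B"],
        "uses_screen_reader": [False, True],
    })
    print(list(sanitise(users).columns))  # ['words_per_minute', 'uses_screen_reader']
```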

Lastly, Human-friendly Semanticisation is the seventh factor, and it is derived from the ethical principle of autonomy. The crux of incorporating this best practice lies in determining which tasks should be delegated to an AI system and which should be left to humans, so that semanticisation, the giving of sense and meaning, remains human-friendly.

Consider the case of Alzheimer’s patients. Research[8] noted three points about the carer-patient relationship. First, carers remind patients of the activities in which they participate, e.g. taking medication. Second, carers provide meaningful interaction with patients through empathy and support. Third, annoyance caused by repetitive medication reminders may weaken the carer-patient relationship. Researchers[9] developed an AI system that ensured reminders do not translate into annoyance, leaving carers free to provide meaningful support to the patient. This is a good example of how human-friendly semanticisation can be preserved while tedious and formulaic tasks are left to AI.
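
By way of illustration only (the cited system is far richer), the sketch below delegates the formulaic task, medication reminders with a back-off so prompts do not become repetitive, to software, leaving meaningful interaction to the carer; the class name and interval are hypothetical.

```python
# A minimal, hypothetical sketch of delegating a formulaic task (medication
# reminders) to software while leaving meaningful interaction to the human
# carer. The reminder backs off once a dose is observed or a prompt was sent
# recently, so repetition does not become a source of annoyance.
from datetime import datetime, timedelta
from typing import Optional

class MedicationReminder:
    def __init__(self, backoff: timedelta = timedelta(hours=1)):
        self.backoff = backoff
        self.last_prompt: Optional[datetime] = None
        self.dose_taken = False

    def observe_dose_taken(self) -> None:
        # Called when an activity-recognition component detects the dose was taken.
        self.dose_taken = True

    def should_remind(self, now: datetime) -> bool:
        # Prompt only if the dose is outstanding and the back-off interval has elapsed.
        if self.dose_taken:
            return False
        if self.last_prompt is not None and now - self.last_prompt < self.backoff:
            return False
        self.last_prompt = now
        return True

if __name__ == "__main__":
    reminder = MedicationReminder()
    now = datetime.now()
    print(reminder.should_remind(now))                          # True: first prompt
    print(reminder.should_remind(now + timedelta(minutes=10)))  # False: suppressed by back-off
```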

In conclusion, the article offers a fair analysis and lays the foundation for future research in the field of AI4SG. It brings forth seven essential factors and their corresponding best practices, analyses them with the help of examples and case studies, and suggests incorporating multiple perspectives into the design of AI decision-making systems in order to reach the goal of an ideal AI4SG system: AI that is socially and ethically responsible and works for the greater good.


References

[1] Henry, K. E., Hager, D. N., Pronovost, P. J., & Saria, S. (2015). A targeted real-time early warning score (TREWScore) for septic shock. Science Translational Medicine, 7(299), 299ra122. https://doi.org/10.1126/scitranslmed.aab3719.

[2] Yadav, A., Chan, H., Jiang, A., Rice, E., Kamar, E., Grosz, B., et al. (2016a). POMDPs for assisting homeless shelters—computational and deployment challenges. In N. Osman & C. Sierra (Eds.), Autonomous Agents and Multiagent Systems, Lecture Notes in Computer Science (pp. 67–87). Berlin: Springer.

[3] Pagallo, U. (2017). From automation to autonomous systems: A legal phenomenology with problems of accountability. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) (pp. 17–23).

[4] Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford University Press.

[5] Solove, D. J. (2008). Understanding Privacy. Cambridge, MA: Harvard University Press.

[6] The Economist. (2014). Waiting on hold–Ebola and big data, October 27, 2014. https://www.economist.com/science-and-technology/2014/10/27/waiting-on-hold.

[7] Haque, A., Guo, M., Alahi, A., Yeung, S., Luo, Z., Rege, A., Jopling, J., et al. (2017). Towards vision-based smart hospitals: A system for tracking and monitoring hand hygiene compliance. https://arxiv.org/abs/1708.00163v3.

[8] Burns, A., & Rabins, P. (2000). Carer burden in dementia. International Journal of Geriatric Psychiatry, 15(S1), S9–S13.

[9] Chu, Y., Song, Y. C., Levinson, R., & Kautz, H. (2012). Interactive activity recognition and prompting to assist people with cognitive disabilities. Journal of Ambient Intelligence and Smart Environments, 4(5), 443–459. https://doi.org/10.3233/AIS-2012-0168.

The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 1, July 31, 2020.

