
Position Statement on the Efficacy of Predictive Models in the Social Sciences

Submitted by Sarmad Ahmad, Research Intern, Efficacy of Predictive Models in the Social Sciences.

 
  • AI and algorithms, facilitated by Big Data, show unprecedented utility across virtually every sphere of human civilisation, personal and professional. Indeed, it is difficult to name a sector that could not be impacted by the duo of AI and Big Data.

  • However, the increasing utility of the two does not mean that our reliance on them should be blind; it demands the opposite. In order to create effective systems that can uphold this unprecedented utility and, in turn, maximise the productivity and efficiency of our existing models and structures, AI and Big Data (hereinafter, predictive models) ought to be met with unwavering caution and reason.

  • The need for this perspective becomes clear from the numerous examples in which predictive models were applied to a specific problem and produced unexpected, concerning, and sometimes fatal consequences. In a recent study published in the Proceedings of the National Academy of Sciences of the United States of America, three sociologists at Princeton University asked a number of researchers to predict six life outcomes for children, parents, and households. The task drew on nearly 13,000 data points on over 4,000 families, collected in a 15-year sociological study led by Sara McLanahan, a professor of sociology at the university and a lead author of the paper.

  • The researchers used both traditional statistical analysis and machine learning models; neither delivered outcomes of reasonable accuracy. It did not matter that the latter employed far more sophisticated methods of assessment than the former: both approaches ended up down the same road. As Alice Xiang of the non-profit Partnership on AI observed, the example demonstrates that predictive models and machine learning tools are not magic; a toy comparison of the two approaches is sketched below.
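
To make the point concrete, here is a minimal sketch of the kind of comparison the challenge invited: a simple linear baseline against a more complex gradient-boosted model, both scored on held-out data. The synthetic data, feature counts, and model choices are illustrative assumptions standing in for the study's restricted survey variables, not its actual pipeline.

```python
# Minimal sketch: a simple baseline versus a more complex model on a
# hard-to-predict outcome (synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_families, n_features = 4000, 200

# Synthetic stand-in: mostly noise with a weak signal in two features,
# mimicking the low predictability of life outcomes reported in the study.
X = rng.normal(size=(n_families, n_features))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=1.0, size=n_families)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("linear regression (simple baseline)", LinearRegression()),
    ("gradient boosting (complex model)", GradientBoostingRegressor(random_state=0)),
]:
    model.fit(X_train, y_train)
    score = r2_score(y_test, model.predict(X_test))
    print(f"{name}: held-out R^2 = {score:.3f}")
```

When the underlying signal is weak, both models score similarly poorly on held-out data; the added complexity of the second model buys little, which is the pattern the challenge reported.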

  • The lessons drawn from the closing comments on this study challenge the widely held notion that predictive models provide objective assessments without fault or error; adopting such a notion could prove to be a fallacy.

  • While the outcomes in this study carried no real-world consequences, adopting the fallacy may prove lethal in instances where human judgement relies on predictive models to guide decisions. One such model is COMPAS, a risk assessment tool used in the U.S. criminal legal system to help judges determine whether defendants should be kept in jail or allowed out while awaiting trial. COMPAS has already come under fire for propagating racial bias: it has been found to be lenient towards Caucasian defendants with a higher likelihood of aggressive behaviour while being strict towards African-American defendants with a lower likelihood of exhibiting the same.
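
A common way to surface this kind of disparity is to compare error rates across groups, as public audits of COMPAS have done. The sketch below computes per-group false positive rates on hypothetical risk-score records; the field names, scores, and threshold are invented for illustration and do not reflect COMPAS's actual schema or methodology.

```python
# Minimal sketch of a group-wise error-rate audit (hypothetical data;
# field names and the cut-off are assumptions, not COMPAS's own).
import pandas as pd

# Hypothetical records: predicted risk score, whether the person actually
# reoffended, and group membership.
df = pd.DataFrame({
    "risk_score": [8, 3, 9, 2, 7, 4, 9, 1],
    "reoffended": [0, 0, 1, 0, 0, 1, 1, 0],
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
})

HIGH_RISK = 7  # assumed cut-off for a "high risk" label
df["flagged"] = df["risk_score"] >= HIGH_RISK

# False positive rate: share of non-reoffenders wrongly flagged high risk.
for group, sub in df.groupby("group"):
    innocent = sub[sub["reoffended"] == 0]
    print(f"group {group}: false positive rate = {innocent['flagged'].mean():.2f}")
```

If the false positive rate differs sharply between groups, the tool is making its errors unevenly, which is precisely the form of bias COMPAS has been criticised for.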

  • Outsourcing decision-making to predictive models, or using them to support human judgement with the aim of eliminating fallacies in thought, can be disastrous: an incautious approach to applying this technology can propagate existing biases, or even manufacture new ones in the process.

  • Furthermore, the study also indirectly highlights the fallacy of Big Data hubris: the assumption that predictive models built on machine learning and big data assessment methods are a substitute for traditional methods of information collection and data analytics. Blindly favouring these models could lead to the incorrect enforcement of decisions.

  • Keeping everything above in context, two observations follow. One, machine learning, and predictive models more broadly, have not been bestowed upon us free of fault or error to solve all our problems of analysis, assessment, and decision-making. Rather, predictive models ought to be treated like any other tool created by our species: tested through trial and error and sharpened regularly towards their intended use. Two, in order to sharpen the utility of predictive models, we must acknowledge their limitations and work on all of the factors involved in that sharpening process, such as algorithmic transparency and the curation of relevant, unbiased, and protected data.




For queries, mail us at editorial@isail.in.





The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.

 

Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative, The Bharat Pacific.
