Position Statement on AI’s Attempts in Predicting a Child’s Life

Submitted by Sameer Samal and Nayan Grover, Research Interns.


Policymakers and social scientists have long worked together to assess the social impact of public policies. In such exercises, policymakers evaluate the specific factors that shape the trajectory of an individual's life, with the objective of proposing interventions, through policy instruments, that promote the most favourable outcomes.

Impact of Machine Learning

In recent times, criminal justice systems have come to rely on machine learning algorithms. These algorithms are trained to predict, for instance, the likelihood that an offender will commit a second crime, or that a child is at risk of abuse or neglect at home. The ambition is to feed an algorithm so much data about a given situation that it makes better predictions about an outcome than traditional statistical analysis, or than a human being.

Results from a 15-Year-Long Sociology Study

The Fragile Families and Child Wellbeing Study, a sociology study led by Sara McLanahan, a professor of sociology and public affairs at Princeton University, was carried out to understand the lives of children born to unmarried parents. 2,000 families were selected at random from hospitals in large US cities and followed up for data collection when the children were 1, 3, 5, 9, and 15 years old. The data from this study was later used in a subsequent study to analyse how accurately machine learning algorithms could predict those children's lives.

A Study Published in the Proceedings of the National Academy of Sciences

Three Princeton University sociologists, Sara McLanahan, Matthew Salganik and Ian Lundberg, invited predictions from hundreds of researchers, including computer scientists, statisticians and computational sociologists, on six outcomes considered sociologically important. These outcomes included a child's grit (their level of self-reported perseverance at school) and the overall level of poverty in their household. The researchers were provided with 13,000 data points on 4,000 families to train their machine learning models. The resulting predictions were then evaluated against the data already available from the lives of 2,000 children in the 15-year-long Fragile Families and Child Wellbeing Study.

None of the researchers could predict the outcomes to a reasonable level of accuracy, regardless of whether they used simple statistics or cutting-edge machine learning algorithms.

Inferences from the Research

The research showed that, where an algorithm is used to assess risk or to decide where to direct resources, simple, explainable algorithms can predict with nearly the same accuracy as black-box techniques like deep learning. The substantial costs of complex black-box techniques were therefore not worth the results achieved.
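To make the point concrete, the kind of "simple, explainable" baseline at issue can be as basic as ordinary least squares, evaluated on held-out families. The sketch below is illustrative only: it uses synthetic data (the feature count, noise level, and outcome are all invented stand-ins, not figures from the study) and computes holdout R², the sort of accuracy metric such prediction challenges report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for survey data: 4,000 families, a handful of
# background features, and a noisy life outcome. All numbers here are
# assumptions for illustration, not values from the actual study.
n, p = 4000, 10
X = rng.normal(size=(n, p))
true_coef = rng.normal(size=p)
y = X @ true_coef + rng.normal(scale=5.0, size=n)  # outcome is mostly noise

# Train on one part, evaluate on held-out families.
n_train = 3000
train, test = slice(0, n_train), slice(n_train, n)

# "Simple, explainable" baseline: ordinary least squares with an intercept.
design_train = np.c_[np.ones(n_train), X[train]]
coef, *_ = np.linalg.lstsq(design_train, y[train], rcond=None)

design_test = np.c_[np.ones(n - n_train), X[test]]
pred = design_test @ coef

# Holdout R^2: 1 means perfect prediction, 0 means no better than the mean.
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"holdout R^2: {r2:.2f}")
```

When the outcome is dominated by noise, as life outcomes in the study were, even far more complex models cannot push R² much above such a baseline, which is the finding the paragraph above summarises.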


Experts in the field pointed out possible loopholes: the data considered in sociology research is quite different from what is generally analysed for policymaking. For example, whether a child has "grit" is an inherently subjective judgment, and research has described grit as "a racist construct for measuring success and performance". The results did not come as a shock to experts studying the use of AI in society, since even the most accurate risk assessment algorithms in the criminal justice system turn out to be only around 60%-70% accurate.


The research does not necessarily mean that complex machine learning techniques can never be useful in policymaking. However, policymakers should be more careful when using complex machine learning techniques for predictions. Simpler, explainable methods are a better option in many areas, and transparent predictions also make their reliability easier to assess. Ultimately, policymakers without significant experience in machine learning and AI should be careful in its implementation and should avoid unrealistic expectations about the outcomes.

To understand this development, please refer to:

For queries, mail us at


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals: the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative, The Bharat Pacific.
