The Blurred Line between Data Prediction and Systematic Discrimination: LAPD’s Pred-Pol programme
Nayan Grover & Ritam Khanna,
Indian Society of Artificial Intelligence and Law.
The recent civil rights movement in the United States has highlighted long-overlooked, deep-seated discrimination between whites and people of colour. Shortly before this movement gathered momentum in April, the LAPD’s programme of data analytics and crime prediction was discontinued. The reasons given were the rigorous COVID-19 lockdown, which shut down development of the programme, and the subsequent crunch in its financial funding. The programme was controversial because its predictions were inclined to flag offences as likely to be committed by African Americans and Asians. The present analysis considers both sides of the programme’s effect: on citizens, and on the course of criminal investigation.
What is Pred-Pol?
As technology has emerged and computers have evolved, so has the Los Angeles Police Department’s ability to analyse crime data and develop strategies to reduce crime and disorder. Beginning in 2009, with funding from the United States Department of Justice (USDOJ), specifically the Bureau of Justice Assistance (BJA) and the National Institute of Justice (NIJ), the Department implemented data-driven crime-fighting strategies. The initial program led to the development of a strategic plan to move the Department towards a Data-Informed, Community-Focused approach to crime prevention. Pred-Pol was one such invention, although in practice it ran contrary to any idea of strengthening that vision of community protection or harmonisation.
Critics state that when Pred-Pol was used, it would identify specific areas and minority groups that are already susceptible targets of the police, as the protests themselves are enough to highlight. It was not only prone to targeting the vulnerable; it also worked on fact-based prediction, rendering its end result similar to the judgement of any common patrol officer. It has not proven particularly effective either in investigation or in predicting probable offenders or offences.
Since Pred-Pol works on the SARA model (Scanning, Analysis, Response and Assessment), it relies on databases such as crime statistics, which are numerical quantifiers; criminals, however, work from a psychological script that is like a fingerprint: everyone’s is different. The alternative is to base crime prevention on environmental design, which comes a little closer to understanding human behaviour and includes the following elements:
Natural Surveillance: the removal of hiding spots or physical barriers,
Natural Access Control: controlling the flow of traffic or travel,
Territoriality: generating a sense of ownership within the location, and
Maintenance: the physical maintenance or general upkeep of a place.
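The data-driven side of such systems can be illustrated with a minimal sketch. Pred-Pol’s actual model is proprietary; this toy version, with entirely invented data, simply ranks map grid cells by recent incident counts, which makes the core weakness visible: whatever bias exists in the reports flows straight through to the “predictions”.

```python
from collections import Counter

def hotspot_cells(incidents, cell_size=500, top_k=3):
    """Rank map grid cells by recorded incident count.

    incidents: list of (x, y) coordinates in metres (toy data).
    Returns the top_k (cell_x, cell_y) cells with their counts.
    A real system would weight by recency and model contagion
    effects; this sketch only counts raw reports, so any bias
    in the input reports is reproduced in the output.
    """
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return counts.most_common(top_k)

# Invented example reports: most cluster near the origin cell.
reports = [(120, 80), (130, 90), (140, 70), (900, 900), (2600, 2700)]
print(hotspot_cells(reports, top_k=2))
# → [((0, 0), 3), ((1, 1), 1)]
```

The point of the sketch is that nothing in it measures crime; it measures *reports* of crime, which is exactly where discriminatory policing enters the data.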
Criminology and the Fallacy of the Data-Prediction System
In his article on the relationship between artificial intelligence and criminal justice, Christopher Rigano described how police detection systems observe criminal patterns to model the behaviour of the crowd. Such detection strengthens through the recurring addition of data and improvement of the software, so that the probability of prediction can approach an accurate outcome. Even so, a virtual agent able to provide objective help to a real investigator does not seem realistic, because of the heterogeneity and complexity of situations that are not purely logical. Formal and informal perception plays an important role in an investigator’s grip on reality. Because the human brain is not a chess program, AI today is not completely ready to emulate it. Understanding criminal investigations also requires inferring a hidden factor, namely the intention of the police officer. We cannot exclude, however, that the future extension of AI in this field will rest on an analysis of police officer patterns. Such analyses could, for instance, include experience, age of investigators, trajectories, modus operandi of investigations, crime type, and so on.
The main interest is to identify trends, patterns, or relationships among data, which can then be used to develop a predictive model and propose short-, medium- and long-term trends (Hoaglin et al., 1985) in order to inform police services at different levels. However, such predictions suffer from many nuances that keep them from reaching a heightened level of accuracy. As Stuart Armstrong et al. (2014) state, many of the predictions made by AI experts are not logically complete: not every premise is unarguable, not every deduction is fully rigorous. In many cases, the argument relies on the expert’s judgement to bridge these gaps. This does not mean the prediction is unreliable: in a field as challenging as AI, judgement honed by years of related work may be the best tool available. Non-experts cannot easily develop a good feel for the field and its subtleties, and so should not confidently reject expert judgement out of hand. Yet relying on expert judgement has its pitfalls. Expert disagreement is a major problem in making use of it. If experts in the same field disagree, objective criteria are needed to determine which group is correct; if experts in different fields disagree, objective criteria are needed to determine which field is the most relevant. Personal judgement cannot be used, as there is no evidence that people are skilled at reliably choosing between competing experts.
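The kind of trend extrapolation described above can be sketched in a few lines. This is a deliberately crude example with invented monthly counts: an ordinary least-squares line fitted to a short series, of the sort a system might extrapolate into a “short-term trend”. Real systems layer seasonality and spatial structure on top of this, but the same fragility applies: the fit is only as good as the recorded data.

```python
def linear_trend(counts):
    """Fit y = a + b*t by ordinary least squares to a monthly
    series of incident counts and return the slope b, a crude
    short-term trend indicator. The data passed in below are
    invented for illustration."""
    n = len(counts)
    t_mean = (n - 1) / 2                      # mean of t = 0..n-1
    y_mean = sum(counts) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(counts))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# Invented monthly burglary counts: roughly rising.
print(round(linear_trend([40, 42, 45, 43, 48, 50]), 2))
# → 1.89  (about +1.9 incidents per month)
```

A slope of roughly two incidents per month looks precise, but the judgement calls criticised above, which data to include, over what window, at what granularity, are all hidden outside the formula.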
The LAPD has a notorious history of targeting minorities in connection with various crimes in the state. A study states that LAPD officers search Black and Latino Americans disproportionately often in relation to heinous crimes. According to the National Crime Bureau of America, this abuse of minorities has been on record for at least the last 100 years. That record is a solid foundation for predictions to go entirely wrong.
The Los Angeles Police Department’s defence
According to LAPD Chief Moore, the reasoning for shutting down the programme is logical and, on the part of the police department, rather idealistic. He stated that the financial crunch was the major reason the programme was shut down, and that even while the statewide lockdown was in force, work on it was persistently carried on through online platforms. The department defended the value and ideas behind the programme. It refused to accept the critical view that the software was racially biased, and underlined that community building is central to policing in LA. Assistant Chief Beatrice Girmala justified police support for minorities by citing prospective recruitment figures involving 671 candidates from minority communities. However, this number dropped because of, in the words of the Assistant Chief, an “increase in the community anxiety and maybe people getting focused on future careers to other things as critical as life and death”. The statement thus shifted focus from the allegations of software defects to a status update on the inclusion of minority communities in the LAPD, framing the department’s recruitment standards as too high and rigid for those communities to meet. This underlines the unsettling tone of community bias that exists within the department.
The main conclusion from the above is that a department which has casually failed to distinguish the vulnerable from the privileged, and has persistently sidelined them for unexplained reasons, will produce arrest and charging data that reflects the same bias. It is this data that forms the basis of crime predictions in the notorious LAPD.
Lessons for India and Recommendations
India, much like the United States, is a diverse country home to various social groups. In India, bias towards minority communities is based not on race but on religion and caste. Especially in rural areas, the police do not readily register complaints from vulnerable sections, particularly when the victim belongs to a weaker section of society and the accused to a privileged one. Apart from the malevolent influence of systematic discrimination on the data, the data itself is not well maintained. If such AI machine-learning programmes are introduced into the Indian policing system, India will face some major issues, including issues of accountability.
Firstly, India will face the same problem as the U.S., since the data fed to such systems would come from the same police departments that hold a bias against certain communities, especially in rural areas. Thus, the predictions made by machine-learning programmes would make the weaker sections of society more vulnerable to harsher police action.
Secondly, there will be no one to hold accountable for the increased, unjust measures taken by the police against endangered communities, since the police would hide behind the predictions made by the artificial-intelligence programmes and use them as a shield to justify increased scrutiny of minority groups they already hold a bias against. Meanwhile, the machine-learning programme would be generating its predictions from data fed to it by those same biased police officers.
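This feedback loop can be made concrete with a toy simulation, with all numbers invented. Two areas have identical true crime rates, but recorded crime is proportional to patrols sent (you only find what you look for), and each round one patrol is shifted towards the area with more recorded crime. A small initial imbalance then grows until one area absorbs nearly all policing:

```python
def simulate_feedback(rounds=10, true_rate=0.5, patrols=(11, 9)):
    """Toy model of runaway feedback in predictive policing.

    Both areas have the SAME true crime rate; recorded crime is
    proportional to patrols sent. Each round, one patrol moves
    to the area with more recorded crime so far. All parameters
    are invented for illustration.
    """
    recorded = [0.0, 0.0]
    patrols = list(patrols)
    for _ in range(rounds):
        for area in (0, 1):
            recorded[area] += true_rate * patrols[area]
        hot = 0 if recorded[0] >= recorded[1] else 1
        if patrols[1 - hot] > 0:            # shift one patrol to the "hotspot"
            patrols[1 - hot] -= 1
            patrols[hot] += 1
    return patrols

print(simulate_feedback())
# → [20, 0]  — one area ends up with every patrol, despite equal crime
```

The accountability problem is visible here: no individual step looks unreasonable, the model "just followed the data", yet the outcome is systematically skewed.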
Lastly, it is pertinent to note that although India is advancing rapidly in the technology sector, it still lags far behind developed countries like the United States. A machine-learning prediction technology like Pred-Pol takes several years and a large data set to develop fully and attain precision in its predictions, even in a country as technologically advanced as the United States; such technology in India would require many more years before it could be considered reliable. Moreover, the laws in India are still catching up with these advancements, and such archaic laws leave loopholes that defenders could exploit by attacking the credibility of the technology on whose predictions the foundation of an investigation rests.
These issues are complex and will require continuous steps and improvements on our part, based on observation of and feedback on the working of such technology in the policing system. Still, some comprehensive measures can be suggested at present to minimise the problem.
First and foremost, it is crucial to supervise the data being fed to such machine-learning programs and to involve people from all backgrounds in the process of training them. It is important to ensure that the data being fed is unbiased, and improvements in data- and record-keeping systems would go a long way towards establishing a reliable data set for these machines. Only then can we expect better predictions from such AI machine-learning programmes and ensure that they help us in crime prevention rather than instilling discomfort among the weaker and more vulnerable sections of society.
The reliability of these systems should be improved by introducing simpler, explainable machine-learning models and prediction techniques rather than complex ones. Last but not least, the laws in India have a lot of catching up to do to ensure that the Indian criminal justice system can benefit from such advancements in technology.
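What "simpler and explainable" means in practice can be shown with a minimal example: a linear risk score whose every contribution can be read off, in contrast to a black box. The feature names and weights here are invented for illustration; the point is that a reviewer can see exactly why an area was flagged and challenge any individual term:

```python
def explainable_risk_score(features, weights):
    """A linear score with an inspectable per-feature breakdown.
    Returns (score, contributions); every number in the output
    can be traced to one input and one weight, which is what
    makes the model contestable in court or in review. Feature
    names and weights below are invented."""
    contributions = {f: features[f] * weights[f] for f in weights}
    return sum(contributions.values()), contributions

weights = {"recent_burglaries": 0.5, "streetlight_outages": 0.25}
area = {"recent_burglaries": 4, "streetlight_outages": 3}
score, why = explainable_risk_score(area, weights)
print(score)  # → 2.75
print(why)    # → {'recent_burglaries': 2.0, 'streetlight_outages': 0.75}
```

An accused's counsel, or a court, can interrogate each weight of such a model directly, which is far harder with an opaque proprietary system like Pred-Pol.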