
Analysis of the 2-day workshop conducted by Crime Science & a review of "AI-enabled Future Crime"

Mridutpal Bhattacharya,

Junior Associate Editor,

The Indian Learning.


February 2019 witnessed a brilliant ripple in the river of time, a ripple that may well lay the foundations for a brighter tomorrow, a greater tomorrow, a future of prosperity in which crime is at an all-time low across the whole world & Artificial Intelligence is used only for good.

As grand as it sounds, a workshop was held by Crime Science, an international peer-reviewed journal, attempting to measure, or at least imagine, a future in which crime could be aided by the disruptive blessing of Artificial Intelligence. The conventional methods of debate & discussion were done away with, & a newer method was adopted: one that is more efficient & less time-consuming. The backdrop of the exercise is an age in which multitudes of recent efforts have been & are being made to identify & classify the threats posed to the world at large by the potential of AI-assisted crime. The exercise did not seek to answer the age-old questions regarding the effective inhibition of crime through AI, or the prediction of crime through AI. Rather, this innovative attempt at perceiving, understanding & classifying the threats of the future was valuable in itself, for understanding a problem, anticipating it & assigning it a certain gravity undoubtedly inches us closer to solving it.

The workshop exemplified the zenith of the project. It endeavoured to classify threats into groups signifying their threat levels. It involved a diverse group of participants, ranging from researchers & thinkers in security & academia to officers of the public & private industries. The meeting took a multidisciplinary approach & encompassed various fields of expertise. The principal aim was to identify vulnerabilities by imagining possible crimes & then assessing the severity of the threat posed by each; the view into the future was limited to a maximum of 15 years. The consideration of a relatively broad view of criminality was encouraged: the assumption made was that the laws could adapt to changing circumstances.

In the preliminary phases, sample scenarios were cited, ranging from real crimes that have taken place or could take place in the future to crimes in fiction & popular culture, as the latter can act as an adequate measure of contemporary concerns & anxieties. The examples were organized into 3 non-exclusive categories based on the relationship between crime & AI:

  1. Defeating AI – e.g. breaking into devices secured by facial recognition.

  2. AI to prevent crimes – e.g. spotting fraudulent trading on financial markets.

  3. AI to commit a crime – e.g. blackmailing people with “deepfake” video.

These categories were eventually refined & merged to form the basis for the workshop sessions.

The workshop was organized by Crime Science & the Dawes Centre for Future Crime at UCL, & was attended by 31 delegates: 14 from academia (Security & Crime Science, Computer Science, Public Policy; with attendees from UCL, Bristol & Sheffield); 7 from the private sector (AI technology, finance, retail); & 10 from the public sector (defence, police, government; including the Home Office, Defence Science & Technology Laboratory, National Crime Agency, National Cyber Security Centre & the College of Policing). The majority of attendees were UK-based, with 3 private-sector delegates being from Poland.

Each main session of the 2-day workshop focused on a different theme of AI-enabled crime:

  1. Patterns & Predictions

  2. Fake Content

  3. Snooping, Biometrics & Beating AI

  4. Autonomy: Vehicles, Robots & Software.

Collaborating in groups of 4–6, the delegates rated possible AI-enabled crimes devised by the Organizing Committee, as well as additional crimes generated within the groups. The crimes were assessed along four dimensions that had been identified by the organizing team during the review phase as useful for understanding different aspects of threat severity, as follows:

  • Harm – Harm towards the victim or the society at large.

  • Criminal Profit – Realization of a criminal aim.

  • Achievability – Feasibility of the crime.

  • Defeatability – Feasibility of measures to inhibit the crime or apprehend the perpetrator(s).

Each crime was summarized by a short phrase (e.g. "AI snake oil") written on a sticky note & stuck onto a grid of 16 squares, each 10 × 10 cm. Starting with empty grids, the delegates populated them with notes for crimes. Across all grids, columns to the left represented the less severe end of the spectrum & columns to the right the more severe end. Delegates reported that Q-sorting was a helpful approach to comparing crimes, & sufficient time was allocated to the process so that each possible crime could be discussed.
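The sorting exercise described above can be sketched in code. This is purely illustrative: the crimes, the 1–4 scores & the naive severity formula below are invented for the example, not data or a method from the workshop itself.

```python
# Hypothetical ratings along the four workshop dimensions (1 = low, 4 = high).
# These numbers are invented for illustration only.
ratings = {
    "AI snake oil":      {"harm": 1, "profit": 2, "achievability": 4, "defeatability": 3},
    "Tailored phishing": {"harm": 3, "profit": 4, "achievability": 4, "defeatability": 2},
    "Burglar bots":      {"harm": 2, "profit": 2, "achievability": 2, "defeatability": 4},
}

def severity(scores):
    """Naive overall severity: harm, criminal profit & achievability raise it;
    defeatability (how easily the crime can be stopped) lowers it."""
    return (scores["harm"] + scores["profit"]
            + scores["achievability"] - scores["defeatability"])

# Order crimes from the "less bad" end (left columns of the grid)
# to the "worse" end (right columns), as in the workshop's Q-sort.
ordered = sorted(ratings, key=lambda crime: severity(ratings[crime]))
print(ordered)
```

The point of the sketch is only that each crime's position on the grid emerges from comparing multi-dimensional ratings; the delegates did this comparison through discussion rather than a fixed formula.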

The crimes identified as the greatest concerns were:

  • Audio/Visual Impersonation

  • Driverless Vehicles as weapons

  • Tailored Phishing

  • Disrupting AI-controlled systems

  • Large Scale Blackmail

  • AI-authored fake news

The crimes designated to be medium-level threats were:

  1. Military Robots

  2. Snake Oil

  3. Data Poisoning

  4. Learning-Based Cyber Attacks

  5. Autonomous Attack Drones

  6. Online Eviction

  7. Tricking Facial Recognition

  8. Market Bombing

The crimes designated to be of the lowest threat level were:

  • Bias Exploitation

  • Burglar Bots

  • Evading AI Detection

  • AI-authored fake reviews

  • AI-assisted stalking

  • Forgery

The results from a futures workshop or endeavour like this are essentially speculative & reflect the range of knowledge, experience & priorities of the participants. Nevertheless, the outcomes provide useful insight into the prevailing concerns & how these are expected to play out in the years ahead. This particular exercise showed that the delegates were especially concerned about scalable threats: crimes capable of inflicting severe harm on individuals or on society at large.

These exercises are also essential for understanding the conditions & technology prevalent today, from which future crimes are certain to evolve. Although the resolution or mitigation of the crimes that may or may not arise was not the object of this particular project, it is still worth considering how the results could be used by stakeholders to inform their own responses to the potential crimes identified & discussed. One possible approach would be to examine the trade-off between harm & defeatability as a guide to where effort & expenditure might most efficiently be targeted.
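As a toy illustration of that harm-versus-defeatability trade-off, one might rank threats by combining high harm with low defeatability, since those are the hardest to stop & the most damaging when they succeed. The scores below are invented for the example & are not the workshop's actual ratings.

```python
# Hypothetical (invented) harm & defeatability scores on a 1-4 scale.
threats = [
    ("Audio/visual impersonation", 4, 1),  # high harm, hard to defeat
    ("Data poisoning",             3, 2),
    ("AI-authored fake reviews",   1, 3),  # low harm, easier to defeat
]

# Prioritize threats that combine high harm with low defeatability:
# these are where preventive effort & expenditure may pay off most.
prioritized = sorted(threats, key=lambda t: t[1] - t[2], reverse=True)
for name, harm, defeatability in prioritized:
    print(f"{name}: harm={harm}, defeatability={defeatability}")
```

Any real prioritization would of course weigh the dimensions far more carefully; the sketch only shows how two of the workshop's four dimensions could be traded off against each other.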

In conclusion, it can be inferred that projects such as this are essential to the development of the human intellect & of a nation's laws & policies, & that they can potentially be the problem-solvers of tomorrow.

As these projects aim at understanding & framing the problem in the first place, it is imperative that the human mind, which caused the problem in the first place, be able to find a solution to it. Projects such as this should be encouraged & taken on by all major think-tanks keen on understanding, & entities keen on perhaps monopolizing, the problems & solutions of the future. The age of artificial intelligence has come, & the age of many superior forms of artificial intelligence is surely forthcoming; it is therefore wise to ensure that the solutions are ready at hand before the disaster itself strikes.


The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission, as a non-profit industry body for the analytics & AI industry in India, is to promote the responsible development of artificial intelligence and its standardisation in India.


Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
