
New guidelines for a new era: AI in clinical trials

Updated: Dec 30, 2020

Pankhuri Bhatnagar,

Research Intern,

Indian Society of Artificial Intelligence and Law.


Introduction and background

If you are among the sceptics who wonder why Artificial Intelligence is needed in the field of healthcare at all, consider this: you work in a pharmaceutical organization. Your team dedicated 15 years to developing a new drug or medical procedure, spent billions on it and monitored patients' progress and recoveries, only to ultimately conclude that the anticipated solution does not work. While this may sound like an exaggeration, it is the crippling reality for most pharmaceutical organizations and research centres, which face a 95% failure rate. On the other side of the coin are patients whose lives are at stake and who may not have 15 years to wait for a lifesaving drug that may or may not work.

With the current methodology, for every 100 drugs that reach first-stage clinical trials, only one ultimately provides the intended results.[1] The clinical research sector is becoming more complex and competitive, with stricter regulatory standards and an enhanced focus on patient safety. Against this backdrop, the industry needs disruption now more than ever, and this is where AI comes into play. By reducing the duration of trials, limiting costs and improving the quality of data, AI offers an innovative way to remove trial inefficiencies and help pharma organizations bring new drugs and therapies to the marketplace at a quicker pace. However, AI-based trials must be subjected to rigorous and prospective evaluation to analyse their impact on patients' health. Unregulated AI interventions may result in more harm than good, which is why two new guidelines have been framed in this regard, namely CONSORT-AI and SPIRIT-AI.

Process of drafting these guidelines

The initiative was announced in October 2019, and both guidelines were registered as under development in the EQUATOR library of reporting guidelines. They were framed through a consensus procedure and developed according to EQUATOR's methodological framework. The initiative first had to be approved by the ethical review committee at the University of Birmingham, UK. The process consisted of consulting experts and performing a literature review to arrive at 29 candidate items, which were analysed by an international group of 103 stakeholders through a two-stage survey. The items were then agreed upon in a two-day consensus meeting and refined through a checklist pilot (involving 31 and 34 stakeholders respectively). Participants were instructed to vote on each item using a 9-point scale: 1–3, not important; 4–6, important but not critical; and 7–9, important and critical.[2] Of the 41 items discussed, 29 were finalized, of which 14 have been included in CONSORT-AI and 15 in SPIRIT-AI.
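The voting procedure above can be sketched in code. This is a minimal, hypothetical illustration: the 9-point scale bands and the 80% inclusion threshold come from the text, but the function names and the sample ballot are invented for demonstration and are not the actual Delphi dataset.

```python
# Hypothetical sketch of the Delphi-style voting described above.
# The scale bands (1-3, 4-6, 7-9) and the 80% threshold are from the
# text; the vote data and function names are illustrative only.

def rate_vote(score: int) -> str:
    """Map a 9-point Delphi score to its importance band."""
    if 1 <= score <= 3:
        return "not important"
    if 4 <= score <= 6:
        return "important but not critical"
    if 7 <= score <= 9:
        return "important and critical"
    raise ValueError(f"score must be 1-9, got {score}")

def reaches_consensus(votes: list[int], threshold: float = 0.80) -> bool:
    """An item passes if at least `threshold` of voters rate it 7-9."""
    critical = sum(1 for v in votes if rate_vote(v) == "important and critical")
    return critical / len(votes) >= threshold

# Illustrative ballot for one candidate item: 9 of 10 voters rate it 7-9.
votes = [9, 8, 7, 7, 9, 8, 6, 9, 7, 8]
print(reaches_consensus(votes))  # True (0.9 >= 0.8)
```

On this sketch, an item voted 7–9 by 90% of stakeholders clears the 80% bar and would be included in the statement.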


CONSORT-AI

CONSORT-AI stands for Consolidated Standards of Reporting Trials – Artificial Intelligence and is an extension of the guidelines previously in place. CONSORT 2010 provides the minimum required standards for reporting randomized trials, and CONSORT-AI extends it, applying only to clinical trials that involve an AI intervention. 14 new items, deemed crucial enough to be regularly reported in addition to the core items, have been added.

Purpose – to promote transparency, explainability and comprehensiveness in the reporting of such trials. It also seeks to help editors, readers and members of the scientific community understand and analyse the quality of the trial design and the risks, biases and other issues that may arise in the reported outcome.

Recommendations added to CONSORT

Of the 14 items added, 11 are extensions and 3 are elaborations. The 14 items[3] that passed the 80% threshold at the consensus meeting for inclusion in the statement are:

  1. 1a,b elaboration (i) – The title of the report must indicate the use of AI or machine learning and specify the model used

  2. 1a, b elaboration (ii) – Describe in the title/ abstract the intended use of the AI intervention in the clinical trial. Some interventions may have numerous objectives, or the objective may evolve over time. Specifying this enables the readers to understand the purpose of intervention at that point of time

  3. 2a extension – Explain the intended use in the context of its purpose and role in a clinical pathway along with the intended users – patients, the general public or healthcare experts.

  4. 4a (i) elaboration - Specify the exclusion and inclusion criteria for participants in the trial. This could be determined by factors like pre-existing health conditions, probability of success and survival, etc.

  5. 4a (ii) elaboration – Specify the exclusion and inclusion criteria at the stage of input data. For example, in case the input data for the trial is in the form of pictures, its eligibility criteria could include image resolution, the format of picture, quality metrics etc.

  6. 4b extension – Describe how the AI system was incorporated into the trial, including any site requirements. The functioning of AI systems generally depends on their environment. The hardware and software may have to be modified or the algorithm may have to be changed according to the study site. This process has to be specified according to the new guidelines.

  7. 5 (i) extension – Identify the version of the AI algorithm used. AI software often undergoes a number of updates over its lifespan, which is why it is important to clarify the version used in the trial. This would also allow independent researchers to verify the study. If possible, the report should also state the differences between AI versions and the reasons for the change.

  8. 5 (ii) extension – Inform how the input data was chosen and acquired. The quality of any AI system depends on the quality of the raw input data fed to it. The selection, acquisition, management and processing of the data must be described. This helps ensure the transparency and comprehensiveness of the trial so that it can be replicated in the real world.

  9. 5 (iii) extension – Inform how low-quality or unavailable data was handled. The trial report should describe the amount of missing input data, how data that did not meet the eligibility criteria was handled, and the impact of this on the trial or patient care.

  10. 5 (iv) extension – Clarify whether there was any AI–human interaction in managing the data and the expertise required of the users. It is important to define the training given to users for handling the input data and the process, for example an endoscopist choosing a set of colonoscopy videos as data for the AI system to detect polyps. Not clarifying this can lead to ethical issues; for example, it may be unclear whether an error occurred because the human did not follow the procedure instructions or because the software made a mistake.

  11. 5 (v) extension – Clarify the nature of the AI output. The usability of the system depends on the nature of the output, which can take the form of a diagnosis, prediction, probability, recommended course of action, or an alarm alerting users to a particular event.

  12. 5 (vi) extension – Describe how the output contributed to decision-making or other aspects of clinical practice. It is crucial to the trial to explain how the output data was utilized, the training required to interpret it, and how it was used to make decisions for the patient.

  13. 19 extension – Explain how errors were spotted and share the results of an analysis of these errors. Even minor issues in AI functioning can have catastrophic consequences when implemented at a larger scale. Thus, it is necessary to observe and report operational errors and come up with ways to mitigate the risks. In cases where such an error analysis is not undertaken, reasons must be given for not performing the same.

  14. 25 extension – Inform whether the AI intervention or its code can be made available to and accessed by interested parties. It should also specify the licensing and other relevant restrictions to its use.


SPIRIT-AI

SPIRIT-AI stands for Standard Protocol Items: Recommendations for Interventional Trials – Artificial Intelligence. It was framed in parallel with its companion statement, CONSORT-AI. The procedure for finalizing the items for both guidelines was essentially the same, after which the items were categorized into the two statements.

Purpose – SPIRIT-AI, like its counterpart, aims to promote transparency and completeness. It also seeks to help readers understand and appraise the clinical trial and the risks that may be associated with it.

Recommendations added to SPIRIT

There are 15 new items (12 extensions and 3 elaborations) which have been added to the core SPIRIT 2013 items and are now required to be regularly reported:[4]

  1. 1 (i) elaboration – The title/abstract of the report must indicate the use of AI or machine learning and specify the model used. The title should be simple and capable of being understood by a large audience; specific terminology regarding the AI type should be used only in the abstract. Same as 1a,b (i) of CONSORT-AI.

  2. 1 (ii) elaboration – The purpose of the intervention and the context of the illness must be elaborated in the title or abstract. Same as 1ab (ii) of CONSORT-AI.

  3. 6a (i) extension – Provide a description of the role of intervention in the clinical pathway, its aims, uses and the intended users for which it is designed. Same as 2a of CONSORT-AI

  4. 6a (ii) extension – Inform of any pre-existing evidence (whether published or unpublished) regarding the validation of the AI intervention. It must be considered whether the evidence was for uses and a target population similar to that of the clinical trial.

  5. 9 extension – Specify the site requirements for integrating the AI system into the trial, such as whether the AI required vendor-specific models, special computing hardware, or on-site fine-tuning. Same as 4b of CONSORT-AI.

  6. 10 (i) elaboration – Same as 4a (i) of CONSORT-AI

  7. 10 (ii) extension – Same as 4a (ii) of CONSORT-AI

  8. 11a (i) extension – Same as 5(i) of CONSORT-AI

  9. 11a (ii) extension – Same as 5 (ii) of CONSORT-AI

  10. 11a (iii) extension – Same as 5(iii) of CONSORT-AI

  11. 11a (iv) extension – Same as 5(iv) of CONSORT-AI

  12. 11a (v) extension – Same as 5(v) of CONSORT-AI

  13. 11a (vi) extension – Same as 5(vi) of CONSORT-AI

  14. 22 extension – Same as 19 of CONSORT-AI

  15. 29 extension – Same as 25 of CONSORT-AI

Thus, from this list, it is clear that all the guidelines provided in SPIRIT-AI and CONSORT-AI are common, apart from one additional provision in SPIRIT-AI, namely guideline 6a (ii), which requires the trial authors to substantiate published evidence with references, or to provide proof of unpublished evidence, regarding the validation (or lack thereof) of the AI intervention.

Analysis of the Guidelines

These recommendations provide international consensus-based guidance on the data that has to be disclosed in clinical trials involving an AI intervention. They do not seek to prescribe the methodology or approach to such trials; rather, their purpose is to bring clarity and transparency to the reporting process. This would enable researchers and readers to interpret the methods and results of the trial and would encourage peer review. Certain details are crucial for independent verification and yet are often excluded from reports, such as the AI version, the input and output data, and the training of handlers, which is why provisions were made to include these in the report. It must be noted that these guidelines lay down minimum reporting standards; additional AI-based factors to consider when preparing trial reports may be found in the supplementary table[5] of these recommendations.

Benefits of the issued guidelines

  1. Boosting transparency, robustness and completeness

  2. Encouraging peer review

  3. Allowing readers to understand and appraise the trials

  4. Recognizing the strong drivers in the field

  5. Serving as useful guidance for the programmers of AI systems and those who develop AI-based trials

  6. Expected improvement in the quality of such trials

  7. Encouraging early planning for AI intervention in clinical trials

  8. Enabling patients and healthcare professionals to have confidence in the safety of an AI-based medical technique

Challenges and limitations of the guidelines and use of AI in clinical trials

1) Safety of the AI systems – A major concern of the Delphi survey group was the safety of using AI systems. Machines can make errors that cannot be easily detected or explained by the human mind, but which can grossly impact the output and, hence, the health of patients. This is why researchers have been encouraged to perform error analysis and report the results.

2) Continuously evolving nature of AI systems – Machine learning systems can continue to adapt as they train on new data; such systems are described as 'continuously learning' or 'continuously adapting', and their performance may be drastically altered over time. It is important to monitor and identify these changes. However, since the technology is still in its infancy, this concern was reserved for future discussions.

3) A major limitation of the study is that it was based on the current state of AI in healthcare, which is still at an early stage, with only 7 published trials involving an AI intervention.

4) It must be remembered that, at the end of the day, an AI system is a machine and not a human. It depends on the data on which it was built and may not always have the right answers. It does not feel emotions, have empathy or consider the ethical consequences of a decision.[6] Researchers should make use of these technologies, but also apply their own reasoning to critically analyse the output and take the final decision.


Conclusion

Our society is now moving beyond using AI and ML as mere buzzwords to actually implementing these technologies in practice. The most successful applications of AI in clinical trials have been in identifying participants, classifying images and discovering drugs. This has helped reach patients faster, reduce R&D time and support efficient, evidence-based decision-making. Machines can process large volumes of data quickly and consistently, which makes them attractive for rapid decision-making, cost-cutting and, ultimately, saving patients' lives. With the increasing complexity of clinical trials, the sophistication of technology and the rise in population and medical cases, the use of AI in healthcare is inevitable if the field is to keep up with the growing volume of data and cases.

However, Artificial Intelligence is a rapidly evolving field. While most practical applications of AI technology currently focus on detection, triage and diagnosis, wider applications may emerge in the future. We may also witness advances in machine learning, better algorithms and computational techniques capable of disrupting the field of healthcare to an even greater degree. These advancements will bring new challenges in the reporting and design of trials, and both guidelines will need to be updated accordingly. To minimize risks and biases and maximize the trustworthiness and transparency of results, the SPIRIT-AI and CONSORT-AI groups will have to carefully monitor the need for updates.


[1] Veerabhadra Sanekal Nayak et al., Artificial intelligence in clinical research, 3 International Journal of Clinical Trials 187 (2016).

[2] Samantha Cruz Rivera et al., Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension, 26 Nature Medicine 1351–1363 (2020).

[3] CONSORT-AI (last visited Sep 27, 2020).

[4] SPIRIT-AI (last visited Sep 27, 2020).

[5] Nature Research, Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension.

[6] Jenni Spinner, Phastar: AI, machine learning can transform clinical trials (2020) (last visited Sep 27, 2020).

The Indian Learning, e-ISSN: 2582-5631, Volume 1, Issue 2, January 31, 2021.
