Adaptive Clinical Trial Designs Are Innovative Despite Unique Operational Challenges
Contributed Commentary by Rolf Hövelmann and Parvin Fardipour
September 20, 2017 | Adaptive clinical trial designs provide an innovative approach to drug and device development that can enhance efficiency and maximize portfolio value. However, adaptive trials also present unique operational challenges in ensuring data quality and integrity and in maintaining the blinding of interim data analyses and adaptive changes.
Adaptive interim analyses require quality data that are consistent and accurate up to the interim analysis cut-off date. Any subsequent change to the data used at the interim can invalidate an adaptive change based on those data, and with it the entire trial.
A look at a Phase III superiority trial of a new treatment for chronic graft-versus-host disease highlights some of the data quality issues that can affect trial results. The study employed an adaptive design, originally with a single interim analysis. At the first interim analysis, of 73 patients, the Independent Data Monitoring Committee (IDMC) recommended increasing the total sample size from 126 to 225 patients and performing a second interim analysis after 100 additional patients had been observed. The second interim analysis, of 175 patients, concluded that the trial had slightly exceeded its critical value. The trial was stopped early for efficacy, with nine recently enrolled patients permitted to finish.
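The group-sequential logic behind such an interim decision — compare a test statistic at each look against a pre-specified critical value — can be sketched in a few lines. This is a generic illustration, not the trial's actual analysis: the two-look O'Brien–Fleming-type boundaries (roughly 2.797 and 1.977 for an overall two-sided alpha of 0.05) and all patient counts and response numbers below are assumptions made for the example.

```python
import math

def z_two_proportions(x1, n1, x2, n2):
    """Z-statistic for the difference of two response proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Approximate two-look O'Brien-Fleming-type critical values
# (overall two-sided alpha = 0.05): look 1, then look 2.
BOUNDARIES = [2.797, 1.977]

def interim_decision(look, z):
    """Stop for efficacy if the statistic exceeds this look's boundary."""
    return "stop for efficacy" if z > BOUNDARIES[look] else "continue"

# Hypothetical second look: 52/88 responders on treatment vs 34/87 on control.
z = z_two_proportions(52, 88, 34, 87)   # about 2.65
print(interim_decision(1, z))           # → stop for efficacy
```

The key design point is that the boundaries are fixed before any data are seen; the interim decision is then mechanical, which limits the discretion that could otherwise leak information.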
Real-time data collection, cleaning, and monitoring are essential in driving the timely decisions at the heart of adaptive trials.
To ensure that interim analysis results and decisions are statistically robust, a sensitivity analysis should be planned within the decision-making process. In this study, when the invalidated data were excluded from the final analysis, the results were no longer statistically significant. The data quality issues in this study were:
- Major protocol deviations in one subject
- Change in analysis strategy between interim analysis and final analysis in two subjects
- Data changes in two patients
All of these data quality issues led to trial failure in this case study. This example highlights how critical data quality and study conduct quality are to the success of adaptive design studies. Continuous central data monitoring and data cleaning might have prevented this failure.
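A pre-planned sensitivity analysis of the kind described above can be as simple as re-running the primary comparison with the questionable subjects excluded and checking whether the conclusion survives. The sketch below is a minimal illustration with hypothetical subject records and a normal-approximation two-proportion test; it is not the study's actual statistical method.

```python
import math

def two_sided_p(x1, n1, x2, n2):
    """Normal-approximation two-sided p-value for two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

def sensitivity(subjects, flagged_ids, alpha=0.05):
    """Return (significant with all subjects, significant without flagged)."""
    def analyse(rows):
        arm = {0: [0, 0], 1: [0, 0]}  # arm -> [responders, n]
        for sid, a, responded in rows:
            arm[a][0] += responded
            arm[a][1] += 1
        return two_sided_p(arm[1][0], arm[1][1], arm[0][0], arm[0][1])

    p_all = analyse(subjects)
    p_excl = analyse([s for s in subjects if s[0] not in flagged_ids])
    return p_all < alpha, p_excl < alpha

# Hypothetical data: 13/20 responders on treatment vs 6/20 on control,
# with two treatment responders flagged for data quality issues.
subjects = (
    [(f"T{i}", 1, 1) for i in range(13)]
    + [(f"T{i}", 1, 0) for i in range(13, 20)]
    + [(f"C{i}", 0, 1) for i in range(6)]
    + [(f"C{i}", 0, 0) for i in range(6, 20)]
)
print(sensitivity(subjects, {"T0", "T1"}))  # → (True, False)
```

In this constructed example, excluding just two flagged subjects flips the result from significant to non-significant, mirroring the failure mode the case study describes.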
Another challenge adaptive trials present is operational bias, which can arise when unblinded information leaks to study investigators and compromises trial data integrity. Even investigators' knowledge of the adaptive choices and adaptation decision rules can introduce operational bias, as it may influence how they treat, manage, or evaluate patients. Interim analysis results and decisions should therefore be guarded. To mitigate this potential source of operational bias, steps should be taken at the design stage and throughout the trial.
Decision criteria should be confidential and described in documents available only to Data Monitoring Committee (DMC) members and the unblinded statistical team supporting the DMC. Technology solutions that restrict dissemination of unblinded interim reports to authorized DMC members have proven effective in protecting the integrity of adaptive trials.
One or more planned, unblinded interim analyses may be used to reassess sample size based on the observed treatment effect size. These designs raise operational challenges, including the appropriate timing of the interim analyses and the variability of observed results between interim looks.
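Sample size reassessment of this kind can be illustrated with the standard two-proportion formula: recompute the patients needed per arm to detect the observed difference, then adjust the planned enrollment within pre-specified limits. Everything below — the observed rates, planned size, and cap — is hypothetical, and real trials typically use more sophisticated conditional-power methods.

```python
import math

Z_ALPHA = 1.96  # two-sided alpha = 0.05
Z_BETA = 0.84   # power = 0.80

def n_per_arm(p1, p2):
    """Patients per arm needed to detect the observed rate difference."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

def reestimate(p_trt, p_ctl, planned_per_arm, cap_per_arm):
    """Keep the planned size if adequate; otherwise increase up to a cap."""
    needed = n_per_arm(p_trt, p_ctl)
    return min(max(needed, planned_per_arm), cap_per_arm)

# Hypothetical interim look: 55% vs 40% observed response rates.
print(n_per_arm(0.55, 0.40))               # → 170
print(reestimate(0.55, 0.40, 130, 260))    # → 170 (increase from 130)
```

Pre-specifying the cap and the adjustment rule, as sketched here, is part of what keeps the adaptation from becoming a source of bias: the unblinded team applies a formula rather than exercising judgment on the interim data.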
A randomized, double-blind Phase III non-inferiority trial of a new treatment for acute infectious diarrhea against a comparator provides an example of a design with two planned interim analyses that yielded different recommendations at interim analysis 1 and interim analysis 2.
This study employed an adaptive design calling for two interim analyses to adjust sample size. The first interim analysis, of 388 patients, confirmed the initial 776-patient enrollment target because the observed efficacy outcome was on track to meet the success threshold. The second interim analysis, of 582 patients, recommended increasing enrollment to 1,032 patients based on a decline in the efficacy outcome. After seven years, the trial was stopped early for slow recruitment with 835 patients enrolled. The final efficacy outcome had increased, exceeding the final success threshold.
In this case, physicians’ decisions regarding patient selection and data recording may have been influenced by knowledge of the adaptive decisions. The initial confirmation of the sample size suggested the study drug was performing as expected; the later enlargement suggested the treatment might not be as effective. Such operational bias often appears as a sharp change in results following an adaptive change.
Rolf Hövelmann is the Senior Manager, Biostatistics at ICON. Rolf has over 20 years' experience in clinical research providing statistical expertise for the analysis of clinical trials in all major therapeutic areas, including oncology, neurology, respiratory, and gastrointestinal disorders. He can be reached at Rolf.Hoevelmann@iconplc.com.
Parvin Fardipour is the Vice President, Adaptive Clinical Trials at ICON. Fardipour has over 29 years of clinical research experience in the pharmaceutical and contract research organization (CRO) industry. For the past 11 years, Fardipour has been working in the adaptive design space to bring innovative approaches to clinical drug development and facilitate better and earlier decision making. She can be reached at Parvin.Fardipour@iconplc.com.