Using Interim Analyses to Improve Efficiency in Drug Development

By Valerie Harding  

June 3, 2011 | Guest Commentary | Interim analyses are widely used in clinical trials. They offer companies the opportunity to stop studies early when it appears the primary objective will not be achieved (often referred to as ‘stopping for futility’). This reduces the overall cost of the trial, allows companies to focus resources on other projects, and has the ethical benefit of exposing fewer patients to a treatment that is not effective. Interim analyses also enable companies to halt trials when there is clear evidence that the primary objective has already been met (‘stopping for efficacy’ or ‘stopping for success’). This not only lowers costs but also shortens the time taken to develop the compound, which benefits both the company and patients.

The term ‘Bayesian’ refers to an approach named after the statistician Thomas Bayes. In clinical trials, this approach can be used to estimate the probability distribution of the drug effect, given both the prior belief about the effect held before the trial and the data collected during the trial. This framework lends itself naturally to interim analysis. The basic structure of the Bayesian approach is as follows:

 

  • (i) Start by expressing a belief about the likely magnitude of effect of the compound, and the confidence in that belief (Prior belief)  
  • (ii) Gather some data to explore the likely magnitude of that compound’s effect (Likelihood)  
  • (iii) Based on the collected data, update the initial belief about the likely magnitude of effect of the compound to obtain a revised belief (Posterior belief).

 

Once clinicians have the posterior belief (step iii), it can be used as the prior belief (step i) for the next study, or for the next stage of the same study, and the steps can be followed again. Applying Bayesian approaches to interim analysis simply involves going round this cycle. The first time round, the prior belief may be one of ignorance (i.e. no belief about the likely magnitude of the treatment effect) or it may come from previous studies with the compound. Once data have been gathered up to the first interim analysis, the first posterior belief is derived; this becomes the prior belief for the next cycle, and so on. The process is repeated until the posterior belief is strong enough to conclude either that the compound is effective or that collecting more data would be futile. At each interim analysis, there is also the option to adapt the study design based on the belief at that stage.
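
As an illustration, this update cycle can be sketched for a single-arm trial with a binary responder endpoint, where a conjugate Beta prior makes each posterior update a simple addition of counts. The patient numbers below are assumptions for illustration, not figures from the text:

```python
# Minimal sketch of the prior -> data -> posterior cycle for a single-arm
# trial with a binary (responder / non-responder) endpoint, using a conjugate
# Beta prior so each update is simple parameter arithmetic.

def update_beta(prior_a, prior_b, responders, n):
    """Posterior Beta parameters after observing `responders` of `n` patients."""
    return prior_a + responders, prior_b + (n - responders)

# Cycle 1: a prior of ignorance, Beta(1, 1), updated with the data gathered
# up to the first interim analysis (12 responders out of 30 patients).
a, b = 1.0, 1.0
a, b = update_beta(a, b, responders=12, n=30)

# Cycle 2: the posterior from cycle 1 becomes the prior for the next batch.
a, b = update_beta(a, b, responders=15, n=30)

print(a, b)  # Beta(28, 34): posterior mean response rate 28/62, about 0.45
```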

How does a clinical development team decide whether it has gathered enough evidence to stop the trial? There are two straightforward approaches:   

Posterior probability: The first is to calculate, at the interim analysis, the posterior probability that the true difference between the test compound and the control is greater than the target effect. This can answer the questions: What is the probability (given the data gathered so far) that our test compound is efficacious? What is the probability (given the data gathered so far) that our test compound delivers the effect that we need? If these probabilities are sufficiently high (suggesting success) or sufficiently low (suggesting futility), the study can be stopped.
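
A minimal sketch of such a calculation, assuming a single-arm binary endpoint: the Beta(13, 19) posterior and the 0.30 target response rate below are illustrative assumptions, not figures from the text.

```python
import random

# Monte Carlo estimate of the posterior probability that the true response
# rate exceeds a target, under an assumed Beta(13, 19) posterior.

def posterior_prob_exceeds(a, b, target, draws=100_000, seed=0):
    """P(true response rate > target) under a Beta(a, b) posterior."""
    rng = random.Random(seed)
    hits = sum(rng.betavariate(a, b) > target for _ in range(draws))
    return hits / draws

prob = posterior_prob_exceeds(13, 19, target=0.30)
# Compare `prob` against pre-agreed thresholds: very high -> stop for
# success, very low -> stop for futility, otherwise continue recruiting.
```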

Predictive probability: A second approach calculates, at the interim analysis, the predictive probability of achieving a successful result at the end of the study. This can be particularly helpful if statistical criteria for determining success/failure of the study have been clearly defined. The predictive probability answers the question: What is the probability (given the data gathered so far and the planned number of additional subjects to be recruited) of meeting the criteria for success at the end of the study? If this predictive probability at the interim is low, the study can be stopped for futility, since the evidence shows it is unlikely to be successful if continued to the end.
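
A sketch of a predictive-probability calculation under an assumed (illustrative) setup: 12 responders in the first 30 patients, 30 more patients planned, and the study counts as a success if at least 25 of the 60 patients respond; a Beta(1, 1) prior then gives a Beta(13, 19) posterior at the interim.

```python
import random

# Predictive probability of final success for a single-arm binary endpoint:
# draw a plausible true response rate from the interim posterior, simulate the
# remaining patients, and check the pre-defined final success criterion.

def predictive_prob_success(post_a, post_b, seen, n_remaining, needed_total,
                            sims=50_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        p = rng.betavariate(post_a, post_b)             # plausible true rate
        future = sum(rng.random() < p for _ in range(n_remaining))
        if seen + future >= needed_total:               # final success criterion
            hits += 1
    return hits / sims

pp = predictive_prob_success(post_a=13, post_b=19, seen=12,
                             n_remaining=30, needed_total=25)
# A low value of `pp` would support stopping for futility at the interim.
```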

There is a risk with both approaches, however, that a wrong decision could be made at the interim. The study could be stopped for futility when in truth the test compound is effective. Alternatively, the study could be stopped early for success when in truth the test compound is not effective, leading to failure later in development. It is therefore important before the study begins both to agree the planned stopping thresholds (i.e. the amount of evidence, in terms of posterior or predictive probability, required for a ‘stop’ decision) and to assess up front the risks of making wrong decisions. This can be done using simulations to show how often the selected decision rules will lead to ‘stop’/‘continue’ decisions for different possible values of the true treatment effect.
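
Such a simulation can be sketched as follows, under an assumed (illustrative) rule: after 30 patients, stop for futility if the posterior probability that the response rate exceeds 0.30 falls below 0.20, starting from a vague Beta(1, 1) prior. The simulation estimates how often the rule fires for several possible true response rates.

```python
import random

# Operating characteristics of an interim futility rule: for each assumed
# true response rate, simulate many trials to the interim and record how
# often the pre-agreed stopping rule would halt the study.

def stops_for_futility(true_p, rng, n_interim=30, target=0.30,
                       threshold=0.20, draws=1_000):
    responders = sum(rng.random() < true_p for _ in range(n_interim))
    a, b = 1 + responders, 1 + n_interim - responders  # Beta(1, 1) prior update
    post = sum(rng.betavariate(a, b) > target for _ in range(draws)) / draws
    return post < threshold

rng = random.Random(0)
rates = {}
for true_p in (0.20, 0.30, 0.40):
    rates[true_p] = sum(stops_for_futility(true_p, rng) for _ in range(500)) / 500
    print(f"true rate {true_p:.2f}: rule stops for futility "
          f"in {rates[true_p]:.0%} of simulated trials")
```

A good rule stops often when the true effect is below the target and rarely when it is above, which is exactly what the simulated stop rates let the team check before the study begins.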

Historically, Bayesian approaches have been difficult to implement because of their computational complexity. In recent years, however, improvements in computer technology have removed this barrier, and these methods have started to take off, particularly in earlier-phase trials. The concepts of posterior and predictive probability are intuitive for making interim decisions about whether to continue or stop a study early. Simulations of proposed designs can help assess the performance of different decision rules and assist in determining the sample size and the timing of the interim analysis. Using these methods in clinical drug development can result in efficient studies that make the best use of resources whilst ensuring good chances of success.

Valerie Harding is Quanticate’s (www.quanticate.com) Communications Account Director. Email: vharding@recommunication.com