The statistical interpretation of pilot trials: should significance thresholds be reconsidered?
Ellen C Lee, Amy L Whitehead, Richard M Jacques and Steven A Julious
DOI: 10.1186/1471-2288-14-41
© Lee et al.; licensee BioMed Central Ltd. 2014
Received: 18 October 2013
Accepted: 12 March 2014
Published: 20 March 2014
Abstract
Background
In an evaluation of a new health technology, a pilot trial may be undertaken prior to a trial that makes a definitive assessment of benefit. The objective of pilot studies is to provide sufficient evidence that a larger definitive trial can be undertaken and, at times, to provide a preliminary assessment of benefit.
Methods
We describe significance thresholds, confidence intervals and surrogate markers in the context of pilot studies and how Bayesian methods can be used in pilot trials. We use a worked example to illustrate the issues raised.
Results
We show how significance levels other than the traditional 5% should be considered to provide preliminary evidence for efficacy and how estimation and confidence intervals should be the focus to provide an estimated range of possible treatment effects. We also illustrate how Bayesian methods could assist in the early assessment of a health technology.
Conclusions
We recommend that in pilot trials the focus should be on descriptive statistics and estimation, using confidence intervals, rather than formal hypothesis testing and that confidence intervals other than 95% confidence intervals, such as 85% or 75%, be used for the estimation. The confidence interval should then be interpreted with regards to the minimum clinically important difference. We also recommend that Bayesian methods be used to assist in the interpretation of pilot trials. Surrogate endpoints can also be used in pilot trials but they must reliably predict the overall effect on the clinical outcome.
Keywords
Pilot trial; Power; Type I error; Confidence interval; Significance; Bayesian methods

Background
In an evaluation of a new health technology, a pilot trial may be undertaken prior to a trial that makes a definitive assessment of benefit. The main objective of a pilot trial is to provide sufficient assurance to enable a larger definitive trial to be undertaken. For example, a pilot may assess aspects such as recruitment rates or whether the technologies can be implemented.
Pilot studies are more about learning than confirming: they are not designed to formally assess evidence of benefit. As such, for clinical endpoints, rather than formal hypothesis testing to prove definitively there is a response, it is usually more informative to provide an estimate of the range of possible responses [1, 2]. This estimation may not be around the primary endpoint for the definitive study but could be on a surrogate or an early assessment of an endpoint which may be assessed at a later time point in the definitive study [3].
In this paper we present and discuss approaches towards significance thresholds and confidence interval levels in pilot studies. The methods are divided into three main sections. In the first, we provide alternatives to hypothesis testing using the conventional 5% significance level. We then discuss the use of surrogate outcomes in pilot studies. Finally, a Bayesian approach to significance thresholds is introduced. Throughout the paper we use a worked example to illustrate the methods discussed.
Methods and results
Significance and confidence levels
Pilot studies are not formally powered to assess effect. However, it may be of interest to calculate confidence intervals to describe the range of effects, even if this is not a conventional 95% confidence interval. In this section we give a rationale for confidence interval estimation and “hypothesis testing” in pilot studies.
Significance levels and power calculations
Pilot studies are usually underpowered to achieve statistical significance at the commonly used 5% level. Despite recommendations that formal significance levels are not provided for pilot studies [4, 5], many still quote and interpret P-values. In a survey of pilot studies published in 2007–8, Arain et al. [6] found that 81% (21/26) of pilot studies performed hypothesis tests in order to comment on the statistical significance of results. If the primary purpose of a pilot study is to provide preliminary evidence of the efficacy of an intervention, then the significance level can be increased for hypothesis testing [7]. Stallard [8] recommends that the design for a phase II trial is based on a one-sided Type I error rate of α = 0.2, whilst Schoenfeld [9] proposed an even higher Type I error rate for preliminary testing in pilot trials: up to a one-sided α = 0.25. In studies other than drug trials, the setting and personnel may not be representative of a future main trial: a pilot trial might see a greater treatment difference due to protocol adherence and enthusiasm in the pilot centre, which might not be replicated in a multicentre trial. Nevertheless, the pilot may still be underpowered for a traditional 5% significance threshold.
It should be noted that in the context of a pilot study a Type I error has a different impact. For a definitive study, a Type I error would mean therapies or health technologies being falsely concluded as beneficial. In that context it is referred to as society's risk, and the wish is for the Type I error to be as low as possible. For a pilot study, the impact of a Type I error is that a definitive study may be falsely undertaken. Although there is a consequence for patients in the trial – being randomised to therapies when there is equipoise – the impact of this false positive error falls mainly on the sponsor or funder; i.e., the sponsor spends money and resources on the 'wrong' study, one that will not result in a true effect or benefit from the new technology.
The aim of a pilot study, therefore, is to inform both the decision whether to conduct a confirmatory study and the design of the larger confirmatory trial. Any P-values interpreted in a pilot study should carry a disclaimer that the study is not adequately powered [10, 11]; and while post hoc power calculations are possible [11], they are generally not advisable [12]. Instead, estimation and confidence intervals should be used to infer the size and direction of the treatment effect.
Confidence intervals
It is recommended that in pilot trials the focus is on descriptive statistics and estimation rather than formal hypothesis testing [4]. A confidence interval for the treatment effect will, amongst other factors, inform the decision whether or not to perform a confirmatory trial. The confidence interval should be interpreted with regards to the minimum clinically important difference (MCID) [12]; this is the difference between treatment groups that is considered to be clinically meaningful, specified a priori. If a confidence interval for the treatment difference crosses both zero and the MCID, then the results of the pilot study could be considered to be equivocal: there could be no difference between treatments, or there could be a difference larger than the MCID, and the results would not preclude either possibility. This approach is superior to formal hypothesis testing as there is insufficient power to test hypotheses, and its focus on the MCID will help inform the main confirmatory trial. Interpreting confidence intervals this way also helps investigators visualise the evidence of effect from the pilot trial.
It is common to report the 95% confidence interval, which corresponds to a 5% significance level. In a pilot study, without adequate power, we can consider investigating confidence intervals of different widths to help inform our decision making; these can then be displayed alongside each other to illustrate the strength of preliminary evidence. We suggest setting minimum prior requirements: that the mean treatment difference is above zero, and that a confidence interval of a certain width includes (or is above) the MCID.
Worked example
The Leg Ulcer Study was a randomised controlled trial designed to investigate the relative cost effectiveness of community leg ulcer clinics that use four-layer compression bandaging versus usual care provided by district nurses [13, 14]. In the trial, 233 patients with venous leg ulcers were allocated at random to the intervention (120) or control (113) group. The SF-36 questionnaire was completed at baseline and at three and twelve months post-randomisation. For this example we investigate the SF-36 General Health (GH) dimension score. The GH dimension is scored on a scale from 0 (poor health) to 100 (good health).
We assume that the 3-month data for the first 40 patients are the pilot study data. There were 31 individuals with complete 3-month SF-36 GH dimension data (17 in the treatment group and 14 in the control group).
Note that missing data on 22.5% (9/40) of patients is quite high and may be considered unacceptable for a main study. In actuality, for this trial there was just 14% (29/230) missing data for the SF-36 [15], so for our subset we may well have observed a randomly high number. If this were a true pilot study then a missing data rate of 22.5% would need some investigation. There are statistical methods for accounting for missing data [16]; however, the only complete solution to missing data is not to have any. After a pilot study, measures to ensure complete data would need to be investigated to bring the level of missing data down to an acceptable level.
We take the minimum clinically important difference to be a 5-point difference in SF-36 GH dimension scores at 3 months post-randomisation, and we assume a standard deviation of 20 points. Without seeing the actual trial results, with 40 individuals there would be approximately 20% power to detect a difference of 5 points or more between the groups if it truly existed, which is clearly underpowered by conventional standards. Thus, for such a trial it would be more appropriate to estimate possible effects rather than perform formal hypothesis tests.
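The 20% power figure can be reproduced from the stated design values in a minimal sketch, assuming a one-sided 5% test (equivalently a two-sided 10% test) and 20 patients per arm; the choice of a one-sided test is our assumption, as the text does not state which test was used.

```python
from statistics import NormalDist

z = NormalDist()
delta, sigma, n = 5.0, 20.0, 20      # MCID, assumed SD, patients per arm
se = sigma * (2 / n) ** 0.5          # SE of the difference in means
# Power for a one-sided 5% test (an assumption; a two-sided 10% test is equivalent)
power = 1 - z.cdf(z.inv_cdf(0.95) - delta / se)
print(f"power = {power:.2f}")        # approximately 0.20
```

Under a conventional two-sided 5% test the power would be lower still (around 12%), which reinforces the point that formal hypothesis testing is inappropriate here.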
Results from the pilot study comparing 3-month SF-36 GH dimension scores

  Mean SF-36 GH dimension score
  Clinic (n = 17)     Home (n = 14)       Difference (95% CI)    P-value
  68.0 (SD = 17.6)    55.1 (SD = 19.8)    12.8 (−0.8 to 26.6)    0.065
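The suggestion of displaying confidence intervals of several widths can be sketched from the pilot summary statistics above. A minimal sketch in Python, using a Normal approximation for simplicity; the published 95% interval (−0.8 to 26.6) was computed with the t distribution, so these limits come out slightly narrower:

```python
from statistics import NormalDist

z = NormalDist()
m1, s1, n1 = 68.0, 17.6, 17   # clinic group: mean, SD, n
m2, s2, n2 = 55.1, 19.8, 14   # home group: mean, SD, n
diff = m1 - m2

# Pooled standard deviation and standard error of the difference in means
sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
se = sp * (1 / n1 + 1 / n2) ** 0.5

cis = {}
for level in (0.75, 0.85, 0.95):
    half = z.inv_cdf(0.5 + level / 2) * se   # half-width at this confidence level
    cis[level] = (diff - half, diff + half)
    print(f"{int(level * 100)}% CI: {diff - half:.1f} to {diff + half:.1f}")
```

Here the 75% and 85% intervals lie wholly above zero while the 95% interval just crosses it, which is exactly the kind of graded preliminary evidence the text describes.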
The leg ulcer randomised controlled trial reported in 1998 obtained appropriate ethics committee approvals [14]. The use of the data from this trial for the work presented in this paper has been approved by School of Health and Related Research (University of Sheffield) ethics as secondary analysis of anonymised data.
Outcomes
The NIHR Evaluation, Trials and Studies Coordinating Centre (NETSCC) describes a pilot study as a smaller version of the main trial, designed to test whether the components of the main study can work together, as well as providing a preliminary assessment of clinical efficacy. This screening function of pilot studies requires a preliminary evaluation of treatments. However, using the definitive clinical endpoint during a pilot trial may not always be viable, as there may be times when measuring the clinical endpoint is not efficient [17]. For example, if the clinical endpoint is the five-year survival rate, then disease progression or tumour shrinkage may be assessed in the pilot instead. Such endpoints would be used as surrogates for the definitive endpoint. We will now discuss surrogates in more detail [18].
Surrogate endpoints
In the situations described above an investigator may consider using an endpoint other than the clinical endpoint; a surrogate endpoint. ICH E9 [19] defines a surrogate endpoint as
‘A variable that provides an indirect measurement of effect in situations where direct measurement of clinical effect is not feasible or practical’.
Using a surrogate endpoint can reduce the required sample size or the duration of the trial compared to using the clinical endpoint. This leads to cost reductions which may be crucial for trial feasibility [18]. For an endpoint to be considered a surrogate the relationship between it and the clinical outcome must be biologically plausible. In addition, the surrogate must have demonstrable prognostic value for the clinical outcome and there must be evidence from clinical trials that treatment effects on the surrogate outcome correspond to treatments effects on the clinical outcome [19].
The risks involved when using surrogate endpoints
When an aim of a pilot study is to estimate design parameters, using a surrogate endpoint may mean we do not get precise estimates. For example, designing the study around the surrogate may mean having suboptimal information to estimate the variance of the clinical endpoint, or an assessment at an earlier time point, which in turn may mean we do not get an accurate estimate of attrition rates.
A surrogate endpoint must reliably predict the overall effect on the clinical outcome [20]. Otherwise it would be possible to wrongly reject effective treatments or take ineffective treatments through to further testing. If a surrogate does predict clinical benefit it could mean treatment benefits can be brought to patients earlier than if clinical outcomes were used and possibly at a lower cost [21].
Worked example revisited
Using the same data set as in the previous example, we now look at the 12-month SF-36 GH dimension data for the main trial. There were 233 people in the study in total, 155 with complete SF-36 GH dimension data and 78 observations recorded as missing. Of the 155 observed outcomes, 80 were in the clinic group and 75 were in the home (control) group. Note that we had 23% attrition at 3 months compared with 31% at 12 months. Such considerations may be important when trying to design a definitive trial.
Results from the main trial comparing 12-month SF-36 GH dimension scores

  Clinic (n = 80)     Home (n = 75)       Difference (95% CI)    P-value
  56.0 (SD = 22.8)    52.7 (SD = 23.9)    3.3 (−4.1 to 10.8)     0.377
In the previous worked example we envisaged that the pilot trial had 40 patients and measured the 3-month GH dimension score. Using a significance level of 10%, we would have proceeded to the main trial. The 3-month GH dimension score is now considered as a surrogate endpoint for the clinical outcome of the 12-month GH dimension score. If we used a significance level of 5% to assess the clinical outcome, the difference between the groups is not statistically significant. Using the 3-month endpoint in the pilot study with a relaxed significance level would therefore cause us to proceed to the main trial, only to observe no significant difference between the two groups in the main study. The pilot result could have been a Type I error that led us to the main study, or the treatment may have no long-term efficacy – for example, the intervention may have a short-term benefit which does not last for 12 months. The 'large' effect of 12.8 points in the first 40 patients at 3 months was not replicated at 12 months in the full study.
Bayesian methods
The Bayesian framework offers an alternative to the Frequentist significance levels and confidence intervals discussed in the previous section. It allows prior beliefs about the intervention to be combined with the observed data to form posterior beliefs about the outcome of interest. These posterior beliefs can then be used to inform decisions about whether a larger definitive trial should be undertaken. One approach to making a decision about the intervention is to use a prespecified Go/No-Go criterion.
Go/No-Go criteria
Julious et al. [22] define a Go/No-Go decision as a hurdle in a clinical development path that determines whether a health technology progresses further. These hurdles can be set low or high depending on the stage of development of the intervention.
At the planning stage of a pilot study there are a number of decisions that need to be made about how Go/No-Go criteria are defined. The first concerns the metric that is going to measure success or failure. Julious and Swank [23] suggest a method of calculating a probability of success for different development plans based on decision trees and Bayes' Theorem. They take into account the study team's confidence (expressed as a probability) that the intervention will meet the safety and efficacy targets for success, and then calculate the probability that each part of the clinical assessment will correctly indicate that the health technology works or does not work.
Chuang-Stein et al. [24] suggest that a good metric is the probability that there will be a successful confirmatory trial outcome. This is also called assurance by O'Hagan et al. [25], or average power by Chuang-Stein [26], and is used in Bayesian sample size calculations for confirmatory trials. The method that we describe here in detail uses prior beliefs and the data collected from the pilot study to calculate the probability of detecting a clinically meaningful difference. This method has previously been described by Julious et al. [22] for binary and Normal outcomes, and by Parmar et al. [27] for survival outcomes.
The second decision concerns the cut-off, or level, of the criterion. For example, do we want to be 70% or 80% sure that a confirmatory trial will show a minimum clinically meaningful difference? With a pilot study, criteria could be set to minimise the probability of a false positive (i.e., progressing an intervention that will fail in a confirmatory trial), but if the bar is set too high then this will increase the probability of a false negative (i.e., stopping an intervention that works from going to a confirmatory trial) [22]. Other factors may also influence the choice of criteria; for example, the sponsor of a drug trial may be more willing to accept an incorrect go decision than an incorrect no-go decision if the new treatment is first in class rather than one of several drugs in a class [24].
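The assurance metric mentioned above can be estimated by a simple Monte Carlo sketch: draw the true treatment difference from a prior (for example, the posterior obtained after a pilot) and average the confirmatory trial's power over those draws, then compare the result against a Go/No-Go cut-off. All planning values below are hypothetical illustrations, not figures from the paper.

```python
import random
from statistics import NormalDist

random.seed(1)
z = NormalDist()

# Prior belief about the true treatment difference (hypothetical,
# e.g. carried forward from a pilot study)
prior_mean, prior_sd = 12.9, 6.7
# Hypothetical planning assumptions for the confirmatory trial
sigma, n_per_arm = 20.0, 64
se = sigma * (2 / n_per_arm) ** 0.5
crit = z.inv_cdf(0.975)             # two-sided 5% significance

def power(theta):
    # Probability of a significant result in favour of the intervention,
    # given true difference theta (the opposite tail is negligible here)
    return 1 - z.cdf(crit - theta / se)

# Assurance: power averaged over the prior for the true difference
draws = [random.gauss(prior_mean, prior_sd) for _ in range(100_000)]
assurance = sum(power(t) for t in draws) / len(draws)
print(f"assurance = {assurance:.2f}")
```

A Go/No-Go rule would then compare `assurance` against the chosen cut-off (say, 0.75): a "go" if the averaged probability of a successful confirmatory trial clears the hurdle, a "no-go" otherwise.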
Prior distributions
As with all Bayesian methods, prior distributions have to be specified for the parameters about which we wish to make inference, and this raises the question of how these distributions are defined. The simplest approach is to use a non-informative prior. In this case the results will be similar to the Frequentist analysis because all of the information comes from the observed response. Alternatively, a prior can be elicited based on expert knowledge of the intervention. This may, for example, be based on the synthesis of evidence from previous studies of the same or similar interventions, as suggested by Chuang-Stein et al. [24]. Other elicitation techniques, including elicitation from multiple experts, are discussed in Spiegelhalter et al. [28].
With a large sample size for the pilot study, the posterior distribution will be robust to changes in the prior [29]. However, sample sizes in pilot studies are typically small (in a literature survey by Arain et al. [6] the median number of participants was 76), and therefore an informative prior distribution may have a large influence on the posterior distribution. We illustrate in our example that caution should be taken when specifying a prior distribution for a pilot study, as different priors may lead to different interpretations of the results.
Probability of detecting a clinically meaningful difference
We now outline one possible method for calculating the probability of detecting a clinically meaningful difference for data that are anticipated to take a Normal form. In the context of a Go/No-Go criterion we need to determine the probability of observing a difference d_{i} or greater given that d_{pilot} has already been observed, i.e. prob(θ > d_{i} | d_{pilot}), where θ is the mean difference.
For Normal data of the form X_{1},X_{2},…,X_{n} ~ N(θ, σ^{2}) we wish to make inference about θ for given σ^{2}. In this case the Normal family is conjugate and we have the following prior: θ ~ N(μ_{prior}, σ_{prior}^{2}). Note that other distributions may be used for the prior. The Bayesian updating rules can then be defined as follows.
Prior values for the mean difference and population standard deviation are defined as d_{prior} and s_{prior} respectively. The observed mean difference and population standard deviation from the pilot data are defined as d_{pilot} and s_{pilot} respectively. Hence s_{pilot}√((r + 1)/(rn)) is an estimate of the standard deviation around the mean difference, where r is the allocation ratio between groups and n is the number of individuals per arm.
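The updating equations themselves do not survive in this version of the text. Under the conjugate-Normal model just described they take the standard precision-weighted form; this is a reconstruction in the notation above (with se^{2} = s_{pilot}^{2}(r + 1)/(rn) the squared standard error of the pilot estimate), so the paper's exact presentation may differ:

```latex
s_{\mathrm{post}}^{2}
  = \left( \frac{1}{s_{\mathrm{prior}}^{2}} + \frac{1}{se^{2}} \right)^{-1},
\qquad
d_{\mathrm{post}}
  = s_{\mathrm{post}}^{2}
    \left( \frac{d_{\mathrm{prior}}}{s_{\mathrm{prior}}^{2}}
         + \frac{d_{\mathrm{pilot}}}{se^{2}} \right)
```

The posterior for θ is then N(d_{post}, s_{post}^{2}), from which prob(θ > d_{i} | d_{pilot}) can be read off directly.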
Worked example revisited with a Bayesian approach
Using the same leg ulcer data as described previously, we demonstrate how to calculate the probability that the mean difference in SF-36 GH dimension scores at 3 months post-randomisation is greater than the minimum clinically important difference of five points. This question may also be stated in terms of a 'Go' criterion, for example:
Are we at least 75% sure of having a mean difference in SF-36 GH dimension scores that is greater than the minimum clinically meaningful difference of five points at 3 months post-randomisation?
For the expository purposes of this exercise we will consider the following three Normally distributed priors:

1. Non-informative prior.

2. Pessimistic prior, with a mean difference of 4 and 90% certainty that the mean difference is within −1 and 9.

3. Optimistic prior, with a mean difference of 7 and 90% certainty that the mean difference is within 4 and 10.
Posterior means, standard deviations and the probability of observing a clinically meaningful effect size of greater than 5 points for the non-informative, pessimistic and optimistic priors

  Prior              Posterior mean   Posterior SD   P(>5)
  Non-informative    12.9             6.7            0.88
  Pessimistic        5.5              2.8            0.58
  Optimistic         7.4              1.8            0.91
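The posterior summaries in the table above can be reproduced with standard conjugate-Normal updating. A sketch, taking the pilot mean difference of 12.9 with standard error 6.72 (our back-calculation from the pilot summary statistics, so the later decimal places may differ slightly from the paper's):

```python
from statistics import NormalDist

z = NormalDist()
d_pilot, se_pilot = 12.9, 6.72      # pilot mean difference and its standard error
z90 = z.inv_cdf(0.95)               # 90% of a Normal prior lies within mean +/- z90*sd

# Prior (mean, sd) pairs; sds derived from the stated 90% ranges
priors = {
    "Pessimistic": (4.0, 5.0 / z90),    # 90% certain the difference is in (-1, 9)
    "Optimistic": (7.0, 3.0 / z90),     # 90% certain the difference is in (4, 10)
}

# With a non-informative prior the posterior is just the pilot estimate
posteriors = {"Non-informative": (d_pilot, se_pilot)}
for name, (m0, s0) in priors.items():
    prec = 1 / s0**2 + 1 / se_pilot**2                    # posterior precision
    mean = (m0 / s0**2 + d_pilot / se_pilot**2) / prec    # precision-weighted mean
    posteriors[name] = (mean, prec ** -0.5)

for name, (mean, sd) in posteriors.items():
    p_gt5 = 1 - NormalDist(mean, sd).cdf(5.0)   # P(true difference > MCID of 5)
    print(f"{name}: mean {mean:.1f}, SD {sd:.1f}, P(>5) = {p_gt5:.2f}")
```

Note how the pessimistic prior pulls the posterior mean from 12.9 down to 5.5 and drops P(>5) to barely better than a coin flip, illustrating the influence a prior can have on a small pilot data set.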
It could be argued that a Bayesian approach is appealing as it formally accounts for related work (and/or beliefs held by the investigators) by setting priors before the start of a study [22]. Once the trial has been completed, the observed data are combined with the priors to form a posterior distribution for the treatment response. The interpretation is then through a measure that is more easily understood: in our example, the probability that the response is greater than 5 points.
Discussion
This paper has demonstrated a variety of approaches towards significance thresholds in pilot studies. When undertaking a pilot investigation, it was shown how significance levels other than the “traditional” 5% should be considered to provide preliminary evidence for efficacy. It was also highlighted how the focus should be on estimation and confidence intervals in order to provide an estimated range of possible treatment effects.
Interpreting confidence intervals with respect to the minimum clinically important difference should be considered. Investigating several confidence intervals of different widths and displaying them as in Figure 1 can aid decision making and is a helpful way of displaying evidence in pilot studies. Minimum prior requirements can be set and used in addition to the graphical display to help illustrate the strength of preliminary evidence. However, caution must be taken when using a surrogate outcome in pilot studies as it must reliably predict the clinical endpoint.
Bayesian methods could also assist in the early assessment of a health technology. Pilot data can be combined with prior beliefs in order to calculate the probability that there will be a successful confirmatory trial outcome. This can be framed as a Go/No-Go hurdle, such as: are we at least 75% sure of having a mean difference larger than the minimum clinically meaningful difference? We demonstrated how care must be taken when choosing a prior distribution; the posterior distribution can be heavily influenced by the choice of prior because pilot data usually have a small sample size.
Conclusions
We recommend that in pilot trials the focus should be on descriptive statistics and estimation, using confidence intervals, rather than formal hypothesis testing. We further recommend that confidence intervals in addition to 95% confidence intervals, such as 85% or 75%, be used for the estimation. The confidence interval should then be interpreted with regards to the minimum clinically important difference and we suggest setting minimum prior requirements. Although Bayesian methods could assist in the interpretation of pilot trials, we recommend that they are used with caution due to small sample sizes.
Notes
Abbreviations
GH: General Health

MCID: Minimum Clinically Important Difference

NETSCC: National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre
Declarations
Acknowledgements
We thank Professor Stephen Walters who provided the data used in the worked example. ALW is funded by a School of Health and Related Research (ScHARR) Postgraduate Teaching Assistant Studentship. ECL, RMJ and SAJ did not receive any funding for this work.
References
1. Wood J, Lambert M: Sample size calculations for trials in health services research. J Health Serv Res Policy. 1999, 4(4): 226-229.
2. Julious SA, Patterson SD: Sample sizes for estimation in clinical research. Pharm Stat. 2004, 3(3): 213-215. 10.1002/pst.125.
3. Biomarkers Definitions Working Group: Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther. 2001, 69(3): 89-95.
4. Lancaster GA, Dodd S, Williamson PR: Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004, 10(2): 307-312. 10.1111/j..2002.384.doc.x.
5. Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, Robson R, Thabane M, Giangregorio L, Goldsmith CH: A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010, 10: 1. 10.1186/1471-2288-10-1.
6. Arain M, Campbell MJ, Cooper CL, Lancaster GA: What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010, 10: 67. 10.1186/1471-2288-10-67.
7. Kianifard F, Islam MZ: A guide to the design and analysis of small clinical studies. Pharm Stat. 2011, 10(4): 363-368. 10.1002/pst.477.
8. Stallard N: Optimal sample sizes for phase II clinical trials and pilot studies. Stat Med. 2012, 31: 1031-1042. 10.1002/sim.4357.
9. Schoenfeld D: Statistical considerations for pilot studies. Int J Radiat Oncol Biol Phys. 1980, 6(3): 371-374. 10.1016/0360-3016(80)90153-4.
10. Papadakis S, Aitken D, Gocan S, Riley D, Laplante MA, Bhatnagar-Bost A, Cousineau D, Simpson D, Edjoc R, Pipe AL, Sharma M, Reid RD: A randomised controlled pilot study of standardised counselling and cost-free pharmacotherapy for smoking cessation among stroke and TIA patients. BMJ Open. 2011, 1(2): e000366.
11. Legault C, Jennings JM, Katula JA, Dagenbach D, Gaussoin SA, Sink KM, Rapp SR, Rejeski WJ, Shumaker SA, Espeland MA: Designing clinical trials for assessing the effects of cognitive training and physical activity interventions on cognitive outcomes: the Seniors Health and Activity Research Program Pilot (SHARPP) study, a randomized controlled trial. BMC Geriatr. 2011, 11: 27. 10.1186/1471-2318-11-27.
12. Walters SJ: Consultants' forum: should post hoc sample size calculations be done?. Pharm Stat. 2009, 8(2): 163-169. 10.1002/pst.334.
13. Walters SJ, Morrell CJ, Dixon S: Measuring health-related quality of life in patients with venous leg ulcers. Qual Life Res. 1999, 8(4): 327-336. 10.1023/A:1008992006845.
14. Morrell CJ, Walters SJ, Dixon S, Collins KA, Brereton LML, Peters J, Brooker CGD: Cost effectiveness of community leg ulcer clinics: randomised controlled trial. Br Med J. 1998, 316(7143): 1487-1491. 10.1136/bmj.316.7143.1487.
15. Collins K, Morrell J, Peters J, Walters S, Brooker C, Brereton L: Problems associated with patient satisfaction surveys. Br J Commun Health Nurs. 2007, 2(3): 156-163.
16. Carpenter JR, Kenward MG: Multiple Imputation and its Application. 2013, Chichester: Wiley.
17. De Gruttola VG, Clax P, DeMets DL, Downing GJ, Ellenberg SS, Friedman L, Gail MH, Prentice R, Wittes J, Zeger SL: Considerations in the evaluation of surrogate endpoints in clinical trials: summary of a National Institutes of Health workshop. Control Clin Trials. 2001, 22(5): 485-502. 10.1016/S0197-2456(01)00153-2.
18. Prentice RL: Surrogate endpoints in clinical trials: definition and operational criteria. Stat Med. 1989, 8(4): 431-440. 10.1002/sim.4780080407.
19. International Conference on Harmonisation: ICH E9 statistical principles for clinical trials. 1998, http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E9/Step4/E9_Guideline.pdf.
20. Fleming TR, DeMets DL: Surrogate end points in clinical trials: are we being misled?. Ann Intern Med. 1996, 125(7): 605-613. 10.7326/0003-4819-125-7-199610010-00011.
21. Temple R: Are surrogate markers adequate to assess cardiovascular disease drugs?. J Am Med Assoc. 1999, 282(8): 790-795. 10.1001/jama.282.8.790.
22. Julious SA, Machin D, Tan SB: An Introduction to Statistics in Early Phase Trials. 2010, Oxford: Wiley-Blackwell.
23. Julious SA, Swank DJ: Moving statistics beyond the individual clinical trial: applying decision science to optimize a clinical development plan. Pharm Stat. 2005, 4(1): 37-46. 10.1002/pst.149.
24. Chuang-Stein C, Kirby S, French J, Kowalski K, Marshall S, Smith MK, Bycott P, Beltangady M: A quantitative approach for making go/no-go decisions in drug development. Drug Inform J. 2011, 45(2): 187-202.
25. O'Hagan A, Stevens JW, Campbell MJ: Assurance in clinical trial design. Pharm Stat. 2005, 4(3): 187-201. 10.1002/pst.175.
26. Chuang-Stein C: Sample size and the probability of a successful trial. Pharm Stat. 2006, 5(4): 305-309. 10.1002/pst.232.
27. Parmar MKB, Ungerleider RS, Simon R: Assessing whether to perform a confirmatory randomized clinical trial. J Natl Cancer Inst. 1996, 88(22): 1645-1651. 10.1093/jnci/88.22.1645.
28. Spiegelhalter DJ, Abrams KR, Myles JP: Bayesian Approaches to Clinical Trials and Health-Care Evaluation. 2004, Chichester: John Wiley & Sons.
29. Lee PM: Bayesian Statistics: An Introduction. 1989, New York: Oxford University Press; Edward Arnold.
Prepublication history

The prepublication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/14/41/prepub
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.