
Evaluation of a multi-arm multi-stage Bayesian design for phase II drug selection trials – an example in hemato-oncology

Abstract

Background

Multi-arm multi-stage (MAMS) designs compare several new treatments to a common reference in order to select or drop any treatment arm as soon as sufficient evidence has accumulated at interim analyses. We redesigned a Bayesian adaptive design initially proposed for dose-finding, focusing on the comparison of multiple experimental drugs to a control on a binary criterion measure.

Methods

We redesigned a phase II clinical trial that randomly allocates patients across three treatment arms (one control and two experimental) to assess dropping decision rules. We were interested in dropping any arm for futility, based either on a historical control rate (first rule) or on comparisons across arms (second rule), and in stopping an experimental arm early once it reached a sufficient response rate (third rule), using the difference in response probabilities between the treated and control arms in Bayes binomial trials as the measure of treatment benefit. Simulations were then conducted to investigate the decision operating characteristics under a variety of plausible scenarios, as a function of the decision thresholds.

Results

Our findings suggest that one experimental treatment was less effective than the control and could have been dropped from the trial based on a sample of approximately 20 instead of 40 patients. In the simulation study, stopping decisions were reached sooner with the first rule than with the second, with close mean estimates of the response rates and small bias. Depending on the decision threshold, the mean sample size needed to detect the required 0.15 absolute benefit ranged from 63 to 70 (rule 3), with false negative rates of less than 2 % (rule 1) up to 6 % (rule 2). In contrast, detecting a 0.15 inferiority in response rates required a sample size ranging on average from 23 to 35 (rules 1 and 2, respectively), with a false positive rate ranging from 0.6 to 3.6 % (rule 3).

Conclusion

Adaptive trial designs are a sound way to improve clinical trials: they allow ineffective drugs to be dropped and the trial sample size to be reduced, while maintaining unbiased estimates. Decision thresholds can be set according to predefined, fixed error decision rates.

Trial registration

ClinicalTrials.gov Identifier: NCT01342692.


Background

Adaptive designs for clinical trials, which use features that change or “adapt” in response to information generated during the trial in order to be more efficient than standard approaches [1], have been the focus of an abundant statistical literature since the 1970s. Among the wide range of adaptive designs, multi-arm multi-stage (MAMS) designs aim to compare several new treatments (multi-arm) to a common reference treatment and to select or drop any treatment arm when sufficient evidence has accumulated at interim analyses (multi-stage). These designs have also been referred to as selection designs in phase II/III trials [2], randomized phase II screening trials [3] or select-drop designs [4]. Similarly to other adaptive designs, MAMS designs aim to decrease the time and number of patients required to move experimental treatments from development to a definitive assessment of benefit, compared to the traditional approach in which each drug is assessed through separate controlled trials. Improving the efficiency of clinical trials has been of prime interest in the development of anticancer therapies, because the acceleration of drug development makes multiple candidate anticancer agents available for screening simultaneously [3, 5]. However, although MAMS trials have gained popularity, they are still rarely used by practitioners. Notably, because of the number of arms and stages, MAMS trials appear more complex in design, conduct, and data analysis, with a broad variety of proposed versions [6–8]. All these proposed MAMS trials face the issue of multiple testing due to comparisons between active treatments and the control, or pairwise comparisons between all arms. Moreover, this multiplicity issue is increased by repeated testing, which may result in stopping either the trial or merely the relevant arm, with sequential futility boundaries for lack of benefit adjusted so that the overall familywise error rate is, or is not, controlled at a pre-specified α level.

We aimed at assessing how a Bayesian MAMS design may offer an alternate way of handling such issues. Indeed, Bayesian designs are an efficient way to achieve valid and reliable evidence in clinical trials, given that the interpretation of the data is unrelated to preplanned stopping rules and is independent of the number of interim looks [9, 10]. Such Bayesian approaches for MAMS trials have rarely been used, with notably one proposal for normal outcomes [11]. To allow a direct and simple use of the Bayesian approach, we focused on the probability of success in binomial trials, restricting our considerations to conjugate beta priors. The posterior can then be easily updated during the trial, and allowance for early stopping for futility can be made. This setting of Bayes binomial trials was also recently used to compare Bayesian approaches to frequentist hypothesis testing in two-arm clinical trials [12]. Our approach can thus be viewed as an extension, to MAMS trials with binary outcomes, of that proposed by Zaslavsky for two-arm trials [12]. Indeed, both approaches use similar beta-binomial modeling (with integer [12] or non-integer beta parameters) and the posterior difference of beta variables as the quantity of interest for decision making. However, while Zaslavsky [12] focused on deriving one-sided superiority and non-inferiority Bayesian tests and their closeness to frequentist approaches, we provide stopping rules as decision tools for the interim analyses required by the MAMS design, as Xie et al. did [13]. The scope for extending this approach to the comparison of several experimental treatment arms against one control is considered below.

This paper was motivated by a phase II randomized controlled trial comparing, on a binary outcome measure, two experimental drugs with conventional azacitidine treatment in patients with myelodysplastic syndrome, in which the main objective was to drop any inefficacious experimental arm. The trial was designed using a modified two-stage Simon’s design [14], which, with sample sizes of 40 patients per arm in the first stage, controls the type I error at the pre-specified level of 0.15 with a statistical power of 0.80. At the end of this first stage, no decision to drop any arm was made. We wondered whether the use of a Bayesian approach might have modified the design and the subsequent analyses.

Thus, the objective of this paper was to redesign the Bayesian adaptive design originally proposed by Xie, Ji and Tremmel for seamless phase I/II trials [13], focusing on the comparison of multiple experimental drugs to a control drug on a binary criterion measure.

First, we applied our design to the real dataset from the ongoing phase II randomized trial conducted on 120 patients that motivated this work. Then, we assessed its performance using a simulation study. Some discussion and conclusions are finally provided.

Methods

Motivating example

We used data from a phase II clinical trial of an international study conducted in 120 patients with myelodysplastic syndrome (MDS) who were randomized across three treatment arms. Although the original design was non-Bayesian [14], we reanalyzed data from the first stage of this trial to illustrate the value of Bayesian approaches. Because the trial is still ongoing in a second stage, no further details about the treatment arms are provided. Each group of 40 patients received one of the following treatments: A (reference treatment, control group), or B or C (two combinations of new drugs with the reference treatment, experimental groups). It was hypothesized that the response rate in the control group would be 0.30 and that a response rate of at least 0.45 would indicate that a combination was sufficiently promising to be included in further studies.

Bayesian multi-Arm multi-stage design

Let X denote the treatment arm, where X = 0 is the control arm, and X = 1, …, K denote K distinct new drugs to be tested. Suppose that n patients are randomly allocated to each of the (K + 1) arms. For simplicity, let us consider a balanced design, although any imbalanced fixed design could be considered.

Consider a binary outcome, Y, where Y = 1 denotes a response to treatment and Y = 0 denotes the absence of a response. The observed number of responses among the \( {n}_k \) patients allocated to arm k is given by \( {y}_k=\sum_{i=1}^n{y}_i{1}_{i\in k} \), where \( {1}_{i\in k} \) denotes the indicator function (\( {1}_{i\in k}=1 \) if the ith patient has been allocated to arm k, and 0 otherwise). Note that the selection does not need to involve a measure of efficacy [2], so that response could be defined according to a toxicity grading scale.

We used a Bayesian inference framework, where \( \pi_k = P\left(Y=1 \mid X=k\right) \) denotes the probability of response in arm X = k (k = 0, …, K). Using a beta Be(\( a_k \), \( b_k \)) prior for \( \pi_k \), the posterior distribution of \( \pi_k \) is still a beta distribution, Be(\( a_k + y_k \), \( b_k + n_k - y_k \)), due to the natural conjugacy of the beta family for binomial sampling.
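For concreteness, this conjugate update can be written in a couple of lines of R (a minimal sketch; the function name and default prior are ours, not part of the trial software):

```r
# Conjugate beta-binomial update: a Be(a, b) prior on the response probability
# combined with y responses among n patients yields a Be(a + y, b + n - y) posterior.
posterior_beta <- function(y, n, a = 1, b = 1) {
  c(shape1 = a + y, shape2 = b + n - y)
}

# Example with the control-arm data of the motivating trial (15/40 responses)
post <- posterior_beta(y = 15, n = 40)      # uniform Be(1, 1) prior
unname(post["shape1"] / sum(post))          # posterior mean = 16/42, about 0.381
```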

The main aims of MAMS trials are, over a range of K new treatments, to select those that prove sufficiently efficacious and to drop those that are unexpectedly ineffective. Let \( y_{ki} \) denote the number of responses observed at stage i among the \( n_{ki} \) patients randomly allocated to arm X = k (k = 0, …, K).

Several stopping decision criteria were thus proposed, derived from the proposals of Xie et al. [13].

First, the inefficacy of each drug was assessed by comparison to some historical minimal value of interest, originally called the “minimum required treatment response rate” (MRT) by Xie et al. [13]. The corresponding futility rule (denoted Rule 2 in [13]) is defined by the following posterior probability:

$$ P\left(\pi_k < p_0 \mid y_{ki}, n_{ki}\right) > \gamma_1 $$
(1)

where \( p_0 \) denotes the MRT, usually defined from historical control rates, and \( \gamma_1 \) is a threshold defining a “high” probability of inefficacy.
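In practice, this posterior probability is a single call to the beta cumulative distribution function. The sketch below (function name, prior, thresholds and illustrative counts are ours) shows how rule 1 could be evaluated in R:

```r
# Rule 1 (futility against the historical rate): compute P(pi_k < p0 | y, n)
# under the Be(a + y, b + n - y) posterior and compare it to gamma1.
rule1_futility <- function(y, n, p0 = 0.30, gamma1 = 0.95, a = 1, b = 1) {
  prob <- pbeta(p0, a + y, b + n - y)   # posterior probability of inefficacy
  list(prob = prob, drop_arm = prob > gamma1)
}

rule1_futility(y = 2, n = 20)   # 2/20 responses: response rate very likely below 0.30
```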

In randomized phase II settings, the selection of a new drug is based on evaluating its potential benefits over the control arm [15]. Thus, one may consider dropping a new drug from further studies only if there is a rather low posterior probability that this drug is beneficial over the control by some targeted minimal level and, conversely, selecting the drug if there is sufficient information to declare that it is better than the control, that is, when its benefit reaches a so-called “sufficient treatment response rate” (STR). Two resulting decision criteria and stopping rules were defined from the posterior distribution of the difference in response rates between the experimental and the control arm at the ith stage, as follows:

$$ P\left(\pi_k - \pi_0 > \Delta \mid y_{ki}, n_{ki}\right) < \gamma_2 $$
(2)
$$ P\left(\pi_k - \pi_0 > \delta^{*} \mid y_{ki}, n_{ki}\right) > \gamma_3 $$
(3)

In the original paper [13], Eq. (2) is referred to as Rule 3, with \( \Delta \) set at the “targeted difference in response rate”, and Eq. (3) is referred to as Rule 4, with \( \delta^{*} \) set at the STR. However, whereas Xie et al. [13] used Eq. (2) to define cohort expansion in the seamless phase I/II design, we only considered select/drop decisions, consistent with the phase II design. More specifically, Eq. (2) assesses the futility of pursuing experimental arm k: the arm is dropped when the posterior probability that its response rate exceeds that of the control arm by at least Δ falls below some decision threshold. This rule can be considered as the posterior probability of the alternative hypothesis, as commonly used to evaluate the success of an experiment; it was thus proposed to provide an answer closest to the frequentist setting, where one wishes to test the null against the alternative. Note that when Δ = 0, Eq. (2) reduces to the posterior probability that the experimental treatment is better than the control, a quantity first proposed in the setting of phase II single-arm clinical trials [15] and more recently used to define adaptive randomization allocation probabilities [16, 17]. By contrast, Eq. (3) quantifies the posterior probability that the response rate in experimental arm k exceeds that of the control arm by some sufficient treatment response rate. From a practical perspective, alternative hypotheses expressed as differences in response rates, whether targeting superiority or non-inferiority, can be considered and appear very natural in the clinical setting.

Contrary to the posterior distribution involved in (1), the second and third rules involve the difference of two beta-distributed variables (\( \pi_k \) and \( \pi_0 \)), which no longer follows a beta distribution but a more complicated one, as reported in [12]. This distribution has been expressed in terms of Appell’s hypergeometric function [18, 19]; a normal approximation has also been proposed, but when the difference between the sample proportions is small, the approximate probability departs from the exact one [19]. Exact calculation is possible in a few special cases [20], while numerical integration is usually performed, as in [12, 15]:

$$ P\left(\pi_k < \pi_0 + d \mid y_k, n_k, y_0, n_0\right) = \int_0^1 F\left(p + d \mid a_k + y_k,\ b_k + n_k - y_k\right)\, f\left(p \mid a_0 + y_0,\ b_0 + n_0 - y_0\right)\, dp $$
(4)

where F(· | a, b) and f(· | a, b) denote the cumulative distribution function and the density of a beta random variable π ~ Be(a, b), respectively.
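A direct way to evaluate this quantity in R is base `integrate()` applied to the product of the posterior survival function of π_k (shifted by d) and the posterior density of π_0. The sketch below is ours, not the authors' code; it computes P(π_k − π_0 > d | data), which is the complement of Eq. (4), to be compared to γ_2 (rule 2, d = 0) or γ_3 (rule 3, d = δ* = 0.15). The counts are the observed MDS trial data, used only for illustration:

```r
# Posterior probability that the experimental response rate exceeds the control
# rate by more than d, i.e. P(pi_k - pi_0 > d | data), the complement of Eq. (4).
prob_diff_gt <- function(yk, nk, y0, n0, d = 0, a = 1, b = 1) {
  integrand <- function(p) {
    (1 - pbeta(p + d, a + yk, b + nk - yk)) * dbeta(p, a + y0, b + n0 - y0)
  }
  integrate(integrand, lower = 0, upper = 1)$value
}

# Rule 2: drop arm B if the probability of any benefit over control (d = 0) is < gamma2
prob_diff_gt(yk = 13, nk = 40, y0 = 15, n0 = 40, d = 0)
# Rule 3: select arm C if the probability of a benefit of at least 0.15 is > gamma3
prob_diff_gt(yk = 16, nk = 40, y0 = 15, n0 = 40, d = 0.15)
```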

The priors

Regarding the prior on the response probability \( \pi_k \), k = 0, …, K, the amount of past information likely differs across randomization arms. While the prior on \( \pi_0 \) can be elicited from previous trial results and expert opinion, the prior on \( \pi_k \), k > 0, is likely to be less informative.

First, the use of flat non-informative priors was motivated by several considerations. It allows the posterior to be dominated by the data rather than by any overoptimistic prior view regarding the experimental arms. Thus, it ensures that a critical amount of clinical information is required as a basis for deciding whether an experimental arm should be administered to a large number of patients in a phase III clinical trial. Moreover, such domination by the data allows the trial results to be used by others who hold their own priors [15].

However, it is widely recommended to use different prior densities to assess the robustness of the trial results. Thus, we performed sensitivity analyses of the prior choice, using distinct beta distributions reflecting increasing amounts of prior information through the effective sample size (ESS) [21]. Given that the ESS of a Be(a, b) prior is a + b, one may modify the beta parameters to change the prior variance while keeping the prior mean fixed, providing sensitivity analyses in which the prior information is translated into a sample size (Fig. 1). The prior mean was either “enthusiastic” or “skeptical”, as in our previous work [22]. These terms refer either to the optimistic view of a beneficial treatment effect at least equal to that expected when planning the trial, or to the pessimistic view of no treatment effect compared to the control [23]. Both priors allow the heterogeneity of physicians’ opinions before the trial to be encompassed.

Fig. 1 Guide for calibrating the prior variance according to the prior mean and the prior information translated into the so-called effective sample size (ESS). For instance, when the prior mean is 0.50, the variance is 0.125, 0.083, 0.042, and 0.027 for a prior effective sample size of 1, 2, 5 and 10, respectively
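This calibration follows directly from ESS = a + b. A small helper (ours, for illustration; the priors shown are the skeptical and enthusiastic choices used in the case study) returns the beta parameters and the implied prior variance:

```r
# Translate a prior mean and effective sample size (ESS = a + b) into Be(a, b)
# parameters, together with the implied prior variance.
beta_from_ess <- function(prior_mean, ess) {
  a <- prior_mean * ess
  b <- ess - a
  c(a = a, b = b, variance = a * b / ((a + b)^2 * (a + b + 1)))
}

beta_from_ess(prior_mean = 0.30, ess = 10)   # skeptical prior, control arm
beta_from_ess(prior_mean = 0.45, ess = 5)    # enthusiastic prior, experimental arm
```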

Decision thresholds

To apply these rules, some arbitrary constants (further denoted “design parameters”) must be defined. First, the choice of the minimal response rate (\( p_0 \)) can be guided by historical controls or by clinical experience of the control treatment in the disease under study, as is commonly done for the null hypothesis response rate in uncontrolled phase II trials. Second, we chose \( \Delta = 0 \) as the targeted minimal difference in response rates, representing no difference between the treatments, and \( \delta^{*} = 0.15 \) as the sufficient response rate, reflecting a clinically important treatment effect. Both values delineate the underlying null and alternative hypotheses in a frequentist framework.

Otherwise, the number of design stages, that is, how often the rules described above are computed to guide stopping decisions, should be defined. Moreover, the threshold values \( \gamma_1 \), \( \gamma_2 \) and \( \gamma_3 \) are statistical quantities that should be set to predetermined values allowing good performance of the design, and they are likely related to the quantity of information in the trial (thus, to the entire sample size). Xie et al. [13] suggested that \( \gamma_1 \) and \( \gamma_3 \) should be high (>0.8) and that \( \gamma_2 \) should be at most 0.10. Obviously, such values largely govern the occurrence of false positive (or negative) decisions. Nevertheless, larger than traditional false positive rates are commonly used in MAMS settings, up to 0.50 at the first stage [8], notably because one wishes to make decisions on dropping arms early while maintaining a low false negative decision rate.

Thus, we first proposed to compute the decision rules after every observed response in the trial, and then attempted to define criteria for these design choices and to assess their impact in terms of sample size.

Results

Illustrative case study

We first applied the proposed design to the phase II randomized trial with K = 2 new drugs compared against the control. The Jung trial design [14] was based on \( p_0 = 0.30 \) and δ = 0.15, with type I and type II errors fixed at 0.15 and 0.20, respectively. Of the 120 enrolled patients, 44 (36.7 %) exhibited a response, including 15 in arm A, 13 in arm B, and 16 in arm C, resulting in observed response rates of 0.375, 0.325 and 0.40, respectively.

Bayesian analyses were applied, first using in each arm non-informative beta priors, either the Jeffreys prior Be(1/2, 1/2) or the uniform prior Be(1, 1), corresponding to an ESS of 1 or 2, respectively. Then, as described above, a sensitivity analysis of the prior choice was performed: for the control arm, only skeptical priors, centred on the null hypothesis (prior mean = 0.3), were used, while both skeptical and enthusiastic priors, the latter centred on the alternative hypothesis (prior mean = 0.45), were defined for the experimental arms. The prior effective sample size was set at 10 in the control arm and at either 1 or 5 in the experimental arms.

Figure 2 displays the prior and posterior distribution of response rates in each randomized arm at the end of enrollment, illustrating how the posterior distribution of each experimental arm was not markedly affected by the prior information as translated into the (prior) effective sample size or its location. At the end of the trial, according to the prior, the posterior mean response rate ranged from 0.3600 to 0.3810 in arm A, from 0.3222 to 0.3389 in arm B, and from 0.3889 to 0.4056 in arm C (Table 1).
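As a quick numerical check (a sketch with a helper of our own, not the trial code), the two extremes of the arm A range can be reproduced from the observed 15/40 responses using the uniform prior (ESS = 2, mean 0.5) and the skeptical prior with ESS = 10:

```r
# Posterior mean of the arm A response rate (15/40 responses) under two priors
# sharing the same conjugate update but differing in mean and effective sample size.
post_mean <- function(y, n, prior_mean, ess) {
  a <- prior_mean * ess
  (a + y) / (ess + n)
}

post_mean(15, 40, prior_mean = 0.5, ess = 2)    # uniform Be(1, 1): 16/42 = 0.381
post_mean(15, 40, prior_mean = 0.3, ess = 10)   # skeptical Be(3, 7): 18/50 = 0.360
```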

Fig. 2 MDS trial – Sensitivity analyses of the distribution of the response rate in each treatment arm according to the prior choice, in terms of location (non-informative centred on 0.5, skeptical centred on 0.3, or enthusiastic centred on 0.45) and effective sample size (ESS ranging from 1–5 in the experimental arms up to 10 in the control arm). Upper plots display the prior densities and lower plots the posterior densities. The left plots refer to the non-informative situation in which all three priors are uniform over [0,1] (Be(1, 1)) or distributed according to the Jeffreys prior (Be(1/2, 1/2)); the middle and right plots refer to the situations in which the priors were either skeptical (middle) or enthusiastic (right), each with various effective sample sizes (ESS) denoting various amounts of prior information

Table 1 MDS results – Sensitivity analyses

We retrospectively applied the decision rules defined in (1)-(3) with threshold values set at 0.9, 0.1 and 0.9, respectively. Figure 3 displays the evolution of the posterior probabilities and stopping criteria over time, when using non-informative priors.

Fig. 3 Results of the MDS trial – Bayesian analyses using non-informative uniform priors, with the minimum required treatment response rate set at MRT = 0.3 (Rule 1), the targeted minimal difference at Δ = 0 (Rule 2), and the sufficient treatment response rate at STR = 0.15 (Rule 3), with the cut-off probability thresholds for rules 1–3 set at 0.9, 0.1 and 0.9, respectively

The application of the first stopping criterion did not allow either experimental arm to be dropped, indicating a small posterior probability that either response rate was below the historical response rate of 0.30; indeed, the posterior estimates were close to, and mainly above, 0.30, except for arm B, where the response rate was lower than those of the other two arms for the first 20 enrolled patients (Fig. 2, left). This was reflected in the second criterion computed over the course of the trial: its cut-off threshold was crossed for arm B after 5, 13, 14, 18, and 22 patients had been enrolled in that arm, indicating a low (<0.10) posterior probability that the response rate in that arm was above that observed in the control. As expected, the third decision criterion never led to stopping with the conclusion that the benefit of any experimental arm reached the expected 0.15. Note that all three decision criteria at the end of the trial were only slightly affected by the prior, with close values that did not modify any decision (Table 1).

These findings suggest that arm B could have been dropped from the trial based on a sample of approximately 20 instead of the 40 actually recruited patients, although later results (with a sample size of at least 25 patients) did not confirm such a decision. This could be related to some “drift” towards improved response rates over the course of the trial. It also points out that the probability in Eq. (2) can be highly variable at the beginning of the trial, when the number of patients is small, possibly resulting in erroneous decisions [17].

We thus decided to assess the performance of this approach and, more specifically, the quantity of information required to drop an ineffective or an efficacious arm, according to decision thresholds related to false decision probabilities.

Simulation study

Simulation settings

Once a Bayesian design has been structured, statisticians use simulations and adjust tuning parameters to comply with a set of targeted operating characteristics. Thus, we assessed the operating characteristics of the proposed MAMS design through simulations that mimic the MDS trial, although with clear-cut ineffective or effective drugs, and in which the stopping decision criteria (1)-(3) were applied.

We considered several situations of drug inefficacy, that is, when the benefit in terms of response rate was null or below the expected 0.15 (true benefit set at 0, 0.05, and 0.10 over an expected control response rate of 0.30), and situations of drug efficacy (true benefit of 0.15, 0.20, 0.25, 0.30 and 0.45 over the 0.30 expected response rate). Moreover, among the K = 2 new drugs, several scenarios combining these various treatment benefits were distinguished, with benefits either similar across new drugs or not. The first scenario simulated the case in which the efficacies of treatments B and C were similar to that of treatment A (\( \pi_B = \pi_C = \pi_A \)). Further scenarios simulated the case in which only arm B was more efficient than A (\( \pi_C = \pi_A,\; \pi_B = \pi_A + dB \)). The last scenarios simulated the cases in which both B and C had a higher probability of response than A (\( \pi_C = \pi_A + dC,\; \pi_B = \pi_A + dB \)).

We simulated samples of \( n \) patients. In each simulation, the treatment arm was generated from a multinomial distribution mult(\( n, \frac{1}{3}, \frac{1}{3}, \frac{1}{3} \)), and the binary responses indicating efficacy were generated from Bernoulli distributions B(\( \pi_k \)).

For each scenario, data were analyzed using Bayesian inference. The priors on \( \pi_k \) were non-informative beta Be(1, 1) distributions. Posterior probabilities in (1) were easily obtained as beta cumulative distribution functions, whereas those in (2) and (3) required numerical integration (see Eq. (4)). We first computed these criteria for fixed sample sizes. Then, any arm could be dropped for futility if evidence suggested that it was unlikely to be effective (rules 1 and 2), or stopped early for efficacy if sufficient evidence of effectiveness over the control had already been obtained (rule 3). Furthermore, to take into account the high variability of differences of beta distributions based on small samples [16], these rules were only applied once at least 15 patients had been enrolled in each arm.

A total of N = 10,000 independent replications were performed, with results averaged across the N repeated simulations. In all simulations, the design parameters were kept constant at \( p_0 = 0.30 \), \( \Delta = 0 \) and \( \delta^{*} = 0.15 \), unless otherwise specified.
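The following R sketch outlines one replication of such a simulation. It is our illustration, not the authors' code: arms are drawn from a balanced multinomial, responses from Bernoulli distributions, and the three rules are evaluated once every arm has at least 15 patients, reusing prob_diff_gt() from the earlier sketch. For brevity it keeps allocating patients to an arm after a decision has been recorded, which a real MAMS implementation would not do, and the thresholds, true response rates and number of replications shown are illustrative only.

```r
# One simulated trial mimicking the MDS example: K = 2 experimental arms plus control.
simulate_trial <- function(n_total = 120,
                           pi_true = c(A = 0.30, B = 0.30, C = 0.45),
                           p0 = 0.30, delta_star = 0.15,
                           gamma1 = 0.95, gamma2 = 0.05, gamma3 = 0.95) {
  arm <- sample(names(pi_true), n_total, replace = TRUE)   # mult(n, 1/3, 1/3, 1/3)
  y   <- rbinom(n_total, 1, pi_true[arm])                  # Bernoulli responses
  active    <- c("B", "C")
  decisions <- setNames(rep("continue", 2), active)
  for (i in seq_len(n_total)) {
    counts <- table(factor(arm[1:i], levels = names(pi_true)))
    if (min(counts) < 15) next                             # minimum information rule
    yk <- tapply(y[1:i], factor(arm[1:i], levels = names(pi_true)), sum)
    for (k in active[decisions[active] == "continue"]) {
      p_rule1 <- pbeta(p0, 1 + yk[k], 1 + counts[k] - yk[k])
      p_rule2 <- prob_diff_gt(yk[k], counts[k], yk["A"], counts["A"], d = 0)
      p_rule3 <- prob_diff_gt(yk[k], counts[k], yk["A"], counts["A"], d = delta_star)
      if (p_rule1 > gamma1 || p_rule2 < gamma2) decisions[k] <- "dropped"
      else if (p_rule3 > gamma3)                decisions[k] <- "selected"
    }
  }
  decisions
}

set.seed(1)
# Frequency of decisions for arm C over a small number of replications (illustration)
table(replicate(100, simulate_trial()["C"]))
```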

All analyses were performed using the R statistical software (http://www.R-project.org/).

Simulation results

Threshold calibration

To determine the decision thresholds, as suggested by Xie et al. [13], some simulations were first performed considering two fixed parallel-arm designs based on n = 40 and n = 100 patients per arm (Table 2). In all cases, biases were low, mainly below 0.01 (when n = 40) or 0.005 (when n = 100), with correspondingly small mean square errors (MSEs). The first decision criterion, that is, the posterior probability that the response rate was lower than 0.3, was nearly equal to 0.5 in the control arm or when there was no drug benefit (dB = 0), as expected, and then decreased from 0.30 (when dB = 0.05 and n = 40) down to 0.01 (when dB = 0.25), reaching 0 when dB = 0.45. In parallel, the difference between the probabilities of a response for B over the control arm A increased with the benefit of B. Moreover, a larger sample size led to a higher probability of detecting a smaller benefit, so that, for a given benefit, the decision threshold depends on the amount of information.

Table 2 Simulation results in terms of absolute bias based on a fixed sample size for increasing benefit of the experimental arm – all priors on π_k (k = A, B, C) are non-informative Be(1, 1) priors; p_0 = 0.30; n = 40 or 100 patients per arm

We thus computed the three decision criteria according to the true benefit of the experimental arm (dB ranging from −0.2 to 0.45) and to the sample size (ranging from 10 to 100 patients per arm), each based on 10,000 independent replications (Fig. 4). The left plots of Fig. 4 quantify to what extent stopping rule (1) is influenced by the sample size and the actual benefit of the experimental arm, besides the threshold cut-off value, as expected. Notably, they show that a threshold of 0.95 with samples of n = 40 patients per arm allows, on average, arms with response rates at least 0.15 below that expected to be dropped, while those with response rates 0.10 below could only be dropped when the sample size reached n = 100. Similarly, when the experimental arm is compared to the control (middle plots), rule 2, which evaluates the futility of continuing the trial, with a 0.05 threshold allows, on average, arms with a response probability at least 0.20 below that of the control to be dropped when the sample size was n = 40, and those with a response probability 0.15 below that of the control when n = 100. In contrast, a threshold of 0.95 for rule 3 (right plots) enables one to establish that the benefit of the experimental arm over the control is at least 0.40 with n = 40, and nearly 0.30 with n = 100.

Fig. 4 Posterior stopping rules according to the actual treatment benefit and sample size; the left plots refer to decision criterion 1 with p0 = 0.15, the middle plots to criterion 2 with Δ = 0, and the right plots to criterion 3 with δ* = 0.15. The mean estimates are from N = 10,000 independent simulations for each actual benefit (dB)

Obviously, when the threshold values are less stringent, the increased ability of the design to drop arms closer in efficacy to the control is counterbalanced by an increased propensity to drop efficacious arms. Assessing these false (positive or negative) decision rates was the further aim of the simulation study.

Assessing false decision rates

Tables 3, 4 and 5 summarize the simulation results for the arms dropped at the end of the first stage and the absolute bias in their treatment effect estimates on the definitive outcome at the time of the stopping decision, based on rules 1, 2 and 3, respectively, when the sample size was set at n = 40 or 100 per arm and the threshold values were set at stringent values, namely \( \gamma_1 = 0.95 \), \( \gamma_2 = 0.05 \) and \( \gamma_3 = 0.95 \).

Table 3 Simulation results for dropping treatment arms based on the first rule (R1) and the absolute bias for such arms in the estimated treatment effect at the time of the dropping decision – all priors on π_k (k = A, B, C) are non-informative Be(1, 1) priors; decision threshold set at 0.95
Table 4 Simulation results for dropping treatment arms based on the second rule (R2) and the absolute bias for such arms in the estimated treatment effect at the time of the dropping decision – all priors on π_k (k = A, B, C) are non-informative Be(1, 1) priors; decision threshold set at 0.05
Table 5 Simulation results evaluating Rule 3 when the threshold probability is set at 0.90

As expected, when the treatment was less efficacious than expected, the first rule allowed the arm to be dropped early in 30.7–52.5 % of cases when the absolute difference in response rates was 5 %, up to 96–99 % of cases when the absolute difference reached 20 % (Table 3). The mean sample size required to detect inefficacy was 25 patients for a 0.15 decrease in response rates, down to 15 for a 0.20 decrease. Conversely, the false negative stopping rates due to this first rule in the case of a beneficial treatment were low, with values of approximately 15–23 % when there was no benefit, less than 10 % when the benefit was 5 %, and less than 1 % for higher benefits (Table 3).

To account for the concurrent control arm, rule 2 was then applied to detect the lack of treatment benefit (Table 4). Compared to the first rule, a decision to stop early when the response rate was actually lower in the experimental group than in the control group was reached with similar frequency for small differences: for instance, stopping occurred in 32 % of cases versus 31 % when the response rate was 5 % below that of the control for n = 40, and in 46 % versus 52 % for n = 100. In contrast, false negative decisions to drop an arm were more frequent than with rule 1 in the same situation; for instance, for a minor benefit of 5 %, the second rule incorrectly proposed stopping for futility in 13–16 % of cases, compared to 7–9 % with the first rule, for n = 40 and n = 100, respectively. Expectedly, when \( \gamma_2 = 0.10 \), the results changed, with lower false decision rates (Table 6).

Table 6 Simulation results evaluating Rule 2 when the threshold probability is set at 0.10

Finally, when evaluating the third rule for detecting true benefits, the average sample size decreased to about 10 patients per arm when the absolute benefit increased to 45 %, while the false positive rate was only 6–7 % in the case of no benefit, in line with the threshold probability of \( \gamma_3 = 0.90 \) (Table 5). As expected, these figures changed when using the less stringent probability threshold of \( \gamma_3 = 0.80 \), for which the false positive rate reached 18–20 % in the absence of any benefit (Table 7).

Table 7 Simulation results evaluating Rule 3 when the threshold probability is set at 0.80

Discussion

There has been increasing evidence that the effectiveness of clinical trials can be improved by adopting a more integrated model that increases flexibility and maximizes the use of accumulated knowledge. We focused this work on adaptive MAMS designs to select effective drugs among a fixed set of new drugs compared to a control. So-called screening or select/drop designs aim at proposing changes in treatment regimens with the possible elimination of a treatment group based on information derived from accumulated data. Such designs appear particularly useful for rapidly evolving interventions and drugs, especially when outcomes occur sufficiently soon to permit adaptation of the trial design. This setting in which several treatments are compared to a single control allows heterogeneity in patient populations and disease courses to be considered [24, 25]. However, the heterogeneity in objectives, design, data analysis, and reporting of these multi-arm randomized trials has recently been highlighted [26]. Moreover, in ascertaining which treatment modalities are most effective, the presence of K experimental arms also introduces complexity. We used a binary outcome measure, given that it appears to be the most widely used endpoint in phase II trials. Of note, such a binary criterion in MAMS has been used only in frequentist designs [6, 27].

Indeed, most of the proposed MAMS designs, including optimal designs, used a frequentist framework for inference [4–8, 14, 28]. The application of Bayesian adaptive design methods has recently been advocated to maximize the knowledge-creating opportunity of a learning-phase study [13]. Surprisingly, although several designs have used Bayesian adaptive allocation methods [17, 29], Bayesian adaptive designs in terms of sample size or treatment allocation have been proposed mainly in the early phases of cancer drug development, notably in the setting of seamless phase I/II trials [13]. In the MAMS setting, Bayesian adaptive phase II screening designs have been proposed only for selecting/dropping arms using normal outcome measures [11], and more frequently by modifying the allocation probabilities for each arm. For instance, to select among treatment combinations of multiple agents, patients were adaptively allocated to one of the treatment combinations based on the posterior probabilities of all hypotheses of superiority of each combination for a continuous endpoint [29]. Even when MAMS designs were compared to adaptive randomization designs, only the latter were based on Bayesian inference, whereas the former used test statistics from group sequential methods [27].

We decided to focus on select/drop decisions while preserving the balance of sample allocation across arms. We first used stopping rules based on the posterior probability of inefficacy (or of over-toxicity), as previously done in related settings [30, 31]. Indeed, nearly all phase III trials include pre-specified inefficacy/futility interim monitoring rules to stop the trial early if the interim results strongly suggest that the experimental treatment has no benefit over the control [32]. In contrast, a phase II analysis in a phase II/III trial requires more evidence that the experimental treatment works better than the control [2]. Thus, we used the difference of response probabilities between the treated and control groups as a simple Bayesian measure of evidence regarding the treatment benefit. This measure has seldom been used in a Bayesian context [12], possibly because the density of the difference of two independent beta variables has no simple closed form. However, some analytical work has been published [18–20], and, more recently, software to calculate the probability that one random variable is greater than another has been made available (http://biostatistics.mdanderson.org/SoftwareDownload/). When this density can be computed or approximated, it can be used in several important applications. This illustrates how Bayesian methods give direct answers to the questions that most people want to ask, such as “which treatment is the best” [10]. Moreover, Bayesian tools enable decision making based on the difference in response probabilities and the quantification of the probability of benefit of each possible arm, which are more informative and transparent than p-values. They can be combined with adaptive design methodology to provide a very flexible and efficient decision-making process [33].

Due to the multiplicity of arms, we considered as the primary motivating design that of Xie et al. [13], who focused on multiple dose levels, although our approach is close to that proposed by Zaslavsky for two-arm trials [12]. This exemplifies the broad interest in, and clinical applications of, such Bayesian designs, which unfortunately remain underused in clinical practice [34].

Since a common concern in Bayesian data analysis is that an inappropriately informative prior may unduly influence posterior inferences, we reran the analyses using different priors, possibly distinguishing various amounts of previous information across the randomized arms as quantified by the effective sample size. This only slightly modified the results of the clinical trial. We restricted our considerations to conjugate beta priors, so that the prior information could be interpreted in terms of Bernoulli trials with a theoretical (effective) sample size. This appears to be an important issue when applying Bayesian methods in settings with small to moderate sample sizes, such as those proposed for MAMS trials [21].

Conclusions

Regardless of the inferential framework, adaptive trial design is a methodologically sound way to improve clinical trials, but it adds significant complexity. This approach requires boundary parameters to be chosen for the stopping decisions. Xie et al. [13] reported the use of a high criterion for action (\( \gamma_2 = 0.9 \)) as a default value based on a maximum cohort size of 36 (with 24 patients treated with the active dose and 12 with placebo), although calibration is often required. Thus, we calibrated the values of these thresholds through the simulation study. Indeed, the choice of these thresholds depends strongly on the desire to control false decisions in either direction, as typically considered in early trial phases. Otherwise, combining stopping rules 1 and 2 appears to be another option to improve such control [33].

Finally, this adaptive Bayesian approach, in which information existing at the time of trial initiation is combined with data accumulating during the trial, has also been used to identify the treatments that are most beneficial for specific patient subgroups [35–38]. Such an approach, in line with personalized medicine, appears to be an interesting research area to explore in the MAMS setting.

Abbreviations

B(): Bernoulli distribution
Be(): beta distribution
ESS: effective sample size
MAMS: multi-arm multi-stage
MRT: minimum required treatment response rate
MDS: myelodysplastic syndrome
MSE: mean square error
mult(): multinomial distribution
STR: sufficient treatment response rate

References

  1. Luce BR, Kramer JM, Goodman SN, Connor JT, Tunis S, Whicher D, Schwartz JS. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med. 2009;151:206–9. PMID: 19567619.
  2. Korn EL, Freidlin B, Abrams JS, Halabi S. Design issues in randomized phase II/III trials. J Clin Oncol Off J Am Soc Clin Oncol. 2012;30:667–71. doi:10.1200/JCO.2011.38.5732. PMID: 22271475; PMCID: PMC3295562.
  3. Rubinstein LV, Korn EL, Freidlin B, Hunsberger S, Ivy SP, Smith MA. Design issues of randomized phase II trials and a proposal for phase II screening trials. J Clin Oncol Off J Am Soc Clin Oncol. 2005;23:7199–206. doi:10.1200/JCO.2005.01.149. PMID: 16192604.
  4. Ellenberg SS. Select-drop designs in clinical trials. Am Heart J. 2000;139:S158–160. PMID: 10740123.
  5. Freidlin B, Korn EL, Gray R, Martin A. Multi-arm clinical trials of new agents: some design considerations. Clin Cancer Res Off J Am Assoc Cancer Res. 2008;14:4368–71. doi:10.1158/1078-0432.CCR-08-0325. PMID: 18628449.
  6. Bratton DJ, Phillips PPJ, Parmar MKB. A multi-arm multi-stage clinical trial design for binary outcomes with application to tuberculosis. BMC Med Res Methodol. 2013;13:139. doi:10.1186/1471-2288-13-139. PMID: 24229079; PMCID: PMC3840569.
  7. Cheung YK. Selecting promising treatments in randomized phase II cancer trials with an active control. J Biopharm Stat. 2009;19:494–508. doi:10.1080/10543400902802425. PMID: 19384691; PMCID: PMC2896482.
  8. Royston P, Barthel FM-S, Parmar MK, Choodari-Oskooei B, Isham V. Designs for clinical trials with time-to-event outcomes based on stopping guidelines for lack of benefit. Trials. 2011;12:81. doi:10.1186/1745-6215-12-81. PMID: 21418571; PMCID: PMC3078872.
  9. Berry DA. Bayesian clinical trials. Nat Rev Drug Discov. 2006;5:27–36. doi:10.1038/nrd1927. PMID: 16485344.
  10. Lee JJ, Chu CT. Bayesian clinical trials in action. Stat Med. 2012;31:2955–72. doi:10.1002/sim.5404. PMID: 22711340; PMCID: PMC3495977.
  11. Whitehead J, Cleary F, Turner A. Bayesian sample sizes for exploratory clinical trials comparing multiple experimental treatments with a control. Stat Med. 2015;34(12):2048–61. doi:10.1002/sim.6469. PMID: 25765252.
  12. Zaslavsky BG. Bayesian hypothesis testing in two-arm trials with dichotomous outcomes. Biometrics. 2013;69(1):157–63. doi:10.1111/j.1541-0420.2012.
  13. Xie F, Ji Y, Tremmel L. A Bayesian adaptive design for multi-dose, randomized, placebo-controlled phase I/II trials. Contemp Clin Trials. 2012;33:739–48. doi:10.1016/j.cct.2012.03.001. PMID: 22426247.
  14. Jung S-H. Randomized phase II trials with a prospective control. Stat Med. 2008;27:568–83. doi:10.1002/sim.2961. PMID: 17573688.
  15. Thall PF, Simon R. Practical Bayesian guidelines for phase IIB clinical trials. Biometrics. 1994;50:337–49. PMID: 7980801.
  16. Lee JJ, Gu X, Liu S. Bayesian adaptive randomization designs for targeted agent development. Clin Trials Lond Engl. 2010;7:584–96. doi:10.1177/1740774510373120. PMID: 20571130.
  17. Du Y, Wang X, Lee JJ. Simulation study for evaluating the performance of response-adaptive randomization. Contemp Clin Trials. 2015;40:15–25. doi:10.1016/j.cct.2014.11.006. PMID: 25460340; PMCID: PMC4314433.
  18. Pham-Gia T, Turkkan N, Eng P. Bayesian analysis of the difference of two proportions. Commun Stat Theory Methods. 1993;22:1755–71. doi:10.1080/03610929308831114.
  19. Kawasaki Y, Miyaoka E. A Bayesian inference of P(π1 > π2) for two proportions. J Biopharm Stat. 2012;22:425–37. doi:10.1080/10543406.2010.544438. PMID: 22416833.
  20. Cook J. Exact calculation of beta inequalities. Technical report. 2005. http://www.johndcook.com/exact_probability_inequalitiy.pdf
  21. Morita S, Thall PF, Müller P. Evaluating the impact of prior assumptions in Bayesian biostatistics. Stat Biosci. 2010;2:1–17. doi:10.1007/s12561-010-9018-x. PMID: 20668651; PMCID: PMC2910452.
  22. Moatti M, Zohar S, Facon T, Moreau P, Mary J-Y, Chevret S. Modeling of experts’ divergent prior beliefs for a sequential phase III clinical trial. Clin Trials Lond Engl. 2013;10:505–14. doi:10.1177/1740774513493528. PMID: 23820061.
  23. Spiegelhalter DJ, Myles JP, Jones DR, Abrams KR. Bayesian methods in health technology assessment: a review. Health Technol Assess Winch Engl. 2000;4:1–130. PMID: 11134920.
  24. Taylor JMG, Braun TM, Li Z. Comparing an experimental agent to a standard agent: relative merits of a one-arm or randomized two-arm Phase II design. Clin Trials Lond Engl. 2006;3:335–48. doi:10.1177/1740774506070654. PMID: 17060208.
  25. Ratain MJ, Sargent DJ. Optimising the design of phase II oncology trials: the importance of randomisation. Eur J Cancer Oxf Engl 1990. 2009;45:275–80. doi:10.1016/j.ejca.2008.10.029. PMID: 19059773.
  26. Baron G, Perrodeau E, Boutron I, Ravaud P. Reporting of analyses from randomized controlled trials with multiple arms: a systematic review. BMC Med. 2013;11:84. doi:10.1186/1741-7015-11-84. PMID: 23531230; PMCID: PMC3621416.
  27. Wason JMS, Trippa L. A comparison of Bayesian adaptive randomization and multi-stage designs for multi-arm clinical trials. Stat Med. 2014;33:2206–21. doi:10.1002/sim.6086. PMID: 24421053.
  28. Wason JMS, Jaki T. Optimal design of multi-arm multi-stage trials. Stat Med. 2012;31:4269–79. doi:10.1002/sim.5513. PMID: 22826199.
  29. Cai C, Yuan Y, Johnson VE. Bayesian adaptive phase II screening design for combination trials. Clin Trials Lond Engl. 2013;10:353–62. doi:10.1177/1740774512470316. PMID: 23359875; PMCID: PMC3867529.
  30. Huang X, Biswas S, Oki Y, Issa J-P, Berry DA. A parallel phase I/II clinical trial design for combination therapies. Biometrics. 2007;63:429–36. doi:10.1111/j.1541-0420.2006.00685.x. PMID: 17688495.
  31. Pan H, Xie F, Liu P, Xia J, Ji Y. A phase I/II seamless dose escalation/expansion with adaptive randomization scheme (SEARS). Clin Trials Lond Engl. 2014;11:49–59. doi:10.1177/1740774513500081. PMID: 24137041; PMCID: PMC4281526.
  32. Freidlin B, Korn EL. Monitoring for lack of benefit: a critical component of a randomized clinical trial. J Clin Oncol Off J Am Soc Clin Oncol. 2009;27:629–33. doi:10.1200/JCO.2008.17.8905. PMID: 19064977; PMCID: PMC2645857.
  33. Brannath W, Zuber E, Branson M, Bretz F, Gallo P, Posch M, Racine-Poon A. Confirmatory adaptive designs with Bayesian decision tools for a targeted therapy in oncology. Stat Med. 2009;28:1445–63. doi:10.1002/sim.3559.
  34. Chevret S. Bayesian adaptive clinical trials: a dream for statisticians only? Stat Med. 2012;31(11–12):1002–13. doi:10.1002/sim.4363. PMID: 21905067.
  35. Collins SP, Lindsell CJ, Pang PS, Storrow AB, Peacock WF, Levy P, Rahbar MH, Del Junco D, Gheorghiade M, Berry DA. Bayesian adaptive trial design in acute heart failure syndromes: moving beyond the mega trial. Am Heart J. 2012;164:138–45. doi:10.1016/j.ahj.2011.11.023. PMID: 22877798; PMCID: PMC3417230.
  36. Gu X, Yin G, Lee JJ. Bayesian two-step Lasso strategy for biomarker selection in personalized medicine development for time-to-event endpoints. Contemp Clin Trials. 2013;36:642–50. doi:10.1016/j.cct.2013.09.009. PMID: 24075829; PMCID: PMC3873734.
  37. Lai TL, Lavori PW, Liao OY-W. Adaptive choice of patient subgroup for comparing two treatments. Contemp Clin Trials. 2014;39:191–200. doi:10.1016/j.cct.2014.09.001. PMID: 25205644.
  38. Gao Z, Roy A, Tan M. Multistage adaptive biomarker-directed targeted design for randomized clinical trials. Contemp Clin Trials. 2015;42:119–31. doi:10.1016/j.cct.2015.03.001. PMID: 25778672.


Acknowledgments

We wish to thank Professor Pierre Fenaux for providing access to this Phase II screening trial.

Funding

This work benefited from a grant from the French National Cancer Institute, INCa (2014–132, R14208KK).

Availability of data and materials

Trial data supporting these findings are held by AP-HP, Paris, the study sponsor. The data cannot be shared until the trial has been completed, as the second stage is still ongoing.

Authors’ contributions

SC conceived the work, with secondary contributions from MU, SB, and IB. Preliminary analyses were performed by MU, SB, and IB. LJ and SC performed the final analyses and wrote the manuscript, which was critically revised for important intellectual content by MU, SB, and IB. All authors approved the final version to be published and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Authors’ information

L. Jacob is an MD-PhD candidate from the Ecole Normale Supérieure of Lyon. S. Chevret holds both MD and PhD degrees and leads the biostatistics and clinical epidemiology team of Saint-Louis Hospital in Paris. Maria Uvarova, Sandrine Boulet and Inva Begaj are students at a school of statistics.

Competing interests

The authors declare that they have no competing interests.

Ethics approval and consent to participate

The trial was approved by the French Ethics Committee of Ile de France X (reference P081225) in September, 2010.


Author information

Corresponding author

Correspondence to Sylvie Chevret.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Jacob, L., Uvarova, M., Boulet, S. et al. Evaluation of a multi-arm multi-stage Bayesian design for phase II drug selection trials – an example in hemato-oncology. BMC Med Res Methodol 16, 67 (2016). https://doi.org/10.1186/s12874-016-0166-7
