
A simulation study for comparing testing statistics in response-adaptive randomization



Response-adaptive randomization is able to assign more patients in a comparative clinical trial to the tentatively better treatment. However, due to the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent-sample comparisons still apply in adaptive randomization, provided that the patient allocation ratio converges asymptotically to an appropriate target. However, the small-sample properties of commonly used test statistics in response-adaptive randomization have not been fully studied.


Simulations are systematically conducted to characterize the statistical properties of eight test statistics under six response-adaptive randomization methods at six allocation targets, with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes less than 30, the present paper focuses on the case with a sample size of 30 to give general recommendations regarding test statistics for contingency tables in response-adaptive randomization at small sample sizes.


Among all asymptotic test statistics, Cook's correction to the chi-square test (T MC ) is the best in attaining the nominal size of the hypothesis test. Williams' correction to the log-likelihood-ratio test (T ML ) gives a slightly inflated type I error and higher power as compared with T MC , but it is more robust against imbalance in patient allocation. T MC and T ML are usually the two test statistics with the highest power in different simulation scenarios. When focusing on T MC and T ML , the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU) have the best ability to attain the correct size of the hypothesis test, respectively. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of different adaptive randomization methods and test statistics also depends on the allocation target. At the limiting allocation ratio of the drop-the-loser (DL) and randomized play-the-winner (RPW) urns, DL outperforms all other methods, including GDL. When comparing the power of test statistics in the same randomization method but at different allocation targets, the powers of the log-likelihood-ratio, log-relative-risk, log-odds-ratio, Wald-type Z, and chi-square test statistics are maximized at their corresponding optimal allocation ratios for power. Except for the optimal allocation target for log-relative-risk, the other four optimal targets could assign more patients to the worse arm in some simulation scenarios. Another optimal allocation target, R RSIHR , proposed by Rosenberger and Sriram (Journal of Statistical Planning and Inference, 1997), is aimed at minimizing the number of failures at fixed power using the Wald-type Z test statistic.
Among allocation ratios that always assign more patients to the better treatment, R RSIHR usually has less variation in patient allocation, and the values of variation are consistent across all simulation scenarios. Additionally, the patient allocation at R RSIHR is not too extreme. Therefore, R RSIHR provides a good balance between assigning more patients to the better treatment and maintaining the overall power.


Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are generally recommended for hypothesis testing in response-adaptive randomization, especially when sample sizes are small. The generalized drop-the-loser urn design is the recommended method for its good overall properties. Also recommended is the use of the R RSIHR allocation target.



The response-adaptive randomization (RAR) in clinical trials is a class of flexible ways of assigning treatment to new patients sequentially based on available data. The RAR adjusts the allocation probabilities to reflect the interim results of the trial, thereby allowing patients to benefit from the interim knowledge as it accumulates in the trial. In practice, unequal allocation probabilities are generated based on the current assessment of treatment efficacy, which results in more patients being assigned to the treatment that is putatively superior.

Many RAR designs have been proposed over the years [1–13]. The two key issues extensively investigated are the evaluation of parameter estimation and hypothesis testing. Because the assignment of new patients depends on the data observed up to that time, conventional estimates of treatment effect are often biased; therefore, efforts have been made to quantify and correct estimation bias [14, 15]. Recent theoretical work has focused on solving problems encountered in practice, including delayed responses, implementation in multi-arm trials, and the incorporation of covariates [1, 3, 11, 16–18]. Many recent theoretical developments are summarized in [19]. Additionally, in order to compare treatment efficacies through hypothesis testing, studies have been conducted on power comparisons and sample size calculations under the framework of adaptive randomization [20–24]. However, most of these works are based on large sample sizes and focus on asymptotic properties [4, 12, 22, 25, 26], which have not been fully studied at small sample sizes. The mathematical challenge imposed by correlated data makes it extremely difficult to derive exact solutions for finite samples. Up to now, only limited results on exact solutions have been available [15, 27], and computer simulation has to be relied upon when the sample size is small [23, 24], which is often the case in early phase II trials.

Each RAR design has its own objective, and there are both advantages and disadvantages associated with that objective. It is not our purpose to give a comprehensive assessment of different designs by comparing their advantages and disadvantages. Instead, the primary objective of the present study is to characterize the small-sample properties of RAR from a frequentist perspective. In particular, we focus on comparing the performance of commonly used test statistics in RAR of two-arm comparative trials with a binary outcome. Due to the departure from normality caused by data correlation and the discrete nature of a binary outcome, hypothesis tests usually cannot be controlled at a given nominal significance level. Thus, to make our simulation comparison more relevant, our assessment of hypothesis testing methods and RAR procedures is based on both the statistical power and the comparison of the attained type I error rate to its nominal value. Several RAR methods studied in our simulations can assign patients according to a given allocation target, which may be optimal in terms of maximizing the power or minimizing the expected number of treatment failures. Therefore, we also compare the properties of test statistics at different optimal allocation targets.

The remaining parts of this paper are organized into 4 sections. In the Methods Section, we introduce the adaptive randomization procedures, the optimal allocation rates, and the test statistics used in the simulation. In the Results Section, we present the simulation results. We provide a discussion and final recommendations regarding the RAR methods and hypothesis tests in the Discussion and Conclusions Sections.
Methods


In the present section, we briefly describe the randomization methods, asymptotic hypothesis test statistics, and optimal patient allocation targets that are relevant to our simulations. More detailed information can be found in the corresponding references.

Response-Adaptive Randomization (RAR)

The RAR procedures investigated in the present study are the randomized play-the-winner (RPW) [8, 10], drop-the-loser (DL) [28], sequential maximum likelihood estimation (SMLE) [12], doubly-adaptive biased coin [2, 3], sequential estimation-adjusted urn (SEU) [13], and generalized drop-the-loser (GDL) [11] designs. RPW, DL, SEU and GDL are all urn models in the sense that the treatment assignment for each patient can be obtained by sampling balls from an urn. In the usual clinical trial setting, an urn model consists of one urn with different types of balls that represent the different treatments under study. Patients are assigned to treatments by randomly selecting balls from the urn. Initially, the urn contains an equal number of balls for each of the treatments offered in the trial. As the trial progresses, certain rules are applied to update the contents of the urn in a way that favors the selection of balls corresponding to the better treatment. For example, under the RPW design, the observation of a successful treatment response leads to the addition of a (>0) balls of the same type to the urn; a treatment failure leads to the addition of b (>0) balls of the other type (a = b = 1 in our simulation). The limiting allocation rate of patients on treatment 1 is q 2/(q 1 + q 2), where q 1 = 1 - p 1 and q 2 = 1 - p 2 are the failure rates, and p 1 and p 2 are the success rates (or response rates) for treatments 1 and 2. In the DL model, patients are assigned to a treatment based on the type of ball that is drawn; however, a treatment failure results in the removal of a ball of the assigned type from the urn, and treatment successes are ignored. Because the urn can become empty with positive probability, immigration balls are added to the urn. If an immigration ball is drawn, one additional ball of each type is added, and the sampling process is repeated until a treatment ball is drawn.
The DL urn design has the same limiting allocation as the RPW urn, but less variability in patient allocation. Both SEU and GDL are urn models allowing fractional numbers of balls, and they can target any allocation rate. For the SEU method [13], if the limiting allocation of the RPW urn is the target in a two-arm trial, then q̂ 1(i) balls of type 2 and q̂ 2(i) balls of type 1 are added to the urn following the allocation of the ith patient, where q̂ 1(i) and q̂ 2(i) are the current estimates of the failure rates q 1 and q 2. Obviously, the response status of the ith patient is related to the contents of the SEU urn only through the calculation of q̂ 1(i) and q̂ 2(i). For a two-arm GDL urn model [11], when a treatment ball is drawn, a new patient is assigned accordingly, but the ball is not returned to the urn. Depending on the response of the patient, the conditional average numbers of balls being added back to the urn are b 1 and b 2 for treatments 1 and 2, respectively. Therefore, the conditional average numbers of type 1 and type 2 balls being taken out of the urn can be defined as d 1 and d 2, where d 1 = 1 - b 1 and d 2 = 1 - b 2. Immigration balls are also present in a GDL urn. Whenever an immigration ball is drawn, a 1 and a 2 balls are added for treatments 1 and 2, respectively. Zhang et al [11] have shown that the limiting allocation rate of patients on treatment 1 is
ρ 1 = (a 1/d 1)/(a 1/d 1 + a 2/d 2).     (1)


The GDL urn becomes a DL urn when a 1 = 1, a 2 = 1, b 1 = p 1, and b 2 = p 2. Although GDL is a general method with different ways of implementation, a convenient approach is taken in our simulation. When a treatment ball is drawn, the ball is not returned, and no ball is added regardless of the response of the patient. When an immigration ball is drawn, Cρ 1 and Cρ 2 balls of types 1 and 2 are added, where C is a constant, and ρ 1 and ρ 2 are the allocation targets on treatments 1 and 2, which are estimated sequentially using maximum likelihood estimates (MLE) [11].
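As a concrete illustration of the urn mechanics described above, the following is a minimal simulation of a single RPW trial with a = b = 1. This is a sketch under the stated rules, not the authors' code; the function name and return convention are our own.

```python
import random

def rpw_trial(p, n, a=1, b=1, seed=None):
    """Simulate one trial under the randomized play-the-winner (RPW) urn.

    p: (p1, p2) true success rates; n: total number of patients.
    On a success, `a` balls of the assigned type are added to the urn;
    on a failure, `b` balls of the other type are added (a = b = 1 here).
    Returns (n1, r1, n2, r2): patients and successes on each arm.
    """
    rng = random.Random(seed)
    urn = [1.0, 1.0]              # one ball per treatment initially
    counts = [[0, 0], [0, 0]]     # [patients, successes] per arm
    for _ in range(n):
        # draw a ball: arm 0 with probability proportional to its balls
        arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
        counts[arm][0] += 1
        if rng.random() < p[arm]:
            counts[arm][1] += 1
            urn[arm] += a         # success: reinforce the same treatment
        else:
            urn[1 - arm] += b     # failure: reinforce the other treatment
    (n1, r1), (n2, r2) = counts
    return n1, r1, n2, r2
```

Repeating such trials and recording the allocation n1/(n1 + n2) reproduces the convergence toward the limiting rate q 2/(q 1 + q 2) quoted above.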

The SMLE and doubly-adaptive biased coin design (DBCD) methods can also target any allocation ratio, and SMLE can be implemented as a special case of the DBCD method. In the DBCD method, the probability of the (i+1)th patient being assigned to treatment 1 is calculated by
g(r 1, ρ(i)),     (2)


where r 1 = n 1(i)/i and ρ(i) are the current allocation rate and the estimated allocation rate on treatment 1 [2, 3]. The properties of the DBCD depend largely on the selection of g, which can be considered a function measuring the deviation from the allocation target. In the present study, we use the following function suggested by Hu and Zhang [3]:


where α is a tuning parameter. When α approaches infinity, the DBCD becomes deterministic, always assigning the next patient so as to correct the current deviation from the allocation target. When α equals 0, the assignment probability is simply the current MLE of the target ρ, and the DBCD method is essentially the same as the SMLE design proposed by Melfi et al [12].
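A minimal sketch of the DBCD assignment probability, assuming the Hu-Zhang form of g described above; the function name and the boundary handling for empty arms are our own.

```python
def hz_allocation_prob(x, rho, alpha):
    """Hu-Zhang allocation function g(x, rho) for the DBCD.

    x: current proportion of patients on treatment 1 (n1(i)/i);
    rho: current estimate of the target allocation for treatment 1;
    alpha: tuning parameter (alpha = 0 recovers the SMLE rule g = rho).
    Returns the probability of assigning the next patient to treatment 1.
    """
    if x <= 0.0:
        return 1.0            # boundary convention: feed an empty arm
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** alpha
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** alpha
    return num / den
```

Note that when the current allocation x equals the target rho, the function returns rho itself, and for x below the target it returns a probability above rho, pushing the allocation back toward the target; larger alpha makes this correction more aggressive.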

Hypothesis Tests for Two-Arm Comparative Trials

In two-arm comparative trials, the results of a binary outcome variable can be summarized in a 2 × 2 contingency table (Table 1). The following hypothesis test is often conducted to compare treatment efficacy:
H 0: p 1 = p 2 versus H 1: p 1 ≠ p 2.     (4)

Table 1 Summary of data from a two-arm comparative clinical trial

Nine test statistics for the hypothesis test in (4) are given in Table 2. When the relative risk (q 1/q 2) and the odds ratio (p 1 q 2/q 1 p 2) are used to quantify the differences between the two treatment arms, the test statistics are the log-relative-risk and log-odds-ratio statistics, T Risk and T Odds , which are asymptotically distributed as chi-square with one degree of freedom. When the simple difference is used to measure the treatment effect, the applicable test statistics are the Wald-type test statistic T Wald and the score-type test statistic T Chisq , where the variance of the simple difference in response rates is evaluated at H 1 or H 0 , respectively. Additionally, a test statistic based on the logarithm of the likelihood ratio (T LLR ) can also be constructed. Besides the five commonly used test statistics mentioned above, four modified test statistics are also included in Table 2. T MO is a modified log-odds-ratio test proposed by Gart using the approximation of discrete distributions by their continuous analogues [29]. As shown in Table 2, T MO is essentially a modification of T Odds obtained by adding 0.5 to each cell of a 2 × 2 table. Similarly, Agresti and Caffo proposed a modification of T Wald obtained by adding 1 to each cell of the contingency table [30], which results in the test statistic T MW in Table 2. T MC is Cook's continuity correction to the chi-square test statistic T Chisq . Williams provided a modification to the log-likelihood-ratio test T LLR [31]: the original test statistic is improved by multiplying by a scale factor such that the null distribution of the new test statistic T ML has the same moments as the chi-square distribution.

Table 2 Test statistics
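Table 2 itself is not reproduced in this extract. As an illustration only, the following sketch computes two corrected statistics of the kind discussed: a continuity-corrected chi-square of the Yates form (used here as a stand-in for the Cook correction tabulated in Table 2, whose exact form may differ) and the log-likelihood-ratio statistic scaled by the standard Williams divisor for a 2 × 2 table. Function names and the zero-margin handling are our own.

```python
from math import log

def yates_chi2(n1, r1, n2, r2):
    """Continuity-corrected chi-square for a 2x2 table (Yates form).

    n1, n2: patients per arm; r1, r2: successes per arm.
    """
    f1, f2 = n1 - r1, n2 - r2
    n, r, f = n1 + n2, r1 + r2, f1 + f2
    if min(n1, n2, r, f) == 0:
        return 0.0            # degenerate margin: no evidence either way
    num = max(abs(r1 * f2 - r2 * f1) - n / 2.0, 0.0) ** 2
    return n * num / (n1 * n2 * r * f)

def williams_g2(n1, r1, n2, r2):
    """Log-likelihood-ratio statistic with the Williams scale correction
    (divisor q for a 2x2 table, one degree of freedom)."""
    f1, f2 = n1 - r1, n2 - r2
    n, r, f = n1 + n2, r1 + r2, f1 + f2
    if min(n1, n2, r, f) == 0:
        return 0.0
    g2 = 0.0
    for obs, row, col in [(r1, n1, r), (f1, n1, f), (r2, n2, r), (f2, n2, f)]:
        exp = row * col / n
        if obs > 0:           # a zero cell contributes 0 in the limit
            g2 += 2.0 * obs * log(obs / exp)
    q = 1.0 + (n / n1 + n / n2 - 1.0) * (n / r + n / f - 1.0) / (6.0 * n)
    return g2 / q
```

Both statistics are referred to the chi-square distribution with one degree of freedom, so for instance a value above 3.84 is significant at the 0.05 level.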

Since all test statistics in Table 2 are referred to the same asymptotic chi-square distribution with one degree of freedom, they are asymptotically equivalent and any one of them can be used at large sample sizes. At small sample sizes, an exact test can be conducted if a model is specified for the data given in Table 1. For example, depending on the number of fixed margins predetermined for the design, one of the following three models can be applied [32]:
P(r 1 | n, n 1, r) = h(r 1 | n, n 1, r),     (5)
P(r 1, r | n, n 1, p) = h(r 1 | n, n 1, r)b(r | n, p),     (6)
P(r 1, r, n 1 | n, p, ρ) = h(r 1 | n, n 1, r)b(r | n, p)b(n 1 | n, ρ),     (7)




where h(r 1|n, n 1, r) represents the hypergeometric distribution of r 1, b(r|n, p) gives the binomial distribution of r under the null hypothesis of equal response rates (H 0: p 1 = p 2 = p), and b(n 1|n, ρ) denotes the binomial distribution of the number of patients on arm 1 with an allocation ratio of ρ (ρ = 0.5 for equal randomization). The p value of an exact test can be calculated by maximizing the probability in (5), (6), or (7) over the two nuisance parameters, p and ρ. However, due to data dependency, none of the above three models is directly applicable in adaptive randomization. For example, the allocation ratio ρ in adaptive randomization is a random variable with an unknown distribution, and the binomial distribution of n 1 assumed in model (7) is not valid even when the null hypothesis is true. Therefore, in adaptive randomization, unconditional exact tests are not available, and asymptotic test statistics such as the ones in Table 2 are required for testing the hypothesis in (4).

Optimal Allocation Ratios

The SMLE, DBCD, SEU, and GDL methods can be utilized to allocate patients according to different allocation targets. The allocation targets simulated in the present study are summarized in Table 3, where R Risk , R Odds , R Wald , R Chisq , and R LLR are the optimal allocation ratios maximizing the power of T Risk , T Odds , T Wald , T Chisq , and T LLR , respectively, at a fixed sample size. The derivations of these allocation ratios can be found in [33, 34]; each is obtained by minimizing the variance of the corresponding test statistic at a fixed total sample size, which in turn maximizes the power of that test statistic. R RSIHR is a recently proposed allocation target that minimizes the expected total number of failures among all trials with the same power [15, 33]. The general theoretical framework and the practical implementation of optimal allocation in k-arm trials with binary outcomes are discussed and demonstrated by Tymofyeyev et al [35], where the optimization can be conducted over different goals. In practice, the performance of the methodology depends on the chosen RAR procedure. The present simulation study focuses only on two-arm trials, with the goal of maximizing the power or minimizing the total number of failures.
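Table 3 is not reproduced in this extract. For illustration, the sketch below computes three of the targets using closed forms as commonly stated in the optimal-allocation literature cited here; the remaining targets in Table 3 are omitted, and the function name is our own.

```python
from math import sqrt

def allocation_targets(p1, p2):
    """Selected allocation targets for arm 1 (illustrative sketch).

    R_RPW  : limiting allocation of the RPW/DL urns, q2/(q1 + q2);
    R_Wald : Neyman-type allocation maximizing power of the Wald test,
             sqrt(p1*q1)/(sqrt(p1*q1) + sqrt(p2*q2));
    R_RSIHR: sqrt(p1)/(sqrt(p1) + sqrt(p2)), minimizing expected
             failures at fixed power of the Wald-type Z test.
    """
    q1, q2 = 1.0 - p1, 1.0 - p2
    return {
        "R_RPW": q2 / (q1 + q2),
        "R_Wald": sqrt(p1 * q1) / (sqrt(p1 * q1) + sqrt(p2 * q2)),
        "R_RSIHR": sqrt(p1) / (sqrt(p1) + sqrt(p2)),
    }
```

Evaluating these forms reproduces two behaviors noted in the text: R_Wald can fall below 0.5 (more patients to the worse arm) when the better arm has the smaller variance, while R_RSIHR always favors the better arm and is less extreme than R_RPW.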

Table 3 Allocation targets
Results


Simulations are conducted at different total numbers of patients ranging from 20 to 200. To simplify the presentation, the results for trials with 30 patients are shown here. When there are fewer than 30 patients, adaptive randomization is generally not recommended, and for sample sizes of 100 or larger all methods yield similar properties in general. For all of the urn models, one ball for each treatment is consistently used as the initial contents of the urn. The number of immigration balls is 1 for both the DL and GDL urns. The tuning parameter of DBCD, α, is fixed at 0 or 2; when α is 0, the procedure reduces to the SMLE method. The value of the constant C in GDL is 2, which is equivalent to adding 2 treatment balls on average when an immigration ball is drawn. All simulation results are calculated based on 10,000 replicates.
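To illustrate how such operating characteristics can be estimated, the following self-contained sketch evaluates the type I error of a continuity-corrected chi-square (Yates form, standing in for the Cook correction) under RPW randomization with a = b = 1 at n = 30. The replicate count is reduced from the study's 10,000 for brevity; all names are our own.

```python
import random

def simulate_type1_error(p, n=30, reps=2000, crit=3.841, seed=1):
    """Estimate the type I error of a continuity-corrected chi-square
    under RPW randomization, where `p` is the common success rate of
    both arms and `crit` is the 0.05 critical value of chi-square(1)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        urn = [1.0, 1.0]
        tally = [[0, 0], [0, 0]]          # [patients, successes] per arm
        for _ in range(n):
            arm = 0 if rng.random() < urn[0] / sum(urn) else 1
            tally[arm][0] += 1
            if rng.random() < p:
                tally[arm][1] += 1
                urn[arm] += 1             # success: same-type ball added
            else:
                urn[1 - arm] += 1         # failure: other-type ball added
        (n1, r1), (n2, r2) = tally
        f1, f2 = n1 - r1, n2 - r2
        ntot, r, f = n1 + n2, r1 + r2, f1 + f2
        if min(n1, n2, r, f) > 0:         # skip degenerate tables
            num = max(abs(r1 * f2 - r2 * f1) - ntot / 2.0, 0.0) ** 2
            if ntot * num / (n1 * n2 * r * f) > crit:
                rejections += 1
    return rejections / reps
```

Under the null hypothesis the estimated rejection rate should sit near or below the nominal 0.05, reflecting the conservatism of continuity-corrected tests at this sample size.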

For the purpose of comparison, the true allocation rates are shown in Table 4, and the simulated allocation rates on arm 1 are shown in Table 5. Among all RAR methods, DBCD has the best ability to attain the true allocation target. The comparison between SMLE and DBCD shows that the allocation becomes more unbalanced and the variation of DBCD decreases with increasing values of the tuning exponent α. On the other hand, the patient allocation of SEU results in a more balanced mean allocation between the two arms, with a much larger variation as compared with the other RAR methods. The GDL has the lowest variation among the four sequential RAR methods. When R RPW (the same as R DL ) is the allocation target, the DL urn method has the lowest variation in patient allocation, which is consistent with the fact that the lower bound of the estimate of Var(R RPW ) is attained by the DL urn [4]. The comparison among allocation targets shows that R LLR has the lowest variation in patient allocation, and the highest variation is usually found at R RPW or R Risk . However, R RPW and R Risk are usually the top two allocation targets in assigning more patients to the better treatment. R Wald , R Odds , and R LLR assign more patients to the worse arm in some simulation cases. Among the three allocation targets that always assign more patients to the better treatment (R RSIHR , R Risk and R RPW ), R RSIHR has a stable and often the lowest variation in patient allocation.

Table 4 Asymptotic allocation rates on arm 1 calculated from true p 1 and p 2
Table 5 Mean and standard deviation (in parenthesis) of allocation rate on arm 1 for n = 30.

The simulation results are obtained for five null cases and ten alternative cases, and Table 6 gives the summary by averaging the results over the five null cases and the ten alternative cases for a given RAR method and at a given allocation target. Detailed simulation results for each test statistic are shown in Tables 7, 8, 9, 10, 11, 12 with one table for each of the six allocation targets. To simplify the presentation, the results are shown only for the four modified test statistics T MW , T MO , T MC , T ML , and the log-relative-risk test statistic T Risk because they tend to have better performance than the four corresponding unmodified tests. The qualitative comparisons among test statistics, RAR methods, and allocation targets can be made based on the results in Table 6.

Table 6 The mean and standard deviation (in parenthesis) of type I error and power.
Table 7 Power and type I error at R Wald (alpha = 0.05, n = 30).
Table 8 Power and type I error at R Risk (alpha = 0.05, n = 30).
Table 9 Power and type I error at R Odds (alpha = 0.05, n = 30).
Table 10 Power and type I error at R LLR (alpha = 0.05, n = 30).
Table 11 Power and type I error at R RSIHR (alpha = 0.05, n = 30).
Table 12 Power and type I error at R RPW (alpha = 0.05, n = 30).

As shown in Table 6 (also see Tables 7, 8, 9, 10, 11, 12), the worst performance is found for T MO and T Risk , which are often conservative, with less than nominal type I error rates. T MW is always slightly conservative across all simulation cases. Overall, T MC is the best in attaining the correct type I error rate. T ML is slightly inflated as compared with the chi-square test T MC . However, simulation results not shown here indicate that T ML is very robust against imbalance in patient allocation even when the sample size is 20. The comparison between different RAR methods shows that the mean type I errors of GDL and SEU can usually match the nominal size of the test better than the other methods when T MC and T ML are used, respectively. The type I error of DBCD is usually the largest, except at R Odds . The overall type I error of SEU is comparable with that of GDL.

The power comparison of different test statistics indicates that T Risk is the statistic with the highest power at R Risk , but with a substantially inflated type I error. Except at R Risk , T MC or T ML is the statistic with the highest power. Usually, GDL has the highest power and SEU has the lowest power among all RAR methods. DBCD and SMLE have similar power, but DBCD is more powerful in most cases. At the target R RPW , the DL urn has the best statistical properties. On average, the target with the lowest power achieved by the test statistics is R Risk . The highest overall power is usually achieved at R RSIHR and R LLR , but R LLR has the disadvantage of assigning more patients to the worse treatment in some cases.
Discussion


In response-adaptive randomization, the assignment of a new patient depends on the treatment outcomes of patients previously enrolled in the trial. Delayed responses are often encountered in practice. Recently, the problem of delayed response in the multi-arm generalized drop-the-loser urn and the generalized Friedman's urn design has been studied for both continuous and discontinuous outcomes [11, 16, 17, 36]. It has been shown that, under reasonable assumptions about the delay, the asymptotic properties of adaptive designs are not affected by the delay. In the present study, the primary focus is the comparison of commonly used test statistics for 2 × 2 tables; in our simulations, it is assumed that the response status of every patient already in the trial is available before a new patient is allocated. Based on results not shown here, a less extreme allocation with higher variation would be expected if a random delay were assumed.

The RAR methods simulated in the present study are aimed at assigning patients to the better treatment with probabilities higher than what would otherwise be allowed by equal randomization. The price paid is that the sample sizes of the two arms being compared are no longer fixed, and the adaptation in patient allocation can complicate the statistical inference at the end of the trial. The properties of test statistics change when the patient allocation ratio changes in adaptive randomization. The power of test statistics shown in the present simulation study is obtained by averaging over trials with an unknown distribution of allocation ratios. As shown in our simulation results, a large deviation from the nominal significance level of the hypothesis test can be found even under the null hypothesis. Therefore, the practice of comparing asymptotic hypothesis testing methods based solely on statistical power under the alternative hypothesis is not recommended. It is important to compare adaptive randomization methods based on both the type I error rate and the statistical power, especially when the sample size is small.

General recommendations given in the Results section are based on the aggregated results across different settings. Because the performances of different test statistics, RAR methods, and allocation targets are closely related to each other, recommendations for a specific scenario can be found in the detailed simulation results in Tables 7, 8, 9, 10, 11, 12.

Based on the simulation results, Cook's correction to the chi-square test statistic, T MC , and Williams' correction to the log-likelihood-ratio test, T ML , are recommended for hypothesis testing at the end of adaptive randomization. T MC has a good ability to attain the correct significance level, and is relatively robust against changes of RAR method or allocation target. T ML has a more robust performance than T MC and higher power, but its type I error is slightly inflated as compared with T MC . However, T ML attains a more accurate type I error than T MC when the sample size is small. The original Wald-type Z test statistic T Wald , which is very sensitive to patient allocation and has an inflated type I error, should be avoided at small sample sizes. On the other hand, T MW (Agresti's correction to T Wald ) and T MO (the modified log-odds-ratio test) are too conservative and underpowered at small sample sizes.

The primary objective of the current study is to compare test statistics. Since the recommended test statistics are T MC and T ML , the comparisons between RAR methods and allocation targets are mainly based on these two selected test statistics. Among the SMLE, DBCD, SEU, and GDL methods, GDL seems to be the best due to its ability to attain the correct size of the hypothesis test and its comparatively higher overall power at most allocation targets. Therefore, GDL is the recommended RAR method. The sequential estimation-adjusted urn (SEU) method is comparable with GDL in controlling the type I error. However, SEU is often underpowered, and its high variation in patient allocation makes it less useful in practice. The DBCD method with tuning exponent α equal to 2 is the best in targeting the true allocation ratio. When T MC is the test statistic, DBCD has a slightly inflated type I error and slightly lower power as compared with GDL. Therefore, among the values of α considered, α equal to 2 gives the best balance among controlling the type I error, obtaining higher power, and targeting a given allocation ratio. The simulation comparison of statistical power for different RAR methods also indicates that the DL urn has the best statistical properties at R RPW , mainly due to its low variation in patient allocation.

The statistical characteristics of hypothesis tests and RAR methods also depend on allocation targets. At the R Wald , R Odds , and R LLR targets, more patients could be assigned to the inferior treatment in certain parameter spaces. In contrast, R Risk , R RPW , and R RSIHR always assign more patients to the better treatment. However, due to the more extreme allocations of R Risk and R RPW , both the power and the type I error at R Risk and R RPW suffer as compared with R RSIHR . On the other hand, the variation of patient allocation at R RSIHR is relatively small, with a stable value across all simulation scenarios. Additionally, among all designs with similar power using the Wald-type test statistic, the R RSIHR allocation ratio achieves fewer failures in the whole trial. Therefore, R RSIHR is recommended among all the allocation targets in the present study.

In addition to the frequentist developments on response-adaptive randomization, Bayesian decision-theoretic methods have also been proposed in the context of the bandit problem. The concept of a "patient horizon" was brought up to include future patients to whom the current study results might be applied. The goal is to maximize the total number of successes in patients enrolled in the study, with or without including the patient horizon. A more detailed exposition of Bayesian methods for response-adaptive randomization is beyond the scope of this paper, and interested readers should consult the original work on this topic [37–40].
Conclusions


Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are recommended for hypothesis testing in RAR at small sample sizes. Among all the RAR methods compared, the GDL method has the better statistical properties in controlling the type I error and maintaining high statistical power. The RSIHR allocation target provides a good balance between assigning more patients to the better treatment and maintaining a high overall power.



Abbreviations

RAR: Response-adaptive randomization
RPW: Randomized play-the-winner
DL: Drop-the-loser
DBCD: Doubly-adaptive biased coin design
SMLE: Sequential maximum likelihood estimation design
SEU: Sequential estimation-adjusted urn
GDL: Generalized drop-the-loser urn
RSIHR: Optimal allocation target minimizing the total number of failures for Wald-type test statistics at fixed power
MLE: Maximum likelihood estimate


  1. 1.

    Andersen J, Faries D, Tamura R: A randomized play-the-winner design for multi-arm clinical trials. Communications in Statistics-Theory and Methods. 1994, 23: 309-323. 10.1080/03610929408831257.

    Article  Google Scholar 

  2. 2.

    Eisele JR: The doubly adaptive biased coin design for sequential clinical trials. Journal of Statistical Planning and Inference. 1994, 38: 249-262. 10.1016/0378-3758(94)90038-8.

    Article  Google Scholar 

  3. 3.

    Hu FF, Zhang LX: Asymptotic properties of doubly adaptive biased coin designs for multi-treatment clinical trials. Annals of Statistics. 2004, 32 (1): 268-301.

    Google Scholar 

  4. 4.

    Ivanova S, Rosenberger WF, Durham S, Flournoy N: A birth and death urn for randomized clinical trials: asymptotic methods. Sankhya: The Indian Journals of Statistics. 2000, 62 (B): 104-118.

    Google Scholar 

  5. 5.

    Li W, Durham SD, Flournoy N: Randomized Pôlya urn. 1996 Proceedings of the Biopharmaceutical Section of the American Statistical Association: 1997; Alexandria: American Statistical Association. 1997, 166-170.

    Google Scholar 

  6. 6.

    Rosenberger WF, Stallard N, Ivanova A, Harper CN, Ricks ML: Optimal adaptive designs for binary response trials. Biometrics. 2001, 57: 909-913. 10.1111/j.0006-341X.2001.00909.x.

    CAS  Article  PubMed  Google Scholar 

  7. 7.

    Wei LJ: The generalized Polya's urn design for sequential medical trials. Annals of Statistics. 1979, 7: 291-296. 10.1214/aos/1176344614.

    Article  Google Scholar 

  8. 8.

    Wei LJ, Durham SD: The randomized play-the-winner rule in medical trials. Journal of American Statistical Association. 1978, 85: 156-162. 10.2307/2289538.

    Article  Google Scholar 

  9. 9.

    Yang Y, Zhu D: Randomized allocation with nonparametric estimation for a multi-armed bandit problem with covariates. Annals of Statistics. 2002, 30: 100-121. 10.1214/aos/1015362186.

    Article  Google Scholar 

  10. 10.

    Zelen M: Play the winner rule and the controlled clinical trial. Journal of the American Statistical Association. 1969, 64: 131-146. 10.2307/2283724.

    Article  Google Scholar 

  11. 11.

    Zhang LX, Chan WS, Cheung SH, Hu FF: A generalized drop-the-loser urn for clinical trials with delayed responses. Statistica Sinica. 2007, 17 (1): 387-409.

  12.

    Melfi VF, Page C, Geraldes M: An adaptive randomized design with application to estimation. Canadian Journal of Statistics. 2001, 29 (1): 107-116. 10.2307/3316054.

  13.

    Zhang LX, Hu FF, Cheung SH: Asymptotic theorems of sequential estimation-adjusted urn models. Annals of Applied Probability. 2006, 16 (1): 340-369. 10.1214/105051605000000746.

  14.

    Coad DS, Ivanova A: Bias calculations for adaptive urn designs. Sequential Analysis. 2001, 20 (3): 91-116. 10.1081/SQA-100106051.

  15.

    Rosenberger WF, Sriram TN: Estimation for an adaptive allocation design. Journal of Statistical Planning and Inference. 1997, 59: 309-319. 10.1016/S0378-3758(96)00109-7.

  16.

    Bai ZD, Hu FF, Rosenberger WF: Asymptotic properties of adaptive designs for clinical trials with delayed response. Annals of Statistics. 2002, 30 (1): 122-139. 10.1214/aos/1015362187.

  17.

    Hu FF, Zhang LX: Asymptotic normality of urn models for clinical trials with delayed response. Bernoulli. 2004, 10: 447-463. 10.3150/bj/1089206406.

  18.

    Rosenberger WF, Vidyashankar AN, Agarwal DK: Covariate-adjusted response-adaptive designs for binary response. Journal of Biopharmaceutical Statistics. 2001, 11: 227-236.

  19.

    Hu FF, Rosenberger WF: The Theory of Response-Adaptive Randomization in Clinical Trials. 2006, Hoboken, New Jersey: John Wiley & Sons, Inc.

  20.

    Hu FF, Rosenberger WF: Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. Journal of the American Statistical Association. 2003, 98 (463): 671-678. 10.1198/016214503000000576.

  21.

    Zhang LJ, Rosenberger WF: Response-adaptive randomization for clinical trials with continuous outcomes. Biometrics. 2006, 62 (2): 562-569. 10.1111/j.1541-0420.2005.00496.x.

  22.

    Hu FF, Rosenberger WF, Zhang LX: Asymptotically best response-adaptive randomization procedures. Journal of Statistical Planning and Inference. 2006, 136 (6): 1911-1922. 10.1016/j.jspi.2005.08.011.

  23.

    Morgan CC, Coad DS: A comparison of adaptive allocation rules for group-sequential binary response clinical trials. Statistics in Medicine. 2007, 26 (9): 1937-1954. 10.1002/sim.2693.

  24.

    Guimaraes P, Palesch Y: Power and sample size simulations for Randomized Play-the-Winner rules. Contemporary Clinical Trials. 2007, 28 (4): 487-499. 10.1016/j.cct.2007.01.006.

  25.

    Matthews PC, Rosenberger WF: Variance in randomized play-the-winner clinical trials. Statistics & Probability Letters. 1997, 35: 233-240. 10.1016/S0167-7152(97)00018-7.

  26.

    Bai ZD, Hu FF: Asymptotics in randomized urn models. Annals of Applied Probability. 2005, 15 (1B): 914-940. 10.1214/105051604000000774.

  27.

    Matthews PC, Rosenberger WF: Variance in randomized play-the-winner clinical trials. Statistics & Probability Letters. 1997, 35 (3): 233-240. 10.1016/S0167-7152(97)00018-7.

  28.

    Ivanova A: A play-the-winner-type urn design with reduced variability. Metrika. 2003, 58: 1-13.

  29.

    Gart JJ: Alternative analyses of contingency tables. Journal of the Royal Statistical Society B. 1966, 28: 164-179.

  30.

    Agresti A, Caffo B: Simple and effective confidence intervals for proportions and differences of proportions result from adding two successes and two failures. The American Statistician. 2000, 54 (4): 280-288. 10.2307/2685779.

  31.

    Williams DA: Improved likelihood ratio tests for complete contingency tables. Biometrika. 1976, 63: 33-37. 10.1093/biomet/63.1.33.

  32.

    Upton GJG: A comparison of alternative tests for the 2 × 2 table comparative trial. Journal of the Royal Statistical Society A. 1982, 145: 86-105. 10.2307/2981423.

  33.

    Rosenberger WF, Lachin JM: Randomization in Clinical Trials: Theory and Practice. 2002, New York: Wiley

  34.

    Jennison C, Turnbull BW: Group Sequential Methods with Applications to Clinical Trials. 2000, Boca Raton: Chapman & Hall/CRC

  35.

    Tymofyeyev Y, Rosenberger WF, Hu FF: Implementing optimal allocation in sequential binary response experiments. Journal of the American Statistical Association. 2007, 102 (477): 224-234. 10.1198/016214506000000906.

  36.

    Sun RB, Cheung SH, Zhang LX: A generalized drop-the-loser rule for multi-treatment clinical trials. Journal of Statistical Planning and Inference. 2007, 137 (6): 2011-2023. 10.1016/j.jspi.2006.06.039.

  37.

    Berry DA, Fristedt B: Bandit Problems. 1985, New York: Chapman and Hall

  38.

    Thompson WR: On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika. 1933, 25: 275-294.

  39.

    Berry DA, Eick SG: Adaptive assignment versus balanced randomization in clinical trials: a decision analysis. Statistics in Medicine. 1995, 14: 231-246. 10.1002/sim.4780140302.

  40.

    Cheng Y, Berry DA: Optimal adaptive randomized designs for clinical trials. Biometrika. 2007, 94 (4): 673-689. 10.1093/biomet/asm049.


Pre-publication history

  1. The pre-publication history for this paper can be accessed here:



Acknowledgements

This work was supported in part by grants CA16672 from the National Cancer Institute and W81XWH-06-1-0303 and W81XWH-07-1-0306 from the Department of Defense. The authors thank Dr. Lunagomez for helpful discussions. The authors also thank Ms. Lee Ann Chastain for her help, which greatly improved the presentation of our study.

Author information



Corresponding author

Correspondence to J Jack Lee.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

XMG conducted the simulation part of the study. Both XMG and JJL participated in designing the study and writing the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Gu, X., Lee, J.J. A simulation study for comparing testing statistics in response-adaptive randomization. BMC Med Res Methodol 10, 48 (2010).

Keywords


  • Allocation Ratio
  • Adaptive Randomization
  • Patient Allocation
  • Allocation Target
  • Good Statistical Property