Optimal trial design selection: a comparative analysis between two-arm and three-arm trials incorporating network meta-analysis for evaluating a new treatment
BMC Medical Research Methodology volume 23, Article number: 267 (2023)
Abstract
Background
Planning the design of a new trial comparing two treatments already in a network of trials, with an a priori plan to estimate the effect size using a network meta-analysis, increases power or reduces the sample size requirements. However, when the comparison of interest is between a treatment already in the existing network (old treatment) and a treatment that has not been studied previously (new treatment), the impact of leveraging information from the existing network to inform trial design has not been extensively investigated. We aim to identify the most powerful trial design for a comparison of interest between an old treatment A and a new treatment Z, given a fixed total sample size. We consider three possible designs: a two-arm trial between A and Z ('direct two-arm'), a two-arm trial between another old treatment B and Z ('indirect two-arm'), and a three-arm trial among A, B, and Z.
Methods
We compare the standard error of the estimated effect size between treatments A and Z for each of the three trial designs using derived formulas. For continuous outcomes, the direct two-arm trial always has the largest power, while for a binary outcome, the comparison of the minimum variances among the three trial designs is conclusive only when \(p_A(1-p_A) \ge p_B(1-p_B)\). Simulation studies are conducted to demonstrate the potential for the indirect two-arm and three-arm trials to outperform the direct two-arm trial in terms of power under the condition \(p_A(1-p_A) < p_B(1-p_B)\).
Results
Based on the simulation results, we observe that the indirect two-arm and three-arm trials have the potential to be more powerful than a direct two-arm trial only when \(p_A(1-p_A) < p_B(1-p_B)\). This power advantage is influenced by various factors, including the risks of the three treatments, the total sample size, and the standard error of the estimated effect size from the existing network meta-analysis.
Conclusions
The standard two-arm trial design between the two treatments in the comparison of interest may not always be the most powerful design. By utilizing information from the existing network meta-analysis, incorporating an additional old treatment into the trial design, through an indirect two-arm trial or a three-arm trial, can increase power.
Background
Network meta-analysis (NMA) compares three or more interventions by combining direct and indirect evidence from a network of trials. When designing a new trial, NMA can be used to leverage existing trial data, reducing the required sample size and increasing the power to detect treatment effects [1].
Nikolakopoulou et al. (2014) [2] provided a framework for study design that helps investigators decide the treatments, total sample sizes, and the number of studies needed to achieve a desirable level of power, given the existing evidence. While their study examined the comparison of interest (COI) between treatments that already existed in the network of trials, an important and valuable situation is to compare one treatment that appears in the existing network of trials with one treatment that does not. We refer to these as the 'old treatment' and the 'new treatment', respectively. We are unaware of any guidance or literature investigating the design of a future trial when the COI is between a new treatment and an old treatment.
Suppose a specific total sample size has been decided based on the available physical or financial resources. When a specific comparison (A-Z) is of interest, with A an old treatment and Z a new treatment, it is of interest to know which of the possible designs provides the greatest power. The most intuitive design is to conduct a two-arm trial between A and Z directly, but rigorous evidence is needed to validate that intuition. When we analyze the new trial together with the existing evidence from a network of trials, it is possible to gain power as indirect evidence is introduced. Another motivation to consider other types of design is that the old treatment A in the COI may be expensive or practically hard to implement. For example, perhaps treatment A is an antibiotic with a longer withholding period compared to treatment B, so although legal and feasible, it would not be preferred by the farm staff for a trial. Another rationale could be that treatment B is already used at the planned trial site, and implementing two novel treatments (A and Z) at the trial site is a barrier to the conduct of the trial. In such situations, we would look for alternative designs that include another relatively old treatment to avoid the higher resource cost associated with A while still providing a reliable estimate of the relative effect size between A and Z. As a consequence of these motivations, researchers could be interested in comparing the power of three possible trial designs: 1) direct two-arm trial: conduct a new two-arm trial between A and Z; 2) three-arm trial: conduct a new three-arm trial among A, B, and Z; 3) indirect two-arm trial: conduct a new two-arm trial between B and Z, where B is another old treatment.
Our aim is to provide guidelines for investigators to decide the most powerful trial design among the three candidates. We develop formulas for both continuous and binary outcomes and investigate whether borrowing information from the existing evidence can increase power. The three trial designs are compared based on their maximum achievable power under a fixed total sample size, with the sample size allocation optimized to minimize the variance and thereby maximize the power. A simulation study is then conducted to illustrate the power differences among the three candidate trial designs. By doing so, we hope to provide valuable insights into designing future trials and facilitate the efficient use of existing resources.
Methods
In this section, we introduce the variance formulas for the three designs under two types of outcome data, continuous and binary. The comparison of power among the three designs is achieved by comparing the variance of the estimated effect size.
The following notation is used for both types of outcomes. Suppose our COI is between treatments A and Z, where A is an old treatment and Z is a new treatment. Treatment B is another old treatment in the network. Let \(d_{AZ, two}\), \(d_{AZ, two, indirect}\), and \(d_{AZ, three}\) denote the relative effect size between treatments A and Z in the direct two-arm trial, indirect two-arm trial, and three-arm trial, respectively, and let \(\hat{d}_{AZ, two}\), \(\hat{d}_{AZ, two, indirect}\), and \(\hat{d}_{AZ, three}\) denote the corresponding estimates. Let \(n_i\) denote the sample size for treatment group i, \(i\in \{A,B,Z\}\). We use \(\sigma ^2_{AB,old}\) to denote the variance of the estimated effect size between treatments A and B from the existing NMA.
Continuous outcome
Assume we have a two-arm trial comparing treatments A and Z with a total sample size of n. Suppose the outcome data are continuous, such as a production metric like average daily gain or milk production. In the continuous case, we use the mean difference in the outcome to represent the relative effect size. The variance of \(\hat{d}_{AZ,two}\) can be written as
where \(\sigma ^2_{AZ}\) is the variance of the response for each treatment group under the homogeneous variance assumption and \(\hat{\sigma }^2_{AZ}\) is its estimate. The optimal allocation is \(n_A=n_Z=\frac{n}{2}\), and the minimal value of \(\text {Var}(\hat{d}_{AZ,two})\) is
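The displayed equations in this passage were rendered as images in the original; under the definitions above, they presumably read:

```latex
\operatorname{Var}(\hat{d}_{AZ,two})
  = \hat{\sigma}^2_{AZ}\left(\frac{1}{n_A}+\frac{1}{n_Z}\right),
\qquad
\min_{n_A+n_Z=n}\operatorname{Var}(\hat{d}_{AZ,two})
  = \frac{4\hat{\sigma}^2_{AZ}}{n}.
```

The minimum follows because \(1/n_A+1/n_Z\) is minimized by the balanced allocation when the sum \(n_A+n_Z\) is fixed.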
Suppose instead we conduct a two-arm trial with treatments B and Z with a total sample size of n; the comparison between treatments A and Z can then be achieved using the indirect estimate obtained from adding the new trial data to the existing network via NMA. The variance of \(\hat{d}_{AZ, two, indirect}\) can be expressed as
where \(\sigma ^2_{BZ}\) is the variance of response for each treatment group under the homogeneous variance assumption and \(\hat{\sigma }^2_{BZ}\) is the estimate. \(\text {Var}(\hat{d}_{AZ,two, indirect})\) reaches its minimum when \(n_B = n_Z = \frac{n}{2}\) and its minimum is
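As a hedged reconstruction of the omitted displays (images in the original): the indirect estimate combines the new B-Z trial with the existing A-B evidence, \(\hat{d}_{AZ,two,indirect} = \hat{d}_{AB,old} + \hat{d}_{BZ}\), so the two variance components add:

```latex
\operatorname{Var}(\hat{d}_{AZ,two,indirect})
  = \hat{\sigma}^2_{BZ}\left(\frac{1}{n_B}+\frac{1}{n_Z}\right) + \sigma^2_{AB,old},
\qquad
\min_{n_B+n_Z=n}\operatorname{Var}(\hat{d}_{AZ,two,indirect})
  = \frac{4\hat{\sigma}^2_{BZ}}{n} + \sigma^2_{AB,old}.
```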
Finally, suppose we conduct a three-arm trial with treatments A, B, and Z with a total sample size of n. The variance of \(\hat{d}_{AZ, three}\) when we analyze the new trial together with the existing network by NMA is
where \(\sigma ^2\) is the variance of response for each treatment group in the threearm trial under the homogeneous variance assumption and \(\hat{\sigma }^2\) is the estimate.
For any given sample size \(n_A\), \(n_B\) and \(n_Z\), \(\text {Var}(\hat{d}_{AZ,three})\) is always bigger than
For a fixed total sample size \(n = n_A+n_B+n_Z\), \(\text {Var}(\hat{d}_{AZ,three,0})\) reaches the minimum when \(n_Z = n_A+n_B = \frac{n}{2}\) and the minimal value of \(\text {Var}(\hat{d}_{AZ,three,0})\) is
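The lower bound \(\operatorname{Var}(\hat{d}_{AZ,three,0})\), whose display was an image in the original, drops the contribution of the existing network; consistent with the binary analogue \(v_{3,0}\) given later, it presumably simplifies to

```latex
\operatorname{Var}(\hat{d}_{AZ,three,0})
  = \hat{\sigma}^2\left(\frac{1}{n_A+n_B}+\frac{1}{n_Z}\right),
\qquad
\min_{n_A+n_B+n_Z=n}\operatorname{Var}(\hat{d}_{AZ,three,0})
  = \frac{4\hat{\sigma}^2}{n},
```

attained at \(n_Z=n_A+n_B=\frac{n}{2}\), matching the stated minimum.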
By the homogeneous variance assumption for each treatment group, we have \(\sigma _{AZ}^2 = \sigma _{BZ}^2 = \sigma ^2\) so that \(\hat{\sigma }_{AZ}^2 = \hat{\sigma }_{BZ}^2 = \hat{\sigma }^2\). By Eqs. 1, 2, and 3, we have the following two inequalities
To summarize, we have
Given that the total sample size is fixed at n, the minimum variance of the estimated effect size between treatments A and Z for the direct two-arm trial is the smallest among the three types of design. In other words, it is unnecessary to conduct a three-arm trial or an indirect two-arm trial in the continuous case for the purpose of reducing variance or increasing power. This result is independent of the configuration of the network of trials, i.e., the number of trials for each treatment or the effect size of any pairwise comparison of A, B, or Z.
Binary outcome
Assume we have a two-arm trial comparing treatments A and Z with a total sample size of n. Suppose the outcome is binary, such as a disease event. For binary data, we usually use the log odds ratio between two groups to represent the relative effect size. Let \(p_i\) denote the estimated probability of an event occurring in treatment group i, \(i\in \{A,B,Z\}\). The variance of \(\hat{d}_{AZ,two}\) can be written as
By calculating the first derivative and setting it to 0, the optimal sample size allocation minimizing \(\text {Var}(\hat{d}_{AZ,two})\) is
The minimal value of \(\text {Var}(\hat{d}_{AZ,two})\) with a fixed total sample size of n is
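Writing \(q_i=p_i(1-p_i)\) as in the notation introduced below, the three displays in this passage (images in the original) presumably read:

```latex
\operatorname{Var}(\hat{d}_{AZ,two})
  = \frac{1}{n_A q_A} + \frac{1}{n_Z q_Z},
\qquad
n_A^{*} = \frac{n\sqrt{q_Z}}{\sqrt{q_A}+\sqrt{q_Z}},\quad n_Z^{*}=n-n_A^{*},
\qquad
\min \operatorname{Var}(\hat{d}_{AZ,two})
  = \frac{\left(1/\sqrt{q_A}+1/\sqrt{q_Z}\right)^2}{n}.
```

Setting the derivative of \(\frac{1}{n_A q_A}+\frac{1}{(n-n_A) q_Z}\) to zero gives \((n-n_A)/n_A=\sqrt{q_A/q_Z}\), i.e., more subjects go to the arm with the smaller \(q_i\).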
Suppose we conduct a new two-arm trial with treatments B and Z with a total sample size of n. The variance of the estimated effect size between treatments A and Z, obtained by analyzing the new trial together with the existing network by NMA, can be expressed as
Similar to Eq. 5, we have the minimal value of \(\text {Var}(\hat{d}_{AZ,two, indirect})\) to be
Suppose we conduct a new three-arm trial with treatments A, B, and Z with a total sample size of n. The variance of \(\hat{d}_{AZ, three}\) when we analyze the new trial together with the existing network by NMA is:
To determine whether there exist any conditions under which the indirect two-arm or three-arm trial results in a smaller variance than the direct two-arm trial (unlike the continuous-outcome case), we utilize Eqs. (4)-(6). For simplicity, the formulas are rewritten as below:
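The rewritten minimum-variance expressions for the two two-arm designs (shown as an image in the original) are presumably

```latex
v_1 = \frac{1}{n_A q_A} + \frac{1}{n_Z q_Z},
\qquad
v_2 = \frac{1}{n_B q_B} + \frac{1}{n_Z q_Z} + \sigma^2_{AB,old},
```

with \(v_3\) (from Eq. 6) denoting the three-arm NMA variance, whose exact form depends on the existing network; the argument below only uses its lower bound \(v_{3,0}\).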
where \(q_i = p_i(1-p_i)\) for \(i \in \{A, B, Z\}\). Let \(v_{3,0} = \frac{1}{n_A q_A} + \frac{1}{n_Z q_Z} - \frac{1}{n_A^2q_A^2 (\frac{1}{n_A q_A} + \frac{1}{n_B q_B})}\). It is straightforward to see that \(v_3 > v_{3,0}\).
Under the condition that \(q_A \ge q_B\), we have the relationship between \(v_1\) and \(v_2\) as follows:
Let \(v_{3,0}' = \frac{1}{n_Aq_A} + \frac{1}{n_Zq_Z} - \frac{1}{n_A^2q_A^2 (\frac{1}{n_A q_A} + \frac{1}{n_B q_A})}\). Under the condition that \(q_A \ge q_B\), it is obvious that \(v_{3,0} \ge v_{3,0}'\). \(v_{3,0}'\) can be simplified as
Sedrakyan’s inequality [3] states that for any reals \(a_1, \cdots , a_n\) and positive reals \(b_1, \cdots , b_n\), we have \(\sum _{i=1}^n\frac{a_i^2}{b_i} \ge \frac{(\sum _{i=1}^n a_i)^2}{\sum _{i=1}^n b_i}\). From the expression of \(v_{3,0}'\), we have \(a_1 = \frac{1}{\sqrt{q_A}}\), \(a_2 = \frac{1}{\sqrt{q_Z}}\), \(b_1 = nn_Z\), and \(b_2 = n_Z\), therefore we have
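Spelling out this application of Sedrakyan's inequality (an assumption-consistent reconstruction of the omitted display): since \(v_{3,0}'\) simplifies to \(\frac{1}{(n-n_Z)q_A}+\frac{1}{n_Z q_Z}\) with \(n_A+n_B=n-n_Z\), we obtain

```latex
v_{3,0}'
  = \frac{(1/\sqrt{q_A})^2}{n-n_Z} + \frac{(1/\sqrt{q_Z})^2}{n_Z}
  \;\ge\; \frac{\left(1/\sqrt{q_A}+1/\sqrt{q_Z}\right)^2}{(n-n_Z)+n_Z}
  = \frac{\left(1/\sqrt{q_A}+1/\sqrt{q_Z}\right)^2}{n},
```

and the right-hand side is exactly the minimum of \(v_1\) under the optimal allocation.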
With the above, we have \(v_{3,0}' \ge v_1\), \(v_{3,0} \ge v_{3,0}'\), and \(v_3 > v_{3,0}\). To sum up,
By Eqs. (8) and (9), we find that \(v_2 > v_1\) and \(v_3 > v_1\) when \(q_A \ge q_B\). That is, under the condition \(p_A(1-p_A) \ge p_B(1-p_B)\), the minimum variance of the estimated effect size between treatments A and Z for the direct two-arm trial is the smallest among all three types of design. Therefore, it is reasonable to choose the direct two-arm trial as the trial design under the condition \(p_A(1-p_A) \ge p_B(1-p_B)\) for the consideration of power.
However, for the scenario where \(p_A(1-p_A) < p_B(1-p_B)\), the relationship among \(v_1\), \(v_2\), and \(v_3\) is uncertain, indicating that any of the three trial designs could have the smallest minimum variance depending on the parameters (n, \(p_A\), \(p_B\), and \(\sigma ^2_{AB,old}\)), as shown later by the simulation study. Recall that the minimum variance refers to the minimum variance of the estimated effect size between treatments Z and A achievable by the optimal allocation of the total sample size n. Each trial design with a fixed n has its own minimum variance, and the smallest among the three is called the smallest minimum variance.
Optimal sample allocation with a fixed total sample size
In the Binary outcome section, we show that for each type of design with a fixed total sample size, the minimum variance of the estimated effect size can be achieved by altering the sample size allocation. Some variance formulas, such as Eq. 7, are too complex to yield a closed-form solution for the optimal sample size allocation given a fixed total sample size. For others, such as Eq. 4, even though the optimal allocation for the direct and indirect two-arm trials is straightforward to calculate, a final step remains because the sample size for each treatment group must be an integer. Considering these factors, we obtain the optimal sample size allocation with a fixed total sample size and a binary outcome by solving the following optimization problems:
For the direct twoarm trial,
For the indirect twoarm trial,
For the threearm trial,
We set the constraint on the minimum number of subjects per treatment group to 10 to ensure that statistical inference on the new trial is based on a reasonable number of subjects in each group. Without this constraint, a sample size of 1 is mathematically possible but not practically feasible. The constraint value of 10 can be changed to other appropriate numbers according to the practical trial scenario. Closed-form solutions do not exist for minimizing these functions, so nonlinear optimization methods are used to obtain the optimal allocation. In particular, we utilize differential evolution optimization [4]. This optimization method searches over a continuous space, so the integer solution can be obtained by enumerating all integers around the global solution.
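As an illustrative sketch (not the authors' R implementation), the continuous search plus integer enumeration can be carried out with SciPy's `differential_evolution` for the direct two-arm binary design; the function name and the enumeration step are our own choices:

```python
import numpy as np
from scipy.optimize import differential_evolution

def optimal_two_arm_allocation(n, p_a, p_z, min_per_arm=10):
    """Minimize Var = 1/(n_A q_A) + 1/(n_Z q_Z) over the allocation n_A,
    subject to n_A + n_Z = n and at least `min_per_arm` subjects per arm."""
    q_a, q_z = p_a * (1 - p_a), p_z * (1 - p_z)

    def variance(x):
        n_a = x[0]
        return 1.0 / (n_a * q_a) + 1.0 / ((n - n_a) * q_z)

    # Continuous global search over n_A, then enumerate nearby integers,
    # mirroring the differential-evolution-plus-enumeration approach.
    res = differential_evolution(variance,
                                 bounds=[(min_per_arm, n - min_per_arm)],
                                 seed=1)
    candidates = {int(np.floor(res.x[0])), int(np.ceil(res.x[0]))}
    candidates = {c for c in candidates if min_per_arm <= c <= n - min_per_arm}
    n_a = min(candidates, key=lambda c: variance([c]))
    return n_a, n - n_a, variance([n_a])
```

For example, with n = 100, \(p_A = 0.166\), and \(p_Z = 0.35\), the search allocates more subjects to the arm with the smaller \(q_i\), as the closed-form analysis predicts.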
Power formula
In the previous two subsections, we presented the variance formulas for continuous and binary outcomes. The ultimate goal of reducing the variance is to maximize the power. In this section, we present the formula for estimating the probability that the effect size on the log odds ratio scale between two treatments will be statistically significant under a specific alternative hypothesis, i.e., the power. Let \(\mu _{AZ}\) be the true effect size between treatments A and Z. Let the null hypothesis \(H_0\) for the comparison A-Z be \(\mu _{AZ}=0\) and the alternative hypothesis \(H_1\) be \(\mu _{AZ}\ne 0\); the expression for the power is given by
where s.d. denotes the population standard deviation of the effect size \(\mu _{AZ}\), often replaced with the standard error of the estimated effect size; \(\Phi (\cdot )\) is the standard normal cumulative distribution function; \(\alpha\) is the significance level; \(z_{\alpha /2}\) is the upper \(\alpha /2\)th quantile of the standard normal distribution, which is used here to control the overall type I error of the testing procedure at level \(\alpha\).
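The power formula itself was an image in the original; it is presumably the standard two-sided normal-approximation power, \(\Phi(\mu_{AZ}/\mathrm{s.d.}-z_{\alpha/2})+\Phi(-\mu_{AZ}/\mathrm{s.d.}-z_{\alpha/2})\). A minimal sketch under that assumption (the function name is ours):

```python
from scipy.stats import norm

def power_two_sided(mu_az, sd, alpha=0.05):
    """Two-sided power: P(reject H0: mu_AZ = 0) when the estimator of the
    effect size is approximately Normal(mu_AZ, sd^2)."""
    z = norm.ppf(1 - alpha / 2)  # upper alpha/2 quantile
    return norm.cdf(mu_az / sd - z) + norm.cdf(-mu_az / sd - z)
```

At \(\mu_{AZ}=0\) the expression reduces to \(\alpha\), confirming the type I error is controlled.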
Simulation
Dataset description
A previously published network of interventions for the antibiotic treatment of Bovine Respiratory Disease (BRD) in feedlot cattle is used as an illustrative example for the problem of interest [5]. The network comprises 98 trials and 13 treatments in total. Most trials contain two arms and eight trials contain three arms. The network plot is shown in Fig. 1. Arm-level data are available and the outcome is a dichotomous health event. To compare treatments, the log odds ratios for pairwise comparisons are calculated. Our focus for illustration purposes is the antibiotic tulathromycin (TULA) and a combination product of sulfamethoxazole/trimethoprim (TRIM) or ceftiofur sodium (CEFTS). All products are administered according to the manufacturer's instructions. More details about these data are available in the original publication [5].
Simulation
In the Methods section, the relationship among the minimum variances of the three types of design is conclusive when the outcome is continuous. For binary data, the ordering is determinable when \(p_A(1-p_A) \ge p_B(1-p_B)\) but not when \(p_A(1-p_A) < p_B(1-p_B)\). To illustrate the possibility for each of the two alternatives to be the best design regarding power under the condition \(p_A(1-p_A) < p_B(1-p_B)\), two simulation studies with binary outcomes are conducted in this section. Two scenarios are included to illustrate that the power gain depends on the comparison of interest, through the different disease risks and the extent of information (prior trials) in the network. The simulations compare the maximum power that each trial design can achieve given the fixed total sample size. Notably, the key distinction between Simulation I and Simulation II lies in the selection of treatment B for the alternative trial designs. We present these specific simulation scenarios to demonstrate the potential for each alternative to emerge as the optimal trial design among the three candidate options. The selection of treatment B is based on an initial exploration of the power formula, providing valuable insights into the potential advantages of each trial design.
Simulation I: example scenario for the indirect two-arm trial to be the most powerful trial design
Assume our COI is between treatment TULA and treatment Z, and the three trial design options are available (Fig. 1). In the three-arm trial, treatment CEFTS is selected from the existing network as the third treatment. For convenience of notation, we denote TULA and CEFTS as A and B in this subsection. From the NMA of the existing network, the estimated risks of A and B are 0.166 and 0.430, which ensures that the condition \(p_A(1-p_A) < p_B(1-p_B)\) holds.
We set the total sample size of the new trial to 80/100/120 and set 4 different values from 0.35 to 0.50 as the risk of the new treatment, Z. For each scenario with the risk of Z equal to \(p_Z\) and total sample size n, the process is conducted as below:

1. From a network meta-analysis of the existing network, the risk of treatment j is estimated and denoted as \(p_j\).

2. Analyze the direct two-arm trial between A and Z.

(a) Find the optimal allocation (\(n_A\), \(n_Z\)) by solving the optimization problem in Eq. 10.

(b) Generate data representing the new trial by sampling \(r_i\) from Binom(\(n_i\), \(p_i\)), \(i \in \{A, Z\}\).

(c) Apply exact logistic regression to the data and extract the p-value.

(d) Use a 0-1 indicator to denote whether there is a significant difference between A and Z (\(\alpha =0.05\)).

(e) Repeat steps (b)-(d) 50,000 times and calculate the proportion of indicators equal to 1 to obtain the simulated power.

3. Analyze the indirect two-arm trial between B and Z.

(a) Find the optimal allocation (\(n_B\), \(n_Z\)) by solving the optimization problem in Eq. 11.

(b) Generate data representing the new trial by sampling \(r_i\) from Binom(\(n_i\), \(p_i\)), \(i \in \{B, Z\}\).

(c) Add the data representing the new trial to the existing network as a row of study-level data.

(d) Apply network meta-analysis to the combined data and extract the p-value.

(e) Use a 0-1 indicator to denote whether there is a significant difference between A and Z (\(\alpha =0.05\)).

(f) Repeat steps (b)-(e) 50,000 times and calculate the proportion of indicators equal to 1 to obtain the simulated power.

4. Analyze the three-arm trial with A, B, and Z.

(a) Find the optimal allocation (\(n_A\), \(n_B\), \(n_Z\)) by solving the optimization problem in Eq. 12.

(b) Generate data representing the new trial by sampling \(r_i\) from Binom(\(n_i\), \(p_i\)), \(i \in \{A, B, Z\}\).

(c) Add the data representing the new trial to the existing network as a row of study-level data.

(d) Apply network meta-analysis to the combined data and extract the p-value.

(e) Use a 0-1 indicator to denote whether there is a significant difference between A and Z (\(\alpha =0.05\)).

(f) Repeat steps (b)-(e) 50,000 times and calculate the proportion of indicators equal to 1 to obtain the simulated power.
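The repeated sampling-and-testing steps for the direct two-arm design can be sketched as below; Fisher's exact test is used here as a simple stand-in for the exact logistic regression in the paper, and the function name, replication count, and seed are illustrative:

```python
import numpy as np
from scipy.stats import fisher_exact

def simulate_direct_power(n_a, n_z, p_a, p_z, alpha=0.05, n_rep=1000, seed=2023):
    """Monte-Carlo power of the direct two-arm A vs. Z comparison."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rep):
        r_a = rng.binomial(n_a, p_a)  # events observed in arm A
        r_z = rng.binomial(n_z, p_z)  # events observed in arm Z
        # Fisher's exact test on the 2x2 events/non-events table.
        _, p_value = fisher_exact([[r_a, n_a - r_a], [r_z, n_z - r_z]])
        hits += p_value < alpha
    return hits / n_rep  # proportion of significant replicates
```

A large risk difference should yield high simulated power, while equal risks should yield a rejection rate near (or, for an exact test, below) \(\alpha\).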
Simulation II: example scenario for the three-arm trial to be the most powerful trial design
Assume our COI is between treatment TULA and treatment Z. We set 9 values from 0.35 to 0.43 as the risk of the new treatment, Z. For each scenario with the risk of Z equal to \(p_Z\) and total sample size \(n=100\), the process of Simulation II is the same as in Simulation I, except for the choice of treatment B (Fig. 2). In this simulation, treatment A is the same as in Simulation I, while treatment B is TRIM. From the NMA of the existing network, the estimated risks of A and B are 0.166 and 0.553, which satisfies the condition \(p_A(1-p_A) < p_B(1-p_B)\).
Results
The outputs from Simulation Study I are in Table 1. Each row represents a scenario with a different risk of Z, \(p_Z\), and total sample size, N. Each scenario records the optimal sample size allocation, given a fixed total sample size of the new trial, for each trial design from left to right: (1) direct two-arm trial; (2) three-arm trial; (3) indirect two-arm trial. The power of each trial design in each scenario can be found in the same row. The powers of trial designs (2) and (3) both surpass that of trial design (1). In other words, a three-arm trial with TULA, CEFTS, and the new treatment Z, or a two-arm trial with CEFTS and Z, is more powerful than a two-arm trial with TULA and Z when our COI is between TULA and Z. Take the fifth row, for example: when the risk of Z is 0.35 and the total sample size for the new trial is fixed at 100, the simulation shows an additional \(8.2\%\) power when we select trial design (2) and an additional \(11.8\%\) power when we select trial design (3), compared with trial design (1).
The outputs from Simulation Study II are in Table 2. Each row represents a scenario in Simulation Study II with a different risk of Z, \(p_Z\). The left part records the optimal sample size allocation, given a fixed total sample size of 100, to maximize the power for each trial design. The power of each trial design in each scenario can be found in the same row. As in Table 1, trial designs (1) to (3) represent the direct two-arm trial, three-arm trial, and indirect two-arm trial, respectively. In Simulation Study II, trial design (2) has the largest power among the three, meaning that conducting a three-arm trial is the best option in terms of power when our COI is between TULA and Z and the other available treatment in the existing network is TRIM. Take the first row, for example: when the risk of Z is 0.35 and the optimal allocation is applied for each trial design to reach its best power, trial design (2) gains \(2.0\%\) and \(3.7\%\) in simulated power compared with trial designs (1) and (3), respectively.
Discussion
The direct two-arm trial is not always the best
Suppose our COI is between one new treatment (Z) and one old treatment (A) from the existing network; a two-arm trial is commonly the first choice. However, with network meta-analysis, the possibilities for the trial design are expanded. In this paper, we explore three types of trial design: the direct two-arm trial, the indirect two-arm trial, and the three-arm trial. In both the indirect two-arm and three-arm trials, we introduce another old treatment (B) from the existing network to leverage the existing information on the comparison between A and B to inform the comparison between A and Z. From the Methods (Binary outcome section), we conclude that when \(p_A(1-p_A) \ge p_B(1-p_B)\), the direct two-arm trial between A and Z is always the best choice in terms of power. However, when \(p_A(1-p_A) < p_B(1-p_B)\), it is possible to leverage information from the NMA to gain additional power by using either the indirect two-arm trial between Z and B or the three-arm trial among A, B, and Z, as shown in Simulation Studies I and II. Additionally, even when power is not the primary consideration, choosing an indirect two-arm trial or a three-arm trial rather than the direct two-arm trial can be attractive when treatment A is high-cost. By replacing some or all of treatment A with an appropriate treatment B, we can reduce the cost and gain more power at the same time.
How should we choose the optimal trial design?
We have seen that an indirect two-arm trial or a three-arm trial can be more powerful than a direct two-arm trial in certain scenarios. How should we choose between these two candidates? In our two sets of simulations, the indirect two-arm trial has larger power than the three-arm trial in Simulation Study I, while the reverse holds in Simulation Study II. The two simulations were selected from the exploration results because each exemplifies a scenario in which one of the alternative trial designs has the maximal statistical power. In the exploration, we examined all possible permutations of treatments A and B in the existing network and calculated the power gain of the two alternative trial designs based on the power formula. We observed that when \(\sigma ^2_{AB, old}\) is smaller, the indirect two-arm trial design tends to outperform the three-arm trial design. This is because smaller values of \(\sigma ^2_{AB, old}\) indicate more reliable existing evidence, so it is advantageous to allocate the sample sizes to treatments Z and B, given the robust estimate between treatments A and B. Moreover, the choice of total sample size n can also flip the choice of optimal trial design: in certain exploration scenarios, the optimal trial design changes from an indirect two-arm trial to a three-arm trial. This shift highlights the potential utility of incorporating a direct A-to-Z comparison within the new trial, supplementing the indirect estimation between A and B obtained from the existing NMA. Beyond the total sample size (n) and \(\sigma ^2_{AB, old}\), additional factors, including the risks associated with treatments A and B, also influence the variance of the estimated effect size between treatments A and Z.
The interplay among these elements, as shown in the power formula, precludes definitive guidance in the form of threshold values that could singularly guide the selection among the candidate trial designs. Therefore, in a real application, we advise researchers to use our power formula and optimization to compare the optimal power each trial design can reach and make the decision accordingly. It is general practice to specify values of the parameters (risks) in a statistical power calculation. These values may come from estimates in previous studies, experts' opinions, or research expectations. In case there is uncertainty in these parameter values, multiple calculations over a range of values can be performed and compared. Note that there are other factors to consider when choosing between an indirect two-arm trial and a three-arm trial, given that both are superior to a direct two-arm trial. For example, some clinical trials may be regulated by protocols that require having both treatments of the COI in the trial; under that circumstance, an indirect two-arm trial is not feasible.
An alternative for determining the optimal trial design is to introduce an adaptive design, which utilizes results accumulating in the trial to modify the trial's course. Unlike the traditional approach of predefining a fixed trial design, adaptive designs are more efficient, informative, and flexible [6]. We can start with a three-arm trial design and then modify the ongoing trial by gradually adding sample size to a certain group according to the interim results. Gradually allocating more subjects to treatment group A would mimic the performance characteristics of a direct two-arm trial, while increasing the sample size allocated to treatment group B yields behavior that closely resembles an indirect two-arm trial. In the context of adaptive design, data are repeatedly analyzed, so we need to ensure that statistical inferences are conducted with a controlled type I error rate. To facilitate this, we can draw from established methods proposed by previous researchers. For example, Lu et al. [7] proposed a method to design nested subpopulations that maximizes study power and keeps the overall type I error rate under control. Leveraging their methods can help us calculate the optimal sample size and the decision threshold for each subpopulation, providing a foundation for the adaptive design. Moreover, some techniques used in trial monitoring, such as group sequential methods, can also be applied to adaptive designs under certain conditions, as shown by Zhang et al. [8]. In that way, each individual hypothesis can be tested at the full \(\alpha\) level to give the study maximum power, so we can decide which question to answer based on interim results and then answer it with maximum power using all the data.
Limitations
There are some limitations common to all NMA methodologies. The assumptions of NMA, such as independence, exchangeability, transitivity, and consistency, are required to be met, and they may not always be valid in real applications. Another limitation specific to this paper is that the possibility of gaining more power from trial designs other than the direct two-arm trial is network-dependent. There might not exist a suitable treatment B in the existing network that can bring more power when added to the trial design. For example, when all old treatments i have \(p_i(1-p_i) \le p_A(1-p_A)\), where our COI is between A and a new treatment, it is theoretically impossible to gain more power from the other two trial designs.
Future directions
We develop our proposed methodology under fixed-effect NMA because only one new trial involves the new treatment, so the between-study variation cannot be evaluated. One future direction could be planning a series of new trials and applying the same concept on top of random-effects NMA, which may be more interesting to some researchers as random-effects NMA is also widely used. Another future direction could be to consider additional COIs and explore how the formulas change and how they guide new trial planning in terms of the type of trial design, which is practical as well, since some researchers are interested in exploring multiple comparisons in a new trial.
Availability of data and materials
The R code and data used in this paper are available at https://github.com/fangshuye/Threearmtwoarm.
Abbreviations
RCT: Randomized controlled trial
NMA: Network meta-analysis
COI: Comparison of interest
BRD: Bovine respiratory disease
TULA: Tulathromycin
TRIM: Trimethoprim
CEFTS: Ceftiofur sodium
References
1. Salanti G, Nikolakopoulou A, Sutton AJ, Reichenbach S, Trelle S, Naci H, et al. Planning a future randomized clinical trial based on a network of relevant past trials. Trials. 2018;19(1):1-7.
2. Nikolakopoulou A, Mavridis D, Salanti G. Using conditional power of network meta-analysis (NMA) to inform the design of future clinical trials. Biom J. 2014;56(6):973-90.
3. Sedrakyan N. About the applications of one useful inequality. Kvant J. 1997;97(2):42-4.
4. Storn R, Price K. Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim. 1997;11(4):341-59.
5. O’Connor A, Yuan C, Cullen J, Coetzee J, Da Silva N, Wang C. A mixed treatment meta-analysis of antibiotic treatment options for bovine respiratory disease: an update. Prev Vet Med. 2016;132:130-9.
6. Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):1-15.
7. Lu Y, Zhou J, Xing L, Zhang X. The optimal design of clinical trials with potential biomarker effects: A novel computational approach. Stat Med. 2021;40(7):1752-66.
8. Zhang X, Jia H, Xing L, Chen C. Application of Group Sequential Methods to the 2-in-1 Design and Its Extensions for Interim Monitoring. Stat Biopharm Res. 2023. https://doi.org/10.1080/19466315.2023.2197402.
Acknowledgements
Not applicable.
Funding
None reported.
Author information
Authors and Affiliations
Contributions
FY proposed the method and wrote the code used to conduct the data analysis. CW coordinated the project team, assisted with the data analysis, and interpreted the procedure and results of the analysis. AOC provided the data and assisted with the data analysis. The manuscript was primarily prepared by FY, with secondary input from all other authors.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Ye, F., Wang, C. & O’Connor, A. Optimal trial design selection: a comparative analysis between two-arm and three-arm trials incorporating network meta-analysis for evaluating a new treatment. BMC Med Res Methodol 23, 267 (2023). https://doi.org/10.1186/s1287402302089y