
Impact of misspecifying the distribution of a prognostic factor on power and sample size for testing treatment interactions in clinical trials

Abstract

Background

Interaction in clinical trials presents challenges for design and appropriate sample size estimation. Here we considered interaction between treatment assignment and a dichotomous prognostic factor with a continuous outcome. Our objectives were to describe differences in power and sample size requirements across alternative distributions of a prognostic factor and magnitudes of the interaction effect, describe the effect of misspecification of the distribution of the prognostic factor on the power to detect an interaction effect, and discuss and compare three methods of handling the misspecification of the prognostic factor distribution.

Methods

We examined the impact of the distribution of the dichotomous prognostic factor on power and sample size for the interaction effect using traditional one-stage sample size calculation. We varied the magnitude of the interaction effect, the distribution of the prognostic factor, and the magnitude and direction of the misspecification of the distribution of the prognostic factor. We compared quota sampling, modified quota sampling, and sample size re-estimation using conditional power as three strategies for ensuring adequate power and type I error in the presence of a misspecification of the prognostic factor distribution.

Results

The sample size required to detect an interaction effect with 80% power increases as the distribution of the prognostic factor becomes less balanced. Misspecification such that the actual distribution of the prognostic factor was more skewed than planned led to a decrease in power, with the greatest loss in power seen as the distribution of the prognostic factor became less balanced. Quota sampling was able to maintain the empirical power at 80% and the empirical type I error at 5%. The performance of the modified quota sampling procedure was related to the percentage of trials switching to the quota sampling scheme. Sample size re-estimation using conditional power improved the empirical power under negative misspecifications (i.e. skewed distributions) but was not able to reach the target of 80% in all situations.

Conclusions

Misspecifying the distribution of a dichotomous prognostic factor can greatly impact the power to detect an interaction effect. Modified quota sampling and sample size re-estimation using conditional power improve the power when the distribution of the prognostic factor is misspecified. Quota sampling is simple and can prevent misspecification of the prognostic factor distribution entirely, while maintaining power and type I error.


Background

Randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy of a treatment or regimen. While for most RCTs the primary hypothesis is the overall comparison of two (or more) treatments, there has been a continuing discussion over the last two decades about the use of subgroup analyses and formal tests of interaction in RCTs [1–7]. According to the most recent CONSORT statement, published in 2010, subgroup analyses should be pre-planned and accompanied by a formal test of interaction [8]. However, systematic reviews of medical and surgical RCTs have shown that many subgroup analyses in RCTs have been neither pre-planned nor accompanied by a formal test of interaction [1–4]. The percentage of trials reporting their results using a formal interaction test was 13% in 1985 [1], 43% in 1997 [2], 6% from 2000 to 2003 [3], and 27% from 2005 to 2006 [4].

Investigators planning subgroup analyses within the framework of RCTs are encouraged to design the trials to detect interaction effects using a formal interaction test. While statistical software such as nQuery Advisor and SAS can handle power and sample size calculations for detecting interaction effects, only a modest body of literature describes the effect that the magnitude of the interaction effect and the distribution of the prognostic factor have on power and sample size. Two articles by Brookes and colleagues showed that there is low power to detect an interaction, scaled as a contrast of cell means, when a study is powered only to detect the main effect, unless the interaction effect is nearly twice as large as the main effect [6, 7]. They also showed that power for the interaction test is maximized when the prognostic factor is distributed evenly.

There are many instances in which investigators may be interested in studying the interaction between a prognostic factor and treatment. For example, suppose an investigator is interested in studying treatments for improving functional limitation in persons with meniscal tear and concomitant knee osteoarthritis (OA). For this condition, the two treatment choices are arthroscopic partial meniscectomy (APM) and physical therapy (PT). However, the investigator also hypothesizes that the effect of APM compared to PT on functional limitation varies by knee OA severity. In this case, knee OA severity is the prognostic factor, and how it is distributed will impact the sample size required to detect the interaction between treatment and OA severity.

This article has three objectives. First, we sought to describe differences in power and sample size requirements across alternative distributions of a prognostic factor and magnitudes of the interaction effect. Second, we describe how misspecification of the prognostic factor distribution affects the power to detect an interaction effect. Third, we describe and discuss three methods of handling misspecification of the prognostic factor distribution by potentially readjusting the sample size or sampling strategy during a trial. Two of these methods are sampling-based and do not require interim statistical testing of the outcome. The third method uses a two-stage adaptive design approach that re-estimates the sample size based on the conditional power at an interim analysis at which 50% of the patients have been enrolled.

Methods

Overview

We conducted an analysis examining how different distributions of a dichotomous prognostic factor affect the power (and the sample size needed to obtain 80% power) to detect an interaction between the prognostic factor and treatment in RCTs. We also studied the impact of misspecifying the distribution of the prognostic factor (positively and negatively) on power and sample size. We varied the magnitude of the interaction effect, the distribution of the prognostic factor, and the magnitude of the misspecification. Lastly, we compared three methods for ensuring appropriate overall power and type I error under misspecification of the distribution of the prognostic factor: quota sampling, modified quota sampling, and sample size re-estimation using conditional power.

Specification of key parameters used in the paper

Treatment variable

The treatment variable was distributed as a binomial variable (active vs. placebo) with probability 0.5. For the purposes of this paper the treatment variable was assumed to always have a balanced distribution (i.e. 50% on level 1, receiving active treatment, and 50% on level 2, receiving placebo). For illustration purposes, we assumed that APM was the active treatment and that PT was the placebo.

Prognostic factor

The prognostic factor was defined as a dichotomous variable, with k_j representing the jth level of the prognostic factor. When referring to the distribution of the prognostic factor we indicate the percentage in the k_1 level, defined as p_1. We varied p_1 from 10% to 50% in 10% increments.

Misspecification of the prognostic factor

The misspecification of the prognostic factor was defined by the parameter q. The misspecification could be positive or negative, with negative misspecification implying less balance (more skew) and positive misspecification implying more balance (less skew). For example, if the planned distribution of the prognostic factor was 20% and the actual distribution was 25%, then the misspecification of the prognostic factor (q) was +5%. Possible values of q were −15%, −5%, 0 (i.e. no misspecification), +5%, and +15%.

Outcome variable

We assumed that our outcome variable was continuous and normally distributed. In our example, the outcome can be interpreted as the improvement in function after APM or PT as measured by a score or scale. We specified the mean improvement for all four possible combinations of treatment and the prognostic factor. We considered two different values (25 and 15) for the mean improvement in the active/k_1 treatment/prognostic factor combination (i.e. APM/mild knee OA severity). The mean improvements in the active/k_2, placebo/k_1, and placebo/k_2 groups were held constant at 5, 5, and 0, respectively. We assumed a common standard deviation (σ) of 10 for all four combinations.

Magnitude of the interaction

We defined the magnitude of the interaction between prognostic factor and treatment effect according to the method of Brookes and colleagues [6, 7]. Let μ_ij be the mean improvement in the ith treatment and jth level of the prognostic factor. We then defined the treatment efficacy in the jth level of the prognostic factor as:

$$\delta_j = \mu_{1j} - \mu_{2j} \qquad (1)$$

We then defined the interaction effect (denoted as θ) as follows:

$$\theta = \delta_1 - \delta_2 = (\mu_{11} - \mu_{21}) - (\mu_{12} - \mu_{22}) \qquad (2)$$

Thus θ, which served as the basis of our choice of mean improvement values, varied as μ_11 varied. The magnitudes of the interaction effect that we considered were 15 and 5. The estimate of the interaction effect was defined as follows:

$$\hat{\theta} = (\bar{x}_{11} - \bar{x}_{21}) - (\bar{x}_{12} - \bar{x}_{22}) \qquad (3)$$

Then, the variance of the interaction effect under balanced treatment groups and prognostic factor distribution p_1 can be derived as follows (note that N equals the total sample size for the trial):

$$\begin{aligned}
\mathrm{VAR}\big(\hat{\theta}\big) &= \mathrm{VAR}\big[(\bar{x}_{11} - \bar{x}_{21}) - (\bar{x}_{12} - \bar{x}_{22})\big] = \mathrm{VAR}(\bar{x}_{11}) + \mathrm{VAR}(\bar{x}_{21}) + \mathrm{VAR}(\bar{x}_{12}) + \mathrm{VAR}(\bar{x}_{22}) \\
&= \frac{\sigma^2}{n_{11}} + \frac{\sigma^2}{n_{21}} + \frac{\sigma^2}{n_{12}} + \frac{\sigma^2}{n_{22}} = \frac{\sigma^2}{0.5\,p_1 N} + \frac{\sigma^2}{0.5\,(1-p_1)N} + \frac{\sigma^2}{0.5\,p_1 N} + \frac{\sigma^2}{0.5\,(1-p_1)N} \\
&= \frac{4\sigma^2}{p_1 N} + \frac{4\sigma^2}{(1-p_1)N} = \frac{4\sigma^2}{p_1(1-p_1)N}
\end{aligned} \qquad (4)$$

It is clear from this equation that as the prevalence of the prognostic factor (p_1) approaches 0.5 (a balanced distribution), the product p_1(1 − p_1) increases and the variance decreases, which implies that power increases for a fixed sample size.
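To make the dependence on p_1 concrete, here is a minimal sketch of equation 4 in Python (our illustration, with σ = 10 and N = 200 chosen arbitrarily; the variable names are ours, not the authors'):

```python
def var_interaction(sigma, p1, n_total):
    """Variance of the interaction estimate (equation 4) under balanced
    treatment assignment and prognostic factor prevalence p1."""
    return 4 * sigma**2 / (p1 * (1 - p1) * n_total)

# The variance shrinks as p1 approaches 0.5:
for p1 in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"p1 = {p1:.1f}: VAR = {var_interaction(10, p1, 200):.2f}")
# p1 = 0.1 gives 22.22; p1 = 0.5 gives 8.00
```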

Initial sample size for interaction effects

The sample size required for the ith treatment and jth prognostic factor level to detect the interaction effect described above under a balanced design (i.e. p_1 = 0.5), with a two-sided significance level of α and power equal to 1 − β, was previously published by Lachenbruch [9].

$$n_{ij} = \frac{4\sigma^2\,(z_{1-\beta} + z_{1-\alpha/2})^2}{\theta^2} \qquad (5)$$

In these formulas z_{1−β} represents the value at the 1 − β quantile of the standard normal distribution (where 1 − β is the theoretical power) and z_{1−α/2} represents the value at the 1 − α/2 quantile (where α is the probability of a type I error). Under a balanced design with p_1 = 0.5 we can simply multiply n_ij by four to obtain the total sample size, since there are four combinations of treatment and prognostic factor. A limitation of this formula is that it uses critical values from the standard normal distribution rather than the Student's t-distribution, even though most statistical tests of interaction are performed using a t-distribution. To account for this we calculated the total sample size required to detect an interaction effect with a two-sided significance level of α and power equal to 1 − β using the following iterative procedure (a code sketch follows the list):

  1. Use formula 5 (above) to calculate the sample size required for each combination of treatment and prognostic factor under a balanced design.

  2. Calculate a new sample size (n_ij*) required for each combination of treatment and prognostic factor under a balanced design using formula 6 below. In this formula the z-critical values have been replaced with t-critical values with n_ij − 1 degrees of freedom.

     $$n_{ij}^* = \frac{4\sigma^2\left(t_{1-\beta,\,n_{ij}-1} + t_{1-\alpha/2,\,n_{ij}-1}\right)^2}{\theta^2} \qquad (6)$$
  3. Set n_ij equal to n_ij* and repeat step 2.

  4. Repeat step 3 until n_ij* converges. This will usually occur after 2 or 3 iterations.

  5. Lastly, to correct for imbalance in the prognostic factor, multiply n_ij* by 1/(p_1(1 − p_1)) to obtain the final total sample size N. (With p_1 = 0.5 this reduces to multiplying by four, consistent with the balanced case above.)
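The procedure above can be sketched in a few lines of Python (a sketch under our reading of steps 1–5, using scipy; this is not the authors' code, and the n_ij − 1 degrees of freedom follow our reconstruction of formula 6):

```python
from math import ceil
from scipy.stats import norm, t

def interaction_sample_size(theta, sigma, p1, alpha=0.05, power=0.80):
    """Total sample size N to detect an interaction effect theta,
    following steps 1-5 (our sketch of the iterative t-correction)."""
    # Step 1: per-cell size from the normal-based formula (equation 5)
    mult = (norm.ppf(power) + norm.ppf(1 - alpha / 2)) ** 2
    n_ij = ceil(4 * sigma**2 * mult / theta**2)
    # Steps 2-4: iterate equation 6 with t critical values until convergence
    while True:
        tmult = (t.ppf(power, n_ij - 1) + t.ppf(1 - alpha / 2, n_ij - 1)) ** 2
        n_new = ceil(4 * sigma**2 * tmult / theta**2)
        if n_new == n_ij:
            break
        n_ij = n_new
    # Step 5: inflate for imbalance in the prognostic factor
    return ceil(n_ij / (p1 * (1 - p1)))

print(interaction_sample_size(15, 10, 0.5))  # 64 under a balanced factor
print(interaction_sample_size(15, 10, 0.2))  # 100 when p1 = 0.2
```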

Effect of misspecifying the distribution of the prognostic factor

The effect of misspecifying the distribution of the prognostic factor was evaluated using power curves. The formula used by Lachenbruch was extended to incorporate the Student's t-distribution [9]. Power for the interaction test by actual prevalence of the prognostic factor (p_1 + q) and magnitude of the interaction effect was calculated using equation 7 below, where Ψ is the cumulative distribution function of the Student's t-distribution with N − 4 degrees of freedom.

$$\mathrm{Power} = 1 - \Psi\!\left(t_{1-\alpha/2,\,N-4} - \sqrt{\frac{N\,(p_1+q)(1-p_1-q)\,\theta^2}{4\sigma^2}}\,\right) \qquad (7)$$
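Equation 7 translates directly into code. The following sketch (ours, not the authors') uses the central t CDF for Ψ; the illustrative N = 100 comes from applying the planning steps above with θ = 15, σ = 10, and a planned p_1 of 0.2:

```python
from math import sqrt
from scipy.stats import t

def interaction_power(n_total, p1, q, theta, sigma, alpha=0.05):
    """Power of the interaction test (equation 7) when the actual
    prevalence of the prognostic factor is p1 + q."""
    p_actual = p1 + q
    ncp = sqrt(n_total * p_actual * (1 - p_actual) * theta**2 / (4 * sigma**2))
    tcrit = t.ppf(1 - alpha / 2, n_total - 4)
    return 1 - t.cdf(tcrit - ncp, n_total - 4)

print(interaction_power(100, 0.2, 0.00, 15, 10))   # ~0.84: above 80% due to rounding up of N
print(interaction_power(100, 0.2, -0.05, 15, 10))  # ~0.75: below the target 80%
```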

Strategies for accounting for the misspecification of the distribution of the prognostic factor

Quota sampling

The quota sampling approach was performed using the following steps. First, for a given set of parameters, we determined the sample size needed to detect an interaction effect with 80% power. We then fixed the number of participants to be recruited at each level of the prognostic factor. For example, if the final total sample size was 200 and the planned distribution of the prognostic factor was 30% in the k_1 group and 70% in the k_2 group, then exactly 60 subjects would be recruited into the k_1 group and 140 into the k_2 group. This approach removes the variability in the sampling distribution: the observed distribution of the prognostic factor always matches the planned distribution, so misspecification cannot occur. However, this method may require turning away potential subjects because one level of the prognostic factor is already filled, delaying trial completion. It may also reduce the external validity of the overall treatment results, as the trial subjects can become less representative of the unselected population of interest. Because of these limitations we also considered a modified quota sampling approach.
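As an illustration of the enrollment rule (a hypothetical sketch, not the authors' simulation code), quota sampling screens arriving subjects, whose factor level follows the true prevalence, but caps each level at its planned quota:

```python
import numpy as np

rng = np.random.default_rng(42)

def enroll_quota(n_total, p1_planned, true_p1):
    """Enroll until both quotas are full; subjects whose level is
    already full are turned away (screened but not enrolled)."""
    quota = {1: round(p1_planned * n_total)}
    quota[2] = n_total - quota[1]
    enrolled = {1: 0, 2: 0}
    screened = 0
    while enrolled[1] + enrolled[2] < n_total:
        screened += 1
        level = 1 if rng.random() < true_p1 else 2
        if enrolled[level] < quota[level]:
            enrolled[level] += 1
    return enrolled, screened

# Planned 30% in k1: the enrolled split is exactly 60/140 regardless of
# the true prevalence, at the cost of screening extra subjects.
print(enroll_quota(200, 0.30, 0.15))
```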

Modified quota sampling

The modified quota sampling approach was performed using the following steps. First, as in the quota sampling approach, the sample size needed to detect an interaction effect with 80% power was determined for the pre-specified parameters. Next, the simulated study enrolled the first N/2 subjects. After the first N/2 subjects were enrolled, we tested whether the sampling distribution of the prognostic factor differed from the planned distribution using a one-sample test of the proportion. If this test was statistically significant at the 0.05 level, then a quota sampling approach was undertaken for the second N/2 subjects to be enrolled, to ensure that the sampling distribution of the prognostic factor matched the planned distribution exactly. If the result was not statistically significant, the study continued to enroll normally, allowing for variability in the distribution of the prognostic factor.
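The switching decision might be implemented as follows (our sketch; the paper specifies only a one-sample test of the proportion, so the z-test form here is an assumption):

```python
from math import sqrt
from scipy.stats import norm

def switch_to_quota(k1_count, n_half, p1_planned, alpha=0.05):
    """One-sample z-test of the observed k1 proportion against the
    planned value after N/2 enrollments; True means finish the trial
    under quota sampling."""
    p_hat = k1_count / n_half
    se = sqrt(p1_planned * (1 - p1_planned) / n_half)
    p_value = 2 * norm.sf(abs((p_hat - p1_planned) / se))
    return p_value < alpha

# 18 of the first 100 subjects in k1 when 30% was planned:
print(switch_to_quota(18, 100, 0.30))  # True -> switch to quota sampling
```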

Sample size re-estimation using conditional power

The last method for accounting for the misspecification of the distribution of the prognostic factor used the conditional power of the interaction test at an interim analysis to re-estimate the sample size. We modified the methods of Denne to carry out this procedure [10]. We assumed that the interim analysis occurred after the first N/2 subjects were enrolled. The critical values at the interim analysis (c_1) and at the final analysis (c_2) were determined by the O'Brien-Fleming alpha-spending function [11] using the SEQDESIGN procedure in the SAS statistical software package. We also used the SEQDESIGN procedure to calculate a futility boundary at the interim analysis (b_1). Since these critical values are based on a standard normal distribution and not the Student's t-distribution, we converted them to critical values based on the Student's t-distribution. First, we converted the original critical values to the corresponding percentiles of the standard normal distribution. We then converted these percentiles to the corresponding critical values of the Student's t-distribution with N − 4 degrees of freedom.

At the interim analysis, if the absolute value of the interaction test statistic was less than the futility boundary (t_1 < b_1), we stopped the trial for futility and considered the result of the trial not statistically significant. If the absolute value of the test statistic was greater than the interim critical value (c_1), we stopped the trial for efficacy and considered the result statistically significant. If the absolute value of the test statistic was greater than b_1 but less than c_1, we evaluated the conditional power and determined whether sample size re-estimation was necessary. The following paragraphs outline this procedure.

The following is the conditional power formula proposed by Denne for the two-group comparison of means:

$$CP = 1 - \Phi\!\left(\frac{c_2\sqrt{n_2} - z_1\sqrt{n_1} - (n_t - n_1)\,\delta/\sigma}{\sqrt{n_t - n_1}}\right) \qquad (8)$$

Here, c_2 is the final critical value, n_2 is the sample size at the final analysis, n_t is the originally planned total sample size, z_1 is the test statistic for the interaction at the interim analysis, n_1 is the sample size at the interim analysis, δ is the difference in means, and σ is the common standard deviation for the two groups. We updated the formula by replacing z_1 with t_1 (because the interaction test uses the Student's t-distribution), δ (the difference in means between groups) with θ (the magnitude of the interaction effect), and Φ (the cumulative distribution function of the standard normal distribution) with Ψ (the cumulative distribution function of the Student's t-distribution). Recall that p_1 is the proportion in the k_1 group and σ is the common standard deviation:

$$CP = 1 - \Psi\!\left(\frac{c_2\sqrt{n_2} - t_1\sqrt{n_1} - (n_t - n_1)\,\theta\sqrt{p_1(1-p_1)/(4\sigma^2)}}{\sqrt{n_t - n_1}}\right) \qquad (9)$$
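In code, the modified conditional power might look like this (our sketch; the N − 4 degrees of freedom for Ψ are an assumption carried over from equation 7, and we let the remaining-information terms use n_2 so the same function applies after re-estimation; with n_2 = n_t it reduces to equation 9 as displayed):

```python
from math import sqrt
from scipy.stats import t

def conditional_power(c2, n2, t1, n1, n_t, theta, sigma, p1):
    """Conditional power of the interaction test given the interim
    statistic t1, plugging in interim estimates of theta, sigma, p1."""
    drift = theta * sqrt(p1 * (1 - p1) / (4 * sigma**2))  # per-subject drift
    arg = (c2 * sqrt(n2) - t1 * sqrt(n1) - (n2 - n1) * drift) / sqrt(n2 - n1)
    return 1 - t.cdf(arg, n_t - 4)  # df choice is our assumption
```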

Initially n_2 = n_t, as conditional power is first calculated as if the sample size were not re-estimated. The values of θ, σ, and p_1 for the conditional power formula were estimated at the interim analysis. If the conditional power was less than 80%, then a new n_2 was estimated such that the conditional power was 80%, and a new final critical value, c_2, was calculated as a function of the original final critical value, c̃_2, and the interim test statistic t_1 using the following formula:

$$c_2 = \tilde{c}_2\sqrt{\frac{\gamma_2 - \gamma_1}{\gamma_2(1 - \gamma_1)}} - t_1\sqrt{\frac{\gamma_1}{\gamma_2}}\left(\sqrt{\frac{\gamma_2 - \gamma_1}{1 - \gamma_1}} - 1\right) \qquad (10)$$

In equation 10, γ_1 = n_1/n_t and γ_2 = n_2/n_t, so the final critical value is also a function of n_1, n_2 (the new total sample size), and n_t (the original total sample size). Since all values except n_2 are fixed, we can calculate the new critical value c_2 for any new final sample size n_2. According to Denne, this method for re-estimating the sample size maintains the overall type I error rate at α (equal to 0.05 in our case) [10]. The final sample size n_2 and final critical value c_2 were chosen so that the conditional power shown in equation 9 equaled 80%. If the conditional power was greater than 80% at the interim analysis, then we used the originally calculated n_t as the final sample size (n_2 = n_t), so that the final sample size was only altered to increase the conditional power to 80%.
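Equation 10 and the search for the new final sample size might be sketched as follows (the grid search is ours, as the paper does not state how n_2 was solved for; conditional_power is the function sketched above):

```python
from math import sqrt

def new_critical_value(c2_orig, t1, n1, n2, n_t):
    """Adjusted final critical value (equation 10); per Denne [10] this
    preserves the overall type I error after changing n2. With n2 = n_t
    it returns c2_orig unchanged."""
    g1, g2 = n1 / n_t, n2 / n_t
    return (c2_orig * sqrt((g2 - g1) / (g2 * (1 - g1)))
            - t1 * sqrt(g1 / g2) * (sqrt((g2 - g1) / (1 - g1)) - 1))

def reestimate(c2_orig, t1, n1, n_t, theta, sigma, p1, target=0.80):
    """Smallest n2 >= n_t whose adjusted critical value yields
    conditional power >= target (grid search, capped at 20 * n_t)."""
    for n2 in range(n_t, 20 * n_t):
        c2 = new_critical_value(c2_orig, t1, n1, n2, n_t)
        if conditional_power(c2, n2, t1, n1, n_t, theta, sigma, p1) >= target:
            return n2, c2
    return n_t, c2_orig  # target unreachable within the cap; keep the plan
```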

Validating the conditional power formula

To ensure that the modification of the conditional power formula (formula 9) was appropriate, we performed a validation study using simulations. For each combination of prevalence of the prognostic factor and magnitude of the interaction, we ran 10 trials to obtain 10 interim test statistics per combination of parameters. At the interim analysis we calculated the conditional power based on the hypothesized values of θ, σ, and p_1. For each trial, the second half of the trial was simulated 5,000 times to obtain the empirical conditional power. Since there were 10 different combinations of prevalence of the prognostic factor and magnitude of the interaction effect, and 10 trials for each combination, the plot contains 100 points. We generated a scatter plot of the empirical conditional power based on 5,000 replicates against the calculated conditional power (Figure 1). Values that line up along the y = x line demonstrate that the formula provided an accurate estimation of the conditional power.

Figure 1. Results of the conditional power validation, displaying a plot of the empirical conditional power (y-axis) against the conditional power calculated at the interim analysis (x-axis). The solid line represents the y = x line.

Figure 2. Power using the traditional study design, by magnitude of the interaction effect, planned prevalence of the prognostic factor, and misspecification of the prevalence of the prognostic factor.

Simulation study details

Five thousand replications were performed for each combination of the interaction effect and proportion at level k_1. We first evaluated the empirical power for detecting the interaction effect without accounting for misspecification of the distribution of the prognostic factor. We varied the misspecification of the prognostic factor over −15%, −5%, 0%, +5%, and +15%. For the quota sampling method we did not vary the misspecification of the distribution of the prognostic factor because, by definition, the method does not allow misspecification. While we did not expect the quota sampling method to have power or type I error estimates that differ from the traditional one-stage design under no misspecification, we ran the simulation for this design to confirm there was no impact on power and type I error. For the modified quota sampling method and sample size re-estimation using conditional power we used the same misspecifications as described above.

We calculated the overall empirical power for the interaction effect for all three methods, defined as the percentage of statistically significant interaction effects across the 5,000 replicates. Empirical type I error was calculated in a similar fashion for the three methods, but the interaction effect was assumed to be zero and the sample sizes used were those calculated for the planned interaction effects of 15 and 5. For the sample size re-estimation method we also calculated the empirical conditional power, defined as the percentage of statistically significant interaction effects detected at the 0.05 level among trials that re-estimated the sample size. Because the sample size could change, we also calculated the mean and median final sample size for the entire procedure.

The margin of error for empirical power and type I error was calculated as the half-width of the 99% confidence interval based on a binomial distribution with a sample size of 5,000. Since trials were planned with 1 − β = 0.80 and α = 0.05, this led to margins of error equal to 0.015 and 0.008 when assessing empirical power and type I error, respectively.
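As a quick check of these margins (our arithmetic, using the 99.5th normal percentile for the 99% interval):

```python
from math import sqrt
from scipy.stats import norm

z = norm.ppf(0.995)  # ~2.576
print(round(z * sqrt(0.80 * 0.20 / 5000), 3))  # 0.015 for power = 0.80
print(round(z * sqrt(0.05 * 0.95 / 5000), 3))  # 0.008 for alpha = 0.05
```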

Results

Effect of misspecifying the distribution of the prognostic factor on power for the interaction test

Power curves for the traditional study design are shown in Figure 2. There was a small difference in power when comparing the two magnitudes of the interaction effect while holding the planned prevalence of the prognostic factor and the misspecification equal. This was due to rounding up of the final sample size and to the use of t-critical values instead of z-critical values, whose impact was larger when the magnitude of the interaction was larger (and the sample size correspondingly smaller). In short, if the actual prevalence is closer to 50% than planned, the power is higher than planned, and if the actual prevalence is farther from 50%, the power is lower than planned (see Figure 2 and equation 4).

Performance of the quota sampling procedure

The quota sampling procedure performed well in terms of empirical power (Table 1) and type I error (Table 2) as a strategy to account for misspecifying the distribution of the prognostic factor. The empirical power reached or exceeded the target of 80% for all combinations of θ and distributions of the prognostic factor. The type I error was near the target of 5% for all combinations of sample size and distributions of the prognostic factor.

Table 1 Empirical power for all three methods when there was no misspecification of the distribution of the prognostic factor
Table 2 Empirical type I error for all three methods when there was no misspecification of the distribution of the prognostic factor

Performance of the modified quota sampling procedure

Under no misspecification of the distribution of the prognostic factor, the modified quota sampling procedure performed well with empirical power greater than or equal to 80% across all situations (Table 1). The type I error rate was also near 5% for all combinations under no misspecification (Table 2).

Under negative misspecifications of the distribution of the prognostic factor, empirical power was improved in comparison to doing nothing, but 80% power was not achieved in all cases (Table 3). The ability of the procedure to attain 80% power under misspecification of the prognostic factor depended on the percentage of trials that switched to quota sampling after 50% enrollment. The likelihood of switching to quota sampling was related to the magnitude of the interaction effect, the planned distribution of the prognostic factor, and the magnitude of the negative misspecification. When the magnitude of the interaction effect was 5 and the misspecification of the distribution of the prognostic factor was −15%, more than 99.8% of the trials switched to the quota sampling method and the procedure attained 80% power. However, when the magnitude of the interaction effect was 15 and the misspecification of the distribution of the prognostic factor was −5%, the modified quota sampling approach only attained 80% power when the planned distribution of the prognostic factor was 40% or 50% (Table 3).

Table 3 Empirical power and percentage of trials switching to the quota sampling scheme for the modified quota sampling method

For positive misspecifications of the distribution of the prognostic factor, the modified quota sampling procedure attained 80% power for all combinations of the magnitude of the interaction effect and planned distribution of the prognostic factor (Table 3).

Type I error was maintained at 5% or within the margin of error for all combinations of sample size, planned distribution of the prognostic factor, and misspecification of the distribution of the prognostic factor (Table 4).

Table 4 Empirical type I error and percentage of trials switching to the quota sampling scheme for the modified quota sampling method

Validating the conditional power formula

Figure 1 shows the validation results for the conditional power formula used in this paper. The points line up along the y = x line, which implies that the conditional power calculated from the formula closely matched the empirical conditional power. These results give us confidence that the sample size re-estimation presented in the next section performed as expected.

Performance of the sample size re-estimation using conditional power procedure

Under no misspecification of the distribution of the prognostic factor, the sample size re-estimation procedure resulted in an increase in overall power due to the requirement of 80% conditional power at the interim analysis. Across different combinations of θ and the planned distribution of the prognostic factor, the empirical power ranged between 88% and 91% (Table 1). Despite the increase in power, type I error was maintained at 5% or less for all of the simulations under no misspecification of the distribution of the prognostic factor (Table 2).

When we assumed there was a misspecification of −5% for the distribution of the prognostic factor, the empirical power was greater than 80% except when the planned distribution of the prognostic factor was 10%. In this case the empirical power was 72%-73%, which was still an improvement over the traditional one-stage design (Figure 2). For a misspecification of the distribution of the prognostic factor of −15% and a planned prognostic factor distribution of 20%, the empirical power was also less than 80% (56%-60%), but higher than with the traditional one-stage design (Figure 2). The inability to attain 80% power in these situations was directly related to more trials stopping for futility at the interim analysis. In the situations where the empirical power failed to reach 80%, the percentage of trials stopping for futility ranged between 15% and 26% (Table 5).

Table 5 Empirical power and the percentage of trials stopping for futility and efficacy for sample size re-estimation using conditional power

The empirical type I error was below 5% for all combinations of θ, planned distribution of the prognostic factor, and misspecification of the distribution of the prognostic factor, suggesting room to gain power by changing the critical values c_1 and c_2. Under the null hypothesis, the percentage of trials stopping for futility ranged between 42% and 45%, while the percentage of trials stopping for efficacy was at most 0.6% (Table 6).

Table 6 Empirical type I error and the percentage of trials stopping for futility and efficacy for sample size re-estimation using conditional power

Conditional properties are displayed in Table 7. In almost all cases the empirical conditional power was greater than 80%. The two situations in which the empirical conditional power was less than 80% occurred when a negative misspecification of −15% of the distribution of the prognostic factor was coupled with an initial planned distribution of the prognostic factor of 20%; here the empirical conditional power was 76% for θ equal to 5 and for θ equal to 15. The mean total sample size was always greater than the originally planned sample size, and in some cases more than double it. However, the median sample size was equal to or very close to the original total sample size in all cases (Table 7).

Table 7 Percentage of trials re-estimating the sample size, conditional power among trials that re-estimated the sample size and overall mean and median sample size for sample size re-estimation using conditional power

Discussion

We evaluated the impact of misspecifying the distribution of a prognostic factor on the power and sample size for interaction effects in an RCT setting. We showed that negative misspecification of the distribution of the prognostic factor resulted in a loss of power and a need for an increased sample size, because the actual distribution of the prognostic factor moved further away from a balanced design.

We evaluated three methods for handling misspecification of the distribution of the prognostic factor when investigating interaction effects in an RCT setting. The first two methods dealt with how the subjects would be sampled. The quota sampling method removed any variability in the prognostic factor, so that by definition misspecification of the distribution of the prognostic factor was not possible. For example, if a trial was set to enroll 200 subjects with 30% in the k_1 level of the prognostic factor, then enrollment would be capped at 60 subjects in the k_1 level and 140 in the k_2 level. This method maintained the power at 80% and controlled the type I error at 5%. The modified quota sampling approach did not perform as well in all situations. In summary, this method enrolled subjects randomly for the first half of the trial, then switched to the quota sampling approach if the distribution of the prognostic factor differed significantly from the planned distribution. Power was maintained at 80% when the percentage of trials switching to the quota sampling approach was large. However, when the percentage switching was small and there was a negative misspecification of the distribution of the prognostic factor, the power was compromised, though rarely substantially.

The last method used conditional power at an interim analysis (after 50% enrollment) to re-estimate the sample size. We adapted the method of Denne [10]. This method resulted in better overall power and type I error estimates than the modified quota sampling procedure. The main reason this method could not maintain empirical power at 80% under a negative misspecification is that too many trials stopped for futility before the sample size could be re-estimated; all of the type II error was used up at the interim analysis. This result does not diminish the value of the sample size re-estimation procedure, since misspecification of other trial parameters that diminish power would have a similar effect.

The findings from our study detail methods for handling the misspecification of the distribution of a prognostic factor when detecting an interaction effect in an RCT setting. An advantage of the quota sampling approach over the conditional power approach is that the final sample size does not need to be changed and no interim analysis that spends part of the alpha level needs to be undertaken. However, the quota sampling approach is sensitive to misspecification of the distribution of the prognostic factor in terms of trial duration, as it may take longer to recruit the necessary patients at the appropriate level of the prognostic factor. Sample size re-estimation does not have this issue, but it is sensitive to the values used in the conditional power formula. In particular, the mean sample size tended to be larger than the originally planned sample size even in situations where the misspecification led to a more balanced design, which in theory should increase the power and reduce the sample size needed. In some simulations in which the conditional power was low but did not trigger the futility stopping rule, the newly estimated sample size was very large, producing outliers that inflated the mean final sample size. To address this, we also reported the median final sample size of the simulations, which was less than or equal to the originally planned sample size.

As with any study that uses simulations, there are several limitations to our study. One limitation is that not all interaction effects were explored; we studied only a 2 by 2 interaction effect. Summarizing the interaction effect with one contrast would not be feasible if three or more treatments or levels of the prognostic factor were considered. Future work could explore these interaction effects. We also did not examine the impact of informative cross-over. Often, subjects randomized to one arm (e.g. non-surgical therapy) in a study will cross over to another arm (e.g. surgical therapy). The impact of differing cross-over rates should be explored.

Another limitation is that we did not look at the impact of unequal variances. This should be the goal of future work, as levels of a prognostic factor can impact variability in the outcome.

Lastly, we only studied the impact of one sample size re-estimation procedure, as described by Denne in 2001 [10]. There are many other methods of re-estimating the sample size that could be studied [12]. Nevertheless, the method described by Denne remains a valid and acceptable method according to the FDA guidance on adaptive design clinical trials for drugs and biologics [13].

Conclusions

We examined three methods for dealing with misspecification of the distribution of the prognostic factor when testing treatment by prognostic factor interaction effects in an RCT setting. Sample size re-estimation using conditional power improved the power when there was a negative misspecification of the distribution of the prognostic factor while maintaining appropriate type I error. As more RCTs seek to explore interaction effects as their primary outcome, these methods will be useful for clinicians planning their studies. Further research should look at the impact of cross-over between treatment groups.

Grant support

This research was supported in part by the National Institutes of Health, National Institute of Arthritis and Musculoskeletal and Skin Diseases grants T32 AR055885 and K24 AR057827.

References

  1. Pocock SJ, Hughes MD, Lee RJ: Statistical problems in the reporting of clinical trials. A survey of three medical journals. N Engl J Med. 1987, 317 (7): 426-432. 10.1056/NEJM198708133170706.


  2. Assmann SF, Pocock SJ, Enos LE, Kasten LE: Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet. 2000, 355 (9209): 1064-1069. 10.1016/S0140-6736(00)02039-0.


  3. Bhandari M, Devereaux PJ, Li P, Mah D, Lim K, Schunemann HJ, Tornetta P: Misuse of baseline comparison tests and subgroup analyses in surgical trials. Clin Orthop Relat Res. 2006, 447: 247-251.


  4. Wang R, Lagakos SW, Ware JH, Hunter DJ, Drazen JM: Statistics in medicine–reporting of subgroup analyses in clinical trials. N Engl J Med. 2007, 357 (21): 2189-2194. 10.1056/NEJMsr077003.


  5. Lagakos SW: The challenge of subgroup analyses–reporting without distorting. N Engl J Med. 2006, 354 (16): 1667-1669. 10.1056/NEJMp068070.


  6. Brookes ST, Whitley E, Peters TJ, Mulheran PA, Egger M, Davey Smith G: Subgroup analyses in randomised controlled trials: quantifying the risks of false-positives and false-negatives. Health Technol Assess. 2001, 5 (33): 1-56.


  7. Brookes ST, Whitely E, Egger M, Smith GD, Mulheran PA, Peters TJ: Subgroup analyses in randomized trials: risks of subgroup-specific analyses; power and sample size for the interaction test. J Clin Epidemiol. 2004, 57 (3): 229-236. 10.1016/j.jclinepi.2003.08.009.


  8. Schulz KF, Altman DG, Moher D: CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. Trials. 2010, 11: 32. 10.1186/1745-6215-11-32.


  9. Lachenbruch PA: A note on sample size computation for testing interactions. Stat Med. 1988, 7 (4): 467-469. 10.1002/sim.4780070403.


  10. Denne JS: Sample size recalculation using conditional power. Stat Med. 2001, 20 (17–18): 2645-2660.


  11. O’Brien PC, Fleming TR: A multiple testing procedure for clinical trials. Biometrics. 1979, 35 (3): 549-556. 10.2307/2530245.


  12. Chow SC, Chang M: Adaptive design methods in clinical trials. 2007, Boca Raton, FL: Chapman and Hall/CRC Press


  13. Food and Drug Administration: Guidance for industry: adaptive design clinical trials for drugs and biologics. http://www.fda.gov/downloads/DrugsGuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf


Acknowledgements

We would like to acknowledge Dr. C. Robert Horsburgh for his valuable comments on earlier drafts of this manuscript.

Author information


Corresponding author

Correspondence to William M Reichmann.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

WMR designed the simulation study, interpreted the data, and drafted the manuscript. MPL interpreted the data and critically revised the manuscript. DRG interpreted the data and critically revised the manuscript. EL interpreted the data and critically revised the manuscript. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Reichmann, W.M., LaValley, M.P., Gagnon, D.R. et al. Impact of misspecifying the distribution of a prognostic factor on power and sample size for testing treatment interactions in clinical trials. BMC Med Res Methodol 13, 21 (2013). https://doi.org/10.1186/1471-2288-13-21