Estimating required information size by quantifying diversity in random-effects model meta-analyses
© Wetterslev et al. 2009
Received: 15 May 2009
Accepted: 30 December 2009
Published: 30 December 2009
There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low-risk of bias.
Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis.
We devise a measure of diversity (D²) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D² is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I²), which may underestimate the required information size. Thus, D² and I² are compared and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D² ≥ I², for all meta-analyses.
We conclude that D² seems a better alternative than I² for considering model variation in any random-effects meta-analysis, whatever the choice of between-trial variance estimator that constitutes the model. Furthermore, D² can readily adjust the required information size in any random-effects model meta-analysis.
Outcome measures in a single randomised trial or a meta-analysis of several randomised trials are typically dichotomous, especially for important clinical outcomes such as death or acute myocardial infarction. Although meta-analysts cannot directly influence the number of participants in a meta-analysis the way trialists conducting a single trial can, the assessment of the meta-analytic result depends heavily on the amount of information provided. A limited number of events from a few small trials and the associated random error may be under-recognised sources of spurious findings. If a meta-analysis is conducted before reaching a required information size (i.e., the required number of participants in a meta-analysis), it should be evaluated according to the increased risk that the result may represent a chance finding. It has recently been suggested that sample size estimation in a single trial may be less important in the era of systematic review and meta-analysis. Therefore, the reliability of a conclusion drawn from a meta-analysis, despite conventionally calculated confidence limits, may depend even more on the number of events and the total number of participants included than hitherto perceived [2–8]. Both numbers determine the amount of available information in a meta-analysis. The information size (IS) required for a reliable and conclusive meta-analysis may be assumed to be at least as large as the sample size (SS) of a single well-powered randomised clinical trial to detect or reject an anticipated intervention effect [2–4].
The estimation of a required information size for a meta-analysis, in order to detect or reject an anticipated intervention effect on a binary outcome measure, should be based on reasonable assumptions. These assumptions may be derived from two kinds of information. Firstly, an a priori intervention effect may be anticipated, most appropriately at the time when the protocol for a systematic review is prepared. Such an effect may be estimated by consulting related interventions for the same disease, or the same intervention for related diseases, suggesting a clinically relevant effect to be detected or ruled out [2–4]. This situation is almost analogous to hypothesis testing in a single randomised trial. Secondly, an intervention effect estimated by trials with low-risk of bias in the meta-analysis may represent our best estimate, at a given time point, of a possible intervention effect given the available data. This would be a kind of post hoc analysis of the information needed to detect or reject an intervention effect suggested by data already available. When planning a new trial it may be very important to estimate the IS needed for the updated meta-analysis to be conclusive. In both instances the estimated required information size may be applied to grade the evidence reported in a cumulative meta-analysis, adjusting for the risk of random error due to repetitive testing on accumulating data [5, 6]. If the number of actually accrued participants falls short of the required IS, the meta-analysis may be inconclusive even though the confidence interval suggests a clinically relevant effect, because if the confidence interval (or the p-value) is appropriately adjusted with sequential methods, it may no longer show a statistically significant or clinically relevant effect.
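The unadjusted required IS from an anticipated control event proportion and relative risk reduction can be sketched with the classic two-proportion sample size formula. This is a minimal illustration; the function name, the default error levels, and the unpooled-variance form of the formula are our choices, not taken from the paper.

```python
from statistics import NormalDist

def unadjusted_information_size(pc, rrr, alpha=0.05, beta=0.10):
    """Required number of participants (both groups combined) for a
    meta-analysis, computed like the sample size of a single two-arm
    trial comparing two proportions.

    pc  : anticipated control group event proportion
    rrr : anticipated relative risk reduction (0.20 for a 20% RRR)
    """
    pe = pc * (1 - rrr)  # anticipated event proportion, experimental group
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    # per-group size, unpooled-variance version of the formula
    n_per_group = ((z_a + z_b) ** 2
                   * (pc * (1 - pc) + pe * (1 - pe))
                   / (pc - pe) ** 2)
    return 2 * n_per_group

# e.g. a 10% control event proportion and a 20% anticipated RRR
# unadjusted_information_size(0.10, 0.20)
```

Smaller anticipated effects inflate the requirement sharply, since the effect size enters squared in the denominator.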
Conversely, if the actually accrued number of participants exceeds the required information size without the meta-analysis becoming statistically significant, we may be able to rule out the anticipated intervention effect size.
It is not realistic to assume that the populations of the included trials in a meta-analysis are truly homogeneous, as they may be in a single clinical trial. A meta-analysis, therefore, should not analyse included participants as if they come from one trial. Consequently, the difference between obtaining the required IS and the SS is rooted in the underlying assumption of between-trial variability and, thus, in the chosen meta-analytic model.
If the between-trial variability of the outcome measure estimates in a meta-analysis is incorporated into the model using the traditional one-way random-effects model, the required IS will be affected. In this vein, the required IS is a monotonically increasing function of the total variability among the included trials. An estimate of the required IS can therefore be derived once the degree of variability is known or prespecified. The test statistic for heterogeneity in a meta-analysis, the inconsistency factor (I²) based on Cochran's Q, proposed by Higgins and Thompson, may seem an obvious quantity to use for this purpose, as it allows us to estimate the degree of variation not covered by the assumption of homogeneity. However, I² is derived using a set of general assumptions that may be inappropriate in this context.
In this paper we derive a general expression for the required IS in any random-effects model. We prove the monotone relationship between the IS and the degree of total variability in a one-way random-effects meta-analysis. We use our results to define a quantification of diversity (D²) between included trials in a meta-analysis, which is the relative model variance reduction when the model of pooling is changed from a random-effects model into a fixed-effect model. We analyse and discuss the differences between our definition of diversity, D², and the commonly used measure for heterogeneity, I².
If the required IS needed to detect or reject an intervention effect in a meta-analysis should be at least the sample size needed to detect or reject a similar effect in a single trial, then the following scenario applies:
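The scenario can be written compactly using quantities defined later in the paper, namely the model variances v_F and v_R of the fixed-effect and random-effects meta-analyses and the adjustment factor A_RF; this is a sketch consistent with those definitions, not a reproduction of the original display:

```latex
N_R \;=\; \frac{v_R}{v_F}\, N_F \;=\; A_{RF}\, N_F ,
\qquad
A_{RF} \;=\; \frac{v_R}{v_F} \;=\; \frac{1}{1 - D^2} \;\ge\; 1 .
```

Because v_R grows with the between-trial variance estimate τ̂², the factor A_RF, and hence the required N_R, increases with the degree of heterogeneity.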
This yields the intuitive interpretation that the required IS in a random-effects model is a monotone increasing function of the degree of heterogeneity.
Table 1. Meta-analysis examples, with the number of trials and the number of participants per meta-analysis (numeric entries not recoverable from this version of the table):
- Afshari and others: antithrombin III for critically ill patients.
- Al-Inany and others: gonadotropin releasing hormone for assisted reproductive therapy; outcome: number of cycle cancellations due to poor ovarian response.
- Soll and others: prophylactic surfactant to prevent morbidity and mortality in preterm infants; outcome: mortality or pneumothorax.
- Wetterslev and Juhl: perioperative β-blockers for non-cardiac surgery; outcome: non-fatal perioperative myocardial infarction within 30 days of operation.
- Bury and Tudehope: enteral antibiotics in newborn; outcome: necrotizing enterocolitis in newborn.
- Li and others: intravenous magnesium for acute myocardial infarction.
- Meyhoff and others: perioperative ventilation with 80% versus 30% oxygen during intestinal surgery; outcome: wound infection within 15 days of surgery.
Table 2. Derived data from the meta-analysis examples. Columns: range of weights w_i (%) in the fixed-effect model; (D² − I²)%; a priori relative risk reduction (%); unadjusted information size; diversity-adjusted information size. Rows: Afshari and others; Al-Inany and others; Soll and others; Wetterslev and Juhl; Bury and Tudehope (fixed-effect weights 1.5–9.6%); Li and others; Meyhoff and others. Remaining cell values not recoverable from this version of the table.
If the focus is shifted towards a sufficient IS estimation, then adjusting factors based on I² calculated from a moment-based sampling error may be insufficient. We therefore suggest considering an alternative adjusting factor to obtain an adequate estimation of the required IS.
This way, D² in a meta-analysis may become a central measure of the between-trial variability relative to the sum of the between-trial variability and an estimate of the sampling error originating from the required information size.
As such, D² is able to quantify the relative model variance change from a random-effects into a fixed-effect model. More importantly, D², in contrast to I², is not based on underlying assumptions of a 'typical' sampling error that are violated in most meta-analyses. D² is the percentage of the total variance (the sum of between-trial variance and sampling error), in a random-effects model, contributed by the between-trial variance.
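In symbols, with τ̂² the between-trial variance estimate, σ̂²_D the information size-based 'typical' sampling error, σ̂²_M the moment-based one, and v_F, v_R the model variances (all listed in the notation section), the parallel definitions can be sketched as:

```latex
D^2 \;=\; \frac{\hat\tau^2}{\hat\tau^2 + \hat\sigma_D^2}
    \;=\; \frac{v_R - v_F}{v_R},
\qquad
I^2 \;=\; \frac{\hat\tau^2}{\hat\tau^2 + \hat\sigma_M^2}
    \;=\; \frac{Q - (k-1)}{Q}.
```

The two measures differ only in which 'typical' sampling error is set against the same between-trial variance estimate.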
In our simulations, we considered meta-analyses with k = 6 and k = 20 trials. For each k, we considered the four combinations of two average control event proportions (PC) of 10% and 30% and two true values of the overall effect, expressed as odds ratios of 1 and 0.7. These values were selected to cover different plausible meta-analytic scenarios. In total, they make up 8 simulation scenarios.
For each combination of the above-mentioned variables we generated data for k 2×2 tables. For all k trials, within-group sample sizes were determined by sampling an integer between 20 and 500 participants. Group sizes were equal in each simulated trial. We drew the trial-specific control group event rate, PC_i, from a uniform distribution, PC_i ~ U(PC − 0.15, PC + 0.15). We drew the number of observed events in the control group from a binomial distribution, e_iC ~ bin(n_i, PC_i). For each meta-analysis scenario we varied the degree of heterogeneity by sampling the between-trial standard deviation, τ (not the between-trial variance τ²), from a uniform distribution, τ ~ U(10^-10, √0.60). We simulated the underlying true trial intervention effects as log odds ratios, ln(OR_i) ~ N(ln(OR), τ²), where OR is the true intervention effect expressed as an odds ratio. We drew the observed number of events in the intervention group from a binomial distribution, e_iE ~ bin(n_i, PE_i), where PE_i = PC_i·exp(ln(OR_i))/(1 − PC_i + PC_i·exp(ln(OR_i))).
For all meta-analysis scenarios we simulated 10,000 meta-analyses, and for each of these we calculated I² and D². For each scenario we plotted D² against I² and incorporated the line of unity in the scatter plot.
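One such simulated meta-analysis can be sketched in Python (NumPy) as follows. The 0.5 continuity correction against zero cells and the clipping of event proportions to (0.01, 0.99) are our additions, not specified in the scheme above; the between-trial variance is estimated with the DerSimonian–Laird estimator.

```python
import numpy as np

rng = np.random.default_rng(2009)

def simulate_meta(k=6, pc_mean=0.10, true_or=0.70):
    """Simulate one meta-analysis of k two-arm trials following the
    scheme described above, then return (I2, D2) on the log odds-ratio
    scale with the DerSimonian-Laird between-trial variance estimate."""
    n = rng.integers(20, 501, size=k)                  # per-group sizes
    pc = np.clip(rng.uniform(pc_mean - 0.15, pc_mean + 0.15, k), 0.01, 0.99)
    tau = rng.uniform(1e-10, np.sqrt(0.60))            # between-trial SD
    log_or = rng.normal(np.log(true_or), tau, k)       # true trial effects
    pe = pc * np.exp(log_or) / (1 - pc + pc * np.exp(log_or))
    ec = rng.binomial(n, pc)                           # control group events
    ee = rng.binomial(n, pe)                           # experimental events
    # 2x2 cells with a 0.5 continuity correction against zero cells
    a, b = ee + 0.5, n - ee + 0.5
    c, d = ec + 0.5, n - ec + 0.5
    y = np.log(a * d / (b * c))                        # observed log ORs
    v = 1 / a + 1 / b + 1 / c + 1 / d                  # sampling variances
    w = 1 / v                                          # fixed-effect weights
    mu_f = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_f) ** 2)                    # Cochran's Q
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    tau2_dl = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    v_f = 1 / np.sum(w)                                # fixed-effect variance
    v_r = 1 / np.sum(1 / (v + tau2_dl))                # random-effects variance
    d2 = (v_r - v_f) / v_r
    return i2, d2
```

Repeating `simulate_meta` for each scenario and plotting D² against I² reproduces the kind of scatter described above, with all points on or above the line of unity.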
We selected traditional random-effects meta-analyses to cover a range of inconsistency, I², from 0% to 100% and to come from a wide range of medical research fields.
So it follows from (3.5) that D² ≥ I² for all meta-analyses. Furthermore, if we apply Chebyshev's inequality to the weights arranged such that w_1 ≥ w_2 ≥ … ≥ w_k, then:
and, finally, D² ≥ T² ≥ I² in all meta-analyses.
demonstrating that the percentage increase in variance, when the model of meta-analysis is changed from a fixed-effect model into a random-effects model, can, of course, also be expressed in terms of diversity.
as (1 + w_i·τ̂²) ≥ 1 for all i and for all estimators of τ², including the DerSimonian-Laird estimator, with τ̂² being greater than or equal to 0.
We used the expression of D² to calculate this quantity in seven traditional random-effects meta-analyses [16–22] listed in Table 1. These meta-analyses cover a range of inconsistency, I², from 0% to 74.2% and come from different medical research fields: intensive care, assisted reproductive technology, perioperative medicine [19, 22], neonatology [18, 20], and cardiology. The results of the calculations of I², D², inconsistency-adjusted information size (HIS), and diversity-adjusted information size (DIS) from these meta-analyses are shown in Table 2. The calculated unadjusted SS ranges from 440 to 31,094 participants.
Using a mathematical derivation, meta-analysis simulations, and examples of meta-analyses, we derive the concept of diversity, D². D² may be used to adjust the required information size in any random-effects model meta-analysis once the between-trial variance is estimated. Focusing on required information size estimation in a random-effects meta-analysis, D² seems less biased than I². D² is directly constructed to fulfil the requirements of the information size calculation and is consequently independent of any a priori 'typical' sampling error estimate, whereas I² is influenced by such an estimate. We therefore find it possible and appropriate to take D² into consideration when calculating the required IS in meta-analyses as DIS.
DIS has several advantages. It measures the required IS needed to preserve the anticipated risks of type I and type II errors in a random-effects model meta-analysis. DIS considers the total variance change when the model shifts from a fixed-effect into a random-effects model. DIS is a model-dependent, derived estimate of the required IS. The adjustment depends only on the anticipated intervention effect and on the model used to incorporate the between-trial variance estimate. D² applies to random-effects models other than that proposed by DerSimonian-Laird as long as the between-trial variance estimator, τ̂², is specified. The adjustment of the IS does not depend on the levels of the type I and II errors, as (Z1-α/2 + Z1-β)² cancels out during the derivation of the adjustment factor A_RF (see equations 2.1, 2.2, and 2.5). The relationship D² ≥ I² in all the simulations and in all the examples (shown as points above the line of unity in figures 1, 2, and 3) is in accordance with the properties of D² compared to I² derived in section 3.1.
There are limitations to DIS. Like HIS, the use of DIS cannot compensate for systematic bias such as selection bias, allocation bias, reporting bias, collateral intervention bias, and time lag bias [5, 23–28]. Furthermore, DIS is always greater than or equal to HIS, which may emphasise that caution is needed when interpreting a meta-analysis before the required DIS has been reached [2–8].
The calculation of HIS and DIS may seem to contrast with the SS calculation in a single trial, where no adjustment for heterogeneity or diversity is performed. However, Fedorov and Jones advocated the necessity of adjusting the SS for heterogeneity arising from different accrual numbers among centres in a multi-centre trial in order to avoid the trial being underpowered. If such an adjustment seems fair for a single trial, it also appears appropriate for a meta-analysis of several trials. As an example, we calculated the DIS to be 14,164 participants for a meta-analysis of the effect on mortality of perioperative beta-blockade in patients undergoing non-cardiac surgery (Table 2). This may explain why a recent meta-analysis of seven randomised trials with low-risk of bias, including 11,862 participants, indicates, but still does not convincingly show, firm evidence of harm. The actual accrual of 11,862 participants is beyond the HIS of 9,726 participants but below the DIS of 14,164 participants, and the meta-analysis may still be inconclusive. This suggests that HIS is not a sufficiently adjusted meta-analytic information size. Furthermore, the example raises the important question of the stability of I² and D² beyond a certain number of trials in a meta-analysis, as I² was 13.4% in the meta-analysis after 2,211 participants and has now doubled to I² = 27.0% after 11,862 accrued participants in the meta-analysis of seven trials with low-risk of bias. The assumption that I² and D² become stable after five trials is probably wrong and illustrates the moving target concept which we have to face when doing cumulative meta-analyses as evidence accumulates. Although a moving target may cause conceptual problems, a moving target may be better than no target at all.
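Given an unadjusted information size and estimates of I² and D², the two adjusted sizes differ only by their inflation factors 1/(1 − I²) and 1/(1 − D²). A small sketch with made-up numbers; none of the values below are taken from Table 2:

```python
def adjusted_information_sizes(is_unadjusted, i2, d2):
    """Heterogeneity-adjusted (HIS) and diversity-adjusted (DIS)
    information sizes; because D2 >= I2, DIS >= HIS always holds."""
    his = is_unadjusted / (1 - i2)
    dis = is_unadjusted / (1 - d2)
    return his, dis

# hypothetical example values, for illustration only
his, dis = adjusted_information_sizes(8000, i2=0.25, d2=0.40)
# his ≈ 10667, dis ≈ 13333
```

The gap between HIS and DIS widens as (D² − I²) grows, which is exactly the situation in which relying on HIS alone risks declaring a meta-analysis conclusive too early.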
The assumption that the IS required for a reliable and conclusive fixed-effect meta-analysis should be as large as the SS of a single well-powered randomised clinical trial to detect or reject an anticipated intervention effect [2–4] may not be necessary in some instances. The statistical information (SINF) required in a meta-analysis could ultimately be expressed as SINF = (Z1-α/2 + Z1-β)²/δ², with δ being the effect size. As the achieved statistical information is the reciprocal of the variance in the meta-analysis, say 1/v_R, it follows that in meta-analyses with 1/v_R ≥ SINF the amount of information may eventually suffice to detect, or reject, an effect size of δ without yet having reached the HIS or the DIS. This criterion, however, is not a simple one and may only be fulfilled occasionally. Furthermore, it seems impossible to forecast, or even to get an idea of, the magnitude of v_R at the beginning of a series of trials, as well as along the course of trials being performed.
D² offers a number of useful properties compared to I². In contrast to I², D² reflects the relative variance expansion due to the between-trial variance estimate without assuming an estimate of a 'typical' sampling error σ². D² is reduced when the τ̂² estimate is reduced, even for the same set of trials. When diversity is larger than inconsistency, this may be an indication that the total variability among trials in the meta-analysis is even greater than suggested by I². I² is intrinsically influenced by a potentially overestimated 'typical' sampling error, which underestimates I² and inherently places less weight on large trials with many events. On the other hand, a 'typical' sampling error originating from the required information size, σ̂²_D, could be deduced from D². We would, however, advise great caution in such an attempt. The difference (D² − I²) reflects the difference between the moment-based and the information size-based 'typical' sampling error estimates. The calculation of diversity and of (D² − I²) may serve as supplementary tools in the assessment of variability in a meta-analysis. D² is a transformation of the ratio of the variances from the random-effects model and the fixed-effect model. This variance ratio was a candidate for the quantification of heterogeneity.
D² may vary within the same set of trials when different between-trial variance estimators are used in the corresponding random-effects model. In contrast, I² is intimately linked to the specific between-trial variance estimator in the DerSimonian-Laird random-effects model, as I² by definition is (Q − (k − 1))/Q and Q is used to estimate a moment-based between-trial variance. The interpretation of heterogeneity is obviously dependent on the variance estimator as well. An estimate of τ² is a prerequisite for any random-effects model, and the actual estimated value, together with the way τ̂² is incorporated into the model, actually constitutes the model. Therefore, a quantification of between-trial variability, rather than sampling error, that is independent of the specific random-effects model is impossible, as the model is constituted by the between-trial variance estimator. D² adapts automatically to different between-trial variance estimators, while I² is linked to the estimator from the DerSimonian-Laird random-effects model.
D² may have some limitations too. The derivation of D² depends on the assumption that the point estimate of the intervention effect in the fixed-effect model and the point estimate in the random-effects model are approximately equal. Meta-analyses with a considerable difference between the point estimates of the fixed-effect and the random-effects models present specific problems. Probably more information is needed when μ_F >> μ_R, since the formula yields higher values of N_R under the assumption of a constant variance ratio. On the other hand, less information may be needed when μ_F << μ_R, since the formula then yields lower values of N_R under the assumption of a constant variance ratio. However, examples with considerable differences between the point estimates in a fixed- and a random-effects model presumably represent meta-analyses of interventions with considerable between-trial variance due to small trial bias. The meta-analysis of the effect of magnesium in patients with myocardial infarction is such an example, where one large trial totally dominates the result in the fixed-effect model but is unduly down-weighted in the random-effects model. Care should be taken when interpreting the random-effects model in such a situation, despite any calculated information size. Further, to foresee a priori the size of the difference between μ_F and μ_R seems impossible, and the calculation may then degenerate into an exclusively post hoc analysis.
Second, D², though potentially unbiased with respect to information size calculations, could come with a greater variance than I² when both are calculated in the same set of meta-analyses. This latter situation presents a potentially unfavourable 'bias-variance trade-off', but an estimate of its magnitude will have to await simulation studies addressing the issue.
It may seem an advantage that I² is always reported in meta-analyses and is therefore readily available to adjust the expected information size. On the other hand, D² is also calculable for any meta-analysis of ratio measures (e.g., RR or OR) as D² = 1 − (width_F/width_R)², where width_F and width_R refer to the widths of the confidence intervals for the logarithmically transformed measures in the fixed-effect and the random-effects models, respectively.
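Since each model variance is proportional to the squared width of its confidence interval on the log scale, diversity can be recovered directly from published fixed-effect and random-effects intervals. A minimal sketch, assuming both intervals use the same confidence level; the function name and the example intervals are ours:

```python
import math

def d2_from_ci(fixed_ci, random_ci):
    """Diversity from the fixed-effect and random-effects confidence
    intervals (lower, upper) of a ratio measure such as RR or OR.
    Both model variances are proportional to the squared interval
    widths on the log scale, so D2 = 1 - (width_F / width_R)**2."""
    width_f = math.log(fixed_ci[1]) - math.log(fixed_ci[0])
    width_r = math.log(random_ci[1]) - math.log(random_ci[0])
    return 1 - (width_f / width_r) ** 2

# hypothetical interval estimates, for illustration only
# d2_from_ci((0.70, 0.90), (0.60, 1.05))
```

This makes D² computable from any published meta-analysis that reports both models, without access to the trial-level data.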
Last but not least, the decision whether to pool intervention effect estimates in a meta-analysis should rest on the clinical relevance of any inconsistency or diversity present. The between-trial variance, τ², rather than I² or D², may be the appropriate measure for this purpose [33–35].
The estimation of a required IS for a meta-analysis to detect or reject an anticipated intervention effect on a binary outcome measure should be based on reasonable assumptions. Accordingly, it may not be wise to assume absence of heterogeneity in a meta-analysis unless the intervention effect is anticipated to be zero [36, 37]. Instead, it may be wise to anticipate moderate to substantial heterogeneity (e.g., more than 50%) in an a priori adjustment of the required IS. The concept of diversity points to the fact that an adjustment based on experience with inconsistency would result in underestimated heterogeneity and hence an underestimated required IS. Alternatively, for a future updated meta-analysis to become conclusive, we may apply the actual estimated heterogeneity of the available trials in a meta-analysis as the best we have for adjusting the required IS. D² seems more capable than I² of obtaining such an adequate adjustment.
A quantity is needed to characterise the proportion of between-trial variation in any meta-analysis relative to the total model variance of the included trials. Diversity, D², may be such a quantity. D² describes the relative model variance reduction when changing from a random-effects model into a fixed-effect model. Diversity may be described as the proportion of the total variance in a random-effects model contributed by the between-trial variation, regardless of the chosen between-trial variance estimator. Furthermore, D² can adequately adjust the required information size in any random-effects meta-analysis, irrespective of the meta-analytic model.
The authors declare that they have no competing interests.
JW is an anaesthesiologist and a trialist working with meta-analysis and trial sequential analysis at the Copenhagen Trial Unit having special interests in perioperative medicine.
KT is a biostatistician working with meta-analysis and trial sequential analysis at the Copenhagen Trial Unit.
JB is an intern working in paediatrics with meta-analysis and trial sequential analysis.
CG is head of the Copenhagen Trial Unit, Editor-In-Chief of the Cochrane Hepato-Biliary Group, a trialist, and an associate professor at Copenhagen University.
Risk of type I error
Risk of type II error
Adjustment factor of information size changing from a fixed-effect to a random-effects model
Diversity-adjusted information size (DIS)
Heterogeneity-adjusted information size (HIS)
Number of trials in a meta-analysis
Required number of participants in a random-effects meta-analysis
Required number of participants in a fixed-effect meta-analysis
Required number of participants in a meta-analysis
Estimate of the intervention effect in a fixed-effect meta-analysis
Estimate of the intervention effect in a random-effects meta-analysis
Control event rate
Relative risk reduction
Sample size in a single randomised clinical trial
Estimate of a typical sampling error considering diversity
Estimate of a typical moment-based sampling error
Mean of estimates of sampling errors in a meta-analysis
Estimator of the variance of between trial intervention effect estimates
Estimate of the variance of between trial intervention effect estimates
DerSimonian-Laird estimate of the variance of between trial intervention effect estimates
The variance in a fixed-effect meta-analysis
The variance in a random-effects meta-analysis
Fractile for 1-α/2
Fractile for 1-β.
We are grateful to Jørgen Hilden, M.D., associate professor emeritus at the Department of Biostatistics, Copenhagen University, for having critically reviewed a former version of our manuscript. We thank the peer reviewers Rebecca Turner, MSc in statistics and Gerta Rücker, MSc in statistics for helpful suggestions for improvements of the manuscript.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.