Using a mathematical derivation, simulations of meta-analyses, and examples of meta-analyses, we derive a concept of diversity, *D*^{2}. *D*^{2} may be used to adjust the required information size in any random-effects model meta-analysis once the between-trial variance is estimated. Focusing on the estimation of the required information size in a random-effects meta-analysis, *D*^{2} seems less biased than *I*^{2}. *D*^{2} is constructed directly to fulfil the requirements of the information size calculation and is therefore independent of any *a priori* 'typical' sampling error estimate, whereas *I*^{2} is influenced by such an estimate. We therefore find it both possible and appropriate to take *D*^{2} into consideration when calculating the required *IS* in meta-analyses, yielding the *DIS*.
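As a minimal numerical sketch of these relationships (all numbers are hypothetical; *D*^{2} is taken as the relative variance expansion from the fixed-effect to the random-effects pooled estimate, and *DIS* as the fixed-effect required *IS* inflated by 1/(1 - *D*^{2})):

```python
# Minimal sketch (hypothetical numbers): D² as the relative variance
# expansion from the fixed-effect to the random-effects pooled estimate,
# and DIS as the fixed-effect required IS inflated by 1 / (1 - D²).

def d_squared(v_fixed, v_random):
    """Diversity: (v_R - v_F) / v_R, the share of the random-effects
    variance attributable to between-trial variation."""
    return (v_random - v_fixed) / v_random

def dis(is_fixed, d2):
    """Diversity-adjusted required information size."""
    return is_fixed / (1.0 - d2)

d2 = d_squared(0.010, 0.016)
print(round(d2, 3))          # 0.375
print(round(dis(7000, d2)))  # 11200
```

Note how a modest variance expansion already inflates the required *IS* by a factor of 1/(1 - 0.375) = 1.6.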

*DIS* has several advantages. It measures the required *IS* needed to preserve the anticipated risks of type I and type II errors in a random-effects model meta-analysis. *DIS* considers the change in total variance when the model shifts from a fixed-effect to a random-effects model. *DIS* is a model-dependent, derived estimate of the required *IS*. The adjustment depends only on the anticipated intervention effect and on the model used to incorporate the between-trial variance estimate. *D*^{2} applies to random-effects models other than that proposed by DerSimonian-Laird [16] as long as the between-trial variance estimator, *τ*^{2}, is specified. The adjustment of *IS* does not depend on the levels of the type I and II errors, as (*Z*_{1-α/2} + *Z*_{1-β})^{2} cancels out during the derivation of the adjustment factor *A*_{RF} (see equations 2.1, 2.2, and 2.5). The relationship *D*^{2} ≥ *I*^{2} in all the simulations and in all the examples (shown as points above the line of unity in figures 1, 2, and 3) is in accordance with the properties of *D*^{2} compared to *I*^{2} derived in section 3.1.
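To make the cancellation explicit (notation reconstructed from the surrounding text, with *ν*_{F} and *ν*_{R} denoting the variances of the pooled estimate in the fixed-effect and random-effects models and *μ* the anticipated intervention effect):

```latex
A_{RF} \;=\; \frac{N_R}{N_F}
       \;=\; \frac{(Z_{1-\alpha/2}+Z_{1-\beta})^{2}\,\nu_R/\mu^{2}}
                  {(Z_{1-\alpha/2}+Z_{1-\beta})^{2}\,\nu_F/\mu^{2}}
       \;=\; \frac{\nu_R}{\nu_F}
       \;=\; \frac{1}{1-D^{2}}
```

The factor (*Z*_{1-α/2} + *Z*_{1-β})^{2} appears in both numerator and denominator and cancels, so *A*_{RF} is unaffected by the chosen error levels.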

There are limitations to *DIS*. Like *HIS*, the use of *DIS* cannot compensate for systematic bias such as selection bias, allocation bias, reporting bias, collateral intervention bias, and time-lag bias [5, 23–28]. Furthermore, *DIS* is always greater than or equal to *HIS*, which underlines that caution is needed when interpreting a meta-analysis before the required *DIS* has been reached [2–8].

The calculation of *HIS* and *DIS* may seem to contrast with the *SS* calculation in a single trial, where no adjustment for heterogeneity or diversity is performed. However, Fedorov and Jones [29] advocated the necessity of adjusting *SS* for heterogeneity arising from different accrual numbers among centres in a multi-centre trial in order to avoid an underpowered trial. If such an adjustment seems fair for a single trial, it also appears appropriate for a meta-analysis of several trials. As an example, we calculated the *DIS* to be 14,164 participants for a meta-analysis of the effect on mortality of perioperative beta-blockade in patients undergoing non-cardiac surgery (Table 2). This may explain why a recent meta-analysis of seven randomised trials with low risk of bias, including 11,862 participants, indicates, but does not yet convincingly show, firm evidence of harm [30]. The actual accrual of 11,862 participants is beyond the *HIS* of 9,726 participants but below the *DIS* of 14,164 participants, and the meta-analysis [30] may therefore still be inconclusive. This suggests that *HIS* is not a sufficiently adjusted meta-analytic information size. Furthermore, the example raises the important question of the stability of *I*^{2} and *D*^{2} beyond a certain number of trials in a meta-analysis, as *I*^{2} was 13.4% after 2,211 participants [19] and has now doubled to *I*^{2} = 27.0% after 11,862 accrued participants in the meta-analysis of seven trials with low risk of bias [30]. The assumption of *I*^{2} and *D*^{2} becoming stable after five trials is probably wrong and illustrates the moving-target concept, which we have to face when doing cumulative meta-analysis as evidence accumulates. Although a moving target may cause conceptual problems, a moving target may be better than no target at all.

The assumption that the *IS* required for a reliable and conclusive fixed-effect meta-analysis should be as large as the *SS* of a single well-powered randomised clinical trial to detect or reject an anticipated intervention effect [2–4] may not be necessary in some instances. The statistical information (*SINF*) required in a meta-analysis could ultimately be expressed as *SINF* = (*Z*_{1-α/2} + *Z*_{1-β})^{2}/*δ*^{2} [31], with *δ* being the effect size. As the statistical information is the reciprocal of the variance of the pooled estimate in the meta-analysis, say 1/*v*, it follows that in meta-analyses with 1/*v* ≥ (*Z*_{1-α/2} + *Z*_{1-β})^{2}/*δ*^{2}, the amount of information may eventually suffice to detect, or reject, an effect size of *δ* without *HIS* or *DIS* having yet been reached. This criterion, however, is not a simple one and may only be fulfilled occasionally. Furthermore, it seems impossible to forecast, or even get an idea of, the magnitude of *v* at the beginning of a series of trials, as well as along the course of the trials being performed.
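The criterion can be sketched numerically (standard library only; the values of α, β, *δ*, and the pooled variance are hypothetical inputs):

```python
from statistics import NormalDist

def required_sinf(alpha, beta, delta):
    """Required statistical information: (Z_{1-a/2} + Z_{1-b})² / d²."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(1 - beta)
    return z * z / delta ** 2

def sinf_sufficient(pooled_variance, alpha, beta, delta):
    """Has the accumulated information 1/v reached the required SINF?"""
    return 1.0 / pooled_variance >= required_sinf(alpha, beta, delta)

print(round(required_sinf(0.05, 0.20, 0.10), 1))  # 784.9
print(sinf_sufficient(0.0010, 0.05, 0.20, 0.10))  # True: 1/v = 1000
```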

*D*^{2} offers a number of useful properties compared to *I*^{2}. In contrast to *I*^{2}, *D*^{2} reflects the relative variance expansion due to the between-trial variance estimate *τ*^{2} without assuming an estimate of a 'typical' sampling error *σ*^{2}. *D*^{2} is reduced when the estimate of *τ*^{2} is reduced, even for the same set of trials. When diversity is larger than inconsistency, this may indicate that the total variability among trials in the meta-analysis is even greater than suggested by *I*^{2}. *I*^{2} is intrinsically influenced by a potentially overestimated 'typical' sampling error (*σ*^{2}), thereby underestimating the relative impact of the between-trial variance and inherently placing less weight on large trials with many events. On the other hand, a 'typical' sampling error originating from the required information size calculation could be deduced from *D*^{2}. We would, however, advise great caution in such an attempt. The difference (*D*^{2} - *I*^{2}) reflects the difference between the moment-based and the information size-based 'typical' sampling error estimates. The calculation of diversity and of (*D*^{2} - *I*^{2}) may serve as supplementary tools in the assessment of variability in a meta-analysis. *D*^{2} is a transformation of the ratio of the variances from the random-effects model and the fixed-effect model. This variance ratio was a candidate for the quantification of heterogeneity [10].
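The contrast can be illustrated on toy data (hypothetical effect estimates and variances): *I*^{2} computed from Cochran's *Q*, and *D*^{2} from the fixed- and random-effects pooled variances with a DerSimonian-Laird *τ*^{2}.

```python
# Toy data (hypothetical): effect estimates y_i with sampling variances v_i.
ys = [0.30, 0.10, 0.45, 0.05]
vs = [0.010, 0.040, 0.015, 0.050]

w = [1 / v for v in vs]                               # fixed-effect weights
mu_f = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
q = sum(wi * (yi - mu_f) ** 2 for wi, yi in zip(w, ys))
df = len(ys) - 1
i2 = max(0.0, (q - df) / q)                           # inconsistency

# DerSimonian-Laird moment-based between-trial variance
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

w_r = [1 / (v + tau2) for v in vs]                    # random-effects weights
v_fixed, v_random = 1 / sum(w), 1 / sum(w_r)
d2 = (v_random - v_fixed) / v_random                  # diversity

print(round(i2, 3), round(d2, 3))
print(d2 >= i2)  # in line with D² >= I² (section 3.1)
```

On these data diversity exceeds inconsistency, and the gap widens as the trial variances become more unequal.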

*D*^{2} may vary within the same set of trials when different between-trial variance estimators *τ*^{2} are used in the corresponding random-effects model. In contrast, *I*^{2} is intimately linked to the specific between-trial variance estimator in the DerSimonian-Laird random-effects model, as *I*^{2} by definition is (*Q* - (*k* - 1))/*Q*, where *k* is the number of trials [10], and *Q* is used to estimate a moment-based between-trial variance *τ*^{2} [15]. The interpretation of heterogeneity is obviously dependent on the variance estimator *τ*^{2} as well. An estimate of *τ*^{2} is a prerequisite for any random-effects model, and the actual estimated value, together with the way *τ*^{2} is incorporated into the model, actually constitutes the model [32]. Therefore, a quantification of the between-trial variability, rather than of the sampling error, which is independent of the specific random-effects model is impossible, as it is constituted by the between-trial variance estimator [32]. *D*^{2} adapts automatically to different between-trial variance estimators [32], while *I*^{2} is linked to the estimator from the DerSimonian-Laird random-effects model.

*D*^{2} may have some limitations too. The derivation of *D*^{2} depends on the assumption that the point estimates of the intervention effect in the fixed-effect model and in the random-effects model are approximately equal. Meta-analyses with a considerable difference between the point estimate in the fixed-effect model and the point estimate in the random-effects model represent specific problems. Probably more information is needed when *μ*_{F} >> *μ*_{R}, since the required *IS* is inversely proportional to the square of the anticipated intervention effect and therefore yields higher values for *N*_{R} under the assumption of a constant variance ratio. On the other hand, less information may be needed when *μ*_{F} << *μ*_{R}, since the calculation then yields lower values for *N*_{R} under the assumption of a constant variance ratio. However, examples with considerable differences between the point estimates in a fixed- and a random-effects model presumably represent meta-analyses of interventions with considerable between-trial variance due to small-trial bias. The meta-analysis of the effect of magnesium in patients with myocardial infarction is such an example [21], where one large trial totally dominates the result in the fixed-effect model but is unduly down-weighted in the random-effects model. Care should be taken when interpreting the random-effects model in such a situation, regardless of any calculated information size. Further, it seems impossible to foresee *a priori* the size of the difference between *μ*_{F} and *μ*_{R}, and the calculation may then degenerate into an exclusively post hoc analysis.

Second, *D*^{2}, though potentially unbiased with respect to information size calculations, could come with a greater variance than *I*^{2} when both are calculated in the same set of meta-analyses. This situation would present a potentially unfavourable 'bias-variance trade-off', but an estimate of its magnitude will have to await simulation studies addressing the issue.

It may seem an advantage that *I*^{2} is always reported in meta-analyses and is therefore readily available to adjust the expected information size. On the other hand, *D*^{2} is also calculable for meta-analyses of ratio measures (e.g., RR or OR) as *D*^{2} = 1 - (width_{F}/width_{R})^{2}, where width_{F} and width_{R} refer to the widths of the confidence intervals for the logarithmically transformed measures in the fixed-effect and the random-effects models, respectively.
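A minimal sketch of this computation, assuming the pooled variance is proportional to the squared log-scale confidence interval width, so that *D*^{2} = 1 - (width_{F}/width_{R})^{2} (the interval limits are hypothetical):

```python
import math

def d2_from_ci_widths(lo_f, hi_f, lo_r, hi_r):
    """D² from fixed- and random-effects confidence intervals for a
    ratio measure, taken on the log scale: D² = 1 - (w_F / w_R)²."""
    w_f = math.log(hi_f) - math.log(lo_f)
    w_r = math.log(hi_r) - math.log(lo_r)
    return 1 - (w_f / w_r) ** 2

# Hypothetical risk-ratio intervals: fixed 0.70-0.90, random 0.65-0.97
print(round(d2_from_ci_widths(0.70, 0.90, 0.65, 0.97), 3))  # 0.606
```

When the two intervals coincide (no variance expansion), the function returns 0, consistent with *D*^{2} = 0 in the absence of between-trial variance.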

Last, but not least, the decision to pool intervention effect estimates in a meta-analysis should be based on the clinical relevance of any inconsistency or diversity present. The between-trial variance, *τ*^{2}, rather than *I*^{2} or *D*^{2}, may be the appropriate measure for this purpose [33–35].

The estimation of a required *IS* for a meta-analysis to detect or reject an anticipated intervention effect on a binary outcome measure should be based on reasonable assumptions. Accordingly, it may not be wise to assume absence of heterogeneity in a meta-analysis unless the intervention effect is anticipated to be zero [36, 37]. On the contrary, it may be wise to anticipate moderate to substantial heterogeneity (e.g., more than 50%) in an *a priori* adjustment of the required *IS* [37]. The concept of diversity points to the fact that an adjustment based on experience with inconsistency would result in underestimated heterogeneity and hence an underestimated required *IS* [37]. Alternatively, for a future updated meta-analysis to become conclusive, we may apply the actual estimated heterogeneity of the available trials as the best available basis for the adjustment of the required *IS*. *D*^{2} seems more capable than *I*^{2} of obtaining such an adequate adjustment.