Meta-analysis of continuous outcomes traditionally uses mean difference (MD) or standardized mean difference (SMD; mean difference in pooled standard deviation (SD) units). We recently used an alternative ratio of mean values (RoM) method, calculating RoM for each study and estimating its variance by the delta method. SMD and RoM allow pooling of outcomes expressed in different units and comparisons of effect sizes across interventions, but RoM interpretation does not require knowledge of the pooled SD, a quantity generally unknown to clinicians.
Objectives and methods
To evaluate performance characteristics of MD, SMD and RoM using simulated data sets and representative parameters.
MD was relatively bias-free. SMD exhibited bias (~5%) towards no effect in scenarios with few patients per trial (n = 10). RoM was bias-free except for some scenarios with broad distributions (SD 70% of mean value) and medium-to-large effect sizes (0.5–0.8 pooled SD units), for which bias ranged from -4 to 2% (negative sign denotes bias towards no effect). Coverage was as expected for all effect measures in all scenarios with minimal bias. RoM scenarios with bias towards no effect exceeding 1.5% demonstrated lower coverage of the 95% confidence interval than MD (89–92% vs. 92–94%). Statistical power was similar. Compared to MD, simulated heterogeneity estimates for SMD and RoM were lower in scenarios with bias because of decreased weighting of extreme values. Otherwise, heterogeneity was similar among methods.
Simulation suggests that RoM exhibits comparable performance characteristics to MD and SMD. Favourable statistical properties and potentially simplified clinical interpretation justify the ratio of means method as an option for pooling continuous outcomes.
Meta-analysis is a method of statistically combining results of similar studies. For binary outcome variables, both difference and ratio methods are commonly used. For each study, the risk difference is the difference in proportions of patients experiencing the outcome of interest between the experimental and control groups, the risk ratio is the ratio of these proportions, and the odds ratio is the ratio of the odds. Meta-analytic techniques are used to combine each study's effect measure to generate a pooled effect measure. Standard meta-analytic procedures for each of these effect measures also estimate heterogeneity, which is the variability in treatment effects of individual trials beyond that expected by chance. Each effect measure (risk difference, risk ratio, odds ratio) has advantages and disadvantages in terms of consistency, mathematical properties, and ease of interpretation, implying that none is universally optimal.
In contrast, for continuous outcome variables, only difference methods are commonly used for group comparison studies. If the outcome of interest is measured in identical units across trials, then the effect measure for each trial is the difference in means, and the pooled effect measure is the mean difference (MD), which more accurately should be described as the weighted mean of mean differences. If the outcome of interest is measured in different units, then each trial's effect measure is the difference in mean values divided by the pooled standard deviation of the two groups, and the pooled effect measure is the standardized mean difference (SMD), which more accurately should be described as the weighted mean of standardized mean differences. Normalizing the differences using the standard deviation allows pooling of such results, in addition to allowing comparison of effect sizes across unrelated interventions. By convention, SMDs of 0.2, 0.5, and 0.8 are considered "small", "medium", and "large" effect sizes, respectively. When trials in meta-analyses are weighted by the inverse of the variance of the effect measure (the weighting scheme generally used for MD and SMD), the pooled SMD has the unfavorable statistical property of negative bias (i.e. towards the null value) [5, 6]. Alternative methods of estimating the variance of individual trial SMDs used in the inverse variance method have been proposed to minimize this bias [5, 6].
In principle, meta-analysts could also use ratio methods to analyze continuous outcomes, by calculating a ratio of mean values instead of a difference. Since the ratio is unitless, this calculation can be carried out regardless of the specific units used in individual trials. Moreover, as with SMD, a ratio can be used to combine related but different outcomes (e.g. quality of life scales). We have recently used this Ratio of Means (RoM) method in meta-analyses [7–9] in which we estimated the variance of this ratio using the delta method. For this method, each individual study RoM is converted to its natural logarithm before being pooled, and the pooled result is then back transformed, similar to odds and risk ratio calculations used for binary outcomes. Table 1 presents the pooled results for the continuous variables from the meta-analysis of low-dose dopamine for renal dysfunction, analyzed using mean difference methods and the RoM method, in addition to heterogeneity expressed using the I2 measure. (I2 is the percentage of total variation in results across studies due to heterogeneity rather than chance [11, 12].) Table 1 shows similar results among the three methods. The point estimates are similar in direction (i.e. a positive mean difference or standardized mean difference corresponds to a RoM greater than one, while a negative mean difference corresponds to a RoM less than one). The confidence intervals result in similar p-values for statistically significant increases or decreases for each of these parameters. Finally, heterogeneity is similar.
Given the similarity of these results, the objective of this current study was to test the hypothesis that MD, SMD, and RoM methods exhibit comparable performance characteristics in terms of bias, coverage and statistical power, using simulated data sets with a range of parameters commonly encountered in meta-analyses.
The RoM Effect Measure
For mean difference meta-analysis, one calculates a difference in mean values between the experimental and control groups for each study. (A review of the inverse-variance weighted fixed and random effects models and calculation of the point estimates and variances for MD and SMD using standard methods [including a correction factor for small samples for SMD], can be found in the Appendix). Instead of calculating a difference in mean values between the experimental and control groups, one can calculate a ratio of mean values. The following uses the natural logarithm scale to carry out such calculations, similar to statistical procedures for binary effect measures (risk ratio and odds ratio), due to its desirable statistical properties.
For a study reporting a continuous outcome, let the mean, standard deviation, and number of patients be denoted by meanexp, sdexp, and nexp, respectively, in the experimental group and meancontr, sdcontr, and ncontr, respectively, in the control group. The RoM is estimated as meanexp/meancontr, and the variance (Var) of its natural logarithm is estimated as follows:

Var[ln(RoM)] = sdexp²/(nexp × meanexp²) + sdcontr²/(ncontr × meancontr²)
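As a concrete illustration, the point estimate and delta-method variance above can be computed as follows (a minimal Python sketch; the function name and example values are ours, not from the paper):

```python
import math

def log_rom_and_var(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """ln(RoM) and its first-order delta-method variance:

    Var[ln(RoM)] ~= sd_exp^2/(n_exp*mean_exp^2) + sd_contr^2/(n_contr*mean_contr^2)

    Both group means are assumed positive.
    """
    log_rom = math.log(mean_exp / mean_contr)
    var = (sd_exp**2 / (n_exp * mean_exp**2)
           + sd_contr**2 / (n_contr * mean_contr**2))
    return log_rom, var

# Example: experimental mean 120 (SD 40, n = 100) vs control mean 100 (SD 40, n = 100)
lr, v = log_rom_and_var(120, 40, 100, 100, 40, 100)
```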
The natural logarithm transformed ratios are aggregated across studies using the generalized inverse variance method described in the Appendix. The pooled transformed ratio is then back transformed to obtain a pooled ratio and 95% confidence interval (CI), as follows:

pooled RoM = exp[pooled ln(RoM)], with 95% CI = exp[pooled ln(RoM) ± 1.96 × SE(pooled ln(RoM))]
Log transformation of the ratio of mean values, a non-normally distributed function, allows this approximation of the 95% confidence interval of this approximately normally distributed transformed function. This approach is similar to that applied to other ratio methods such as OR and RR, used for binary group comparison studies.
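The pooling and back-transformation steps can be sketched as follows (fixed effects shown for brevity; the hypothetical ln(RoM) values and variances are illustrative only):

```python
import math

def pool_log_rom_fixed(log_roms, variances, z=1.96):
    """Inverse-variance fixed-effects pooling on the ln scale, then
    back-transformation to a pooled RoM with a 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_roms)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * se),
            math.exp(pooled_log + z * se))

# Three hypothetical trials (ln(RoM), variance of ln(RoM)):
rom, lo, hi = pool_log_rom_fixed([0.10, 0.18, 0.05], [0.004, 0.010, 0.006])
```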
As the ratio of means method is unitless, this method can be used irrespective of the units used in trial outcome measures. Using the delta method limited to first order terms results in a straightforward formula to estimate the variance of the ratio. Second order terms would be raised to the fourth power and are not included, as they would increase the variance only marginally. For example, even choosing simulation parameters that maximized the contribution of these second order terms (ratio of the standard deviation to the mean equal to 0.7, and n = 10 patients per trial arm [see below]) would increase the variance estimate by less than 2.5%.
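The stated bound can be checked with a back-of-the-envelope calculation. Assuming the second-order contribution per group is approximately sd⁴/(2n²mean⁴) (our approximation for a normally distributed sample mean), its size relative to the first-order term sd²/(n·mean²) is sd²/(2n·mean²):

```python
# Relative size of the (assumed) second-order delta-method term versus the
# first-order term: the ratio reduces to (sd/mean)^2 / (2n).
sd_over_mean = 0.7  # broadest distribution simulated
n = 10              # smallest trial arm simulated
relative_increase = sd_over_mean ** 2 / (2 * n)
# 0.49 / 20, i.e. just under 2.5%, consistent with the text
```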
Design of the Simulation Study
The parameters and their assigned values used to simulate continuous variable meta-analysis data sets for the individual scenarios are shown in Table 2. The values were chosen to be representative of typical meta-analyses exhibiting a range of standard deviations, number of trials, number of participants per trial, effect sizes, and heterogeneity. The standard deviation of the control group, expressed as a percentage of the mean value in the control group, was varied between 10% and 70% to reflect a narrow to broad distribution around the mean value. The standard deviations of the control and experimental groups were assumed to be equal for all simulations. The number of trials (k) ranged from 5 to 30. The number of patients per trial arm (n) was set to either 10 or 100 in each of the experimental and control groups, and all trials within each simulation were assumed to have the same number of patients. The effect size, expressed in pooled standard deviation units (i.e. SMD), was set at 0.2, 0.5, and 0.8.
For each simulated scenario, k simulated study means and standard deviations were calculated from a collection of n individual values randomly sampled from a normal distribution. This was done independently for the control and experimental groups. For the control group the normal distribution from which values were randomly sampled had a mean value set to 100, resulting in a standard deviation of 10, 40, or 70. For the experimental group the normal distribution from which values were randomly sampled had a mean value of [100 + (effect size) × (standard deviation)] and the same standard deviation as the control group. Using the simulated study mean values and standard deviations, meta-analysis was carried out using MD, SMD, and RoM, with inverse variance weighting and a random effects model as described in the Appendix. With the parameters described above, the expected MD = (effect size) × (standard deviation), the expected SMD = effect size, and the expected RoM = 1 + [(effect size) × (standard deviation)/(mean value in control group[= 100])], where effect size varies as 0.2, 0.5 and 0.8, and the standard deviation varies as 10, 40, and 70.
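The data-generation step described above might be sketched as follows (function names and the default parameter values are illustrative choices within the ranges in Table 2):

```python
import random
import statistics

def simulate_trial(n, mean, sd, rng):
    """Sample n patient values and return the sample mean and sample SD."""
    values = [rng.gauss(mean, sd) for _ in range(n)]
    return statistics.mean(values), statistics.stdev(values)

def simulate_scenario(k=10, n=100, effect_size=0.5, sd=40.0, seed=1):
    """One simulated meta-analysis: k trials, control mean 100,
    experimental mean 100 + effect_size * sd, equal SDs in both arms."""
    rng = random.Random(seed)
    trials = []
    for _ in range(k):
        mean_c, sd_c = simulate_trial(n, 100.0, sd, rng)
        mean_e, sd_e = simulate_trial(n, 100.0 + effect_size * sd, sd, rng)
        trials.append((mean_e, sd_e, n, mean_c, sd_c, n))
    return trials

trials = simulate_scenario()
```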
Heterogeneity for each scenario was introduced by setting τ = 0.5 standard deviation units. This was achieved by introducing an additional study-specific standard deviation equal to 0.5/√2 standard deviation units to both the experimental and the control groups, since the study-specific standard deviation of the difference between experimental and control groups is given by √[(0.5/√2)² + (0.5/√2)²] = 0.5 standard deviation units. In other words, study-specific variance was added to experimental and control group means but the baseline difference and ratio in mean values was held constant. Since a given degree of result heterogeneity may be reflected differently in the difference methods (MD and SMD) compared to the ratio method (RoM), heterogeneity was added at the level of the individual mean values rather than the level of the treatment effects to ensure that the degree of heterogeneity added was comparable between the three methods. Heterogeneity of each meta-analysis scenario is presented using I2. Since I2 = τ2/(τ2 + s2), where s2 is the variance of the effect measure, as described in the Appendix, the expected value for I2 for τ = 0.5 can be calculated to be 56% when n = 10 patients per trial arm, corresponding to the introduction of a moderate (i.e. I2 = 50–75%) degree of heterogeneity, and 93% when n = 100 patients per trial arm, corresponding to a high (i.e. I2 > 75%) degree of heterogeneity [11, 12].
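The expected I2 values quoted above can be reproduced with a short calculation, assuming the per-trial variance of the effect measure in standard deviation units is approximately 1/nexp + 1/ncontr = 2/n (ignoring smaller terms):

```python
# Expected I2 = tau^2 / (tau^2 + s^2), with tau = 0.5 SD units.
# Sketch assumption: per-trial effect-measure variance in SD units
# is approximately 2/n, ignoring smaller terms.
tau2 = 0.5 ** 2
expected_i2 = {}
for n in (10, 100):
    s2 = 2.0 / n
    expected_i2[n] = 100.0 * tau2 / (tau2 + s2)
# expected_i2[10] is roughly 56%, expected_i2[100] roughly 93%
```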
The baseline scenarios assumed equal numbers of participants in both the experimental and control arms and were constructed by randomly selecting data points from normally distributed data. Separate sensitivity analyses were also carried out to determine 1) the effect of unequal numbers of participants (chosen to have a 2:1 and 1:2 experimental:control arm ratio but keeping the total number of participants constant (i.e. 14:6 instead of 10:10 and 134:66 instead of 100:100)) and 2) the effect of selecting the data points from an underlying skewed distribution. The skewed distribution was empirically constructed by mixing a combination of 3 normal distributions with identical standard deviations (0.24) centered at 0.84, 1.42 and 1.92 and weighted 77%, 17%, and 6% respectively in the overall mixed skewed distribution. This created a graphical distribution appearing markedly skewed on visual inspection with an overall mean of unity and overall standard deviation similar to that of the middle normally distributed data scenario (i.e. 40% of the control mean value), but skewness (third standardized moment about the mean) of 0.88.
For each scenario, data points were generated and analyzed 10,000 times and performance characteristics of each effect measure were assessed. These consisted of bias (expressed as a percentage of the true parameter value, directed away or towards the null value [zero for MD and SMD, and one for RoM]), coverage (of the 95% confidence interval of the simulated result, i.e. the percentage of time that the true parameter value falls within the 95% confidence interval of the simulated result), statistical power (the percentage of time that the 95% confidence interval of the simulated result yields a significant treatment effect, by excluding zero for MD and SMD or one for RoM), and heterogeneity (expressed as I2). Simulations were programmed and carried out using SAS (version 8.2; SAS Institute, Cary, NC).
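The performance characteristics defined above could be computed from the 10,000 replicates along these lines (a sketch; the function and argument names are ours):

```python
def performance(estimates, cis, true_value, null_value):
    """Summarize repeated simulations of one scenario.

    estimates: pooled point estimates from each replicate
    cis: (lower, upper) 95% confidence intervals from each replicate
    Returns bias (% of true value), coverage (%), and power (%).
    """
    m = len(estimates)
    bias_pct = 100.0 * (sum(estimates) / m - true_value) / true_value
    coverage = 100.0 * sum(lo <= true_value <= hi for lo, hi in cis) / m
    power = 100.0 * sum(not (lo <= null_value <= hi) for lo, hi in cis) / m
    return bias_pct, coverage, power
```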
Table 3 presents simulation results for the baseline scenarios (standard deviation 40% of the control mean, equal numbers of patients in the control and experimental groups, underlying normal distribution of the individual data points), for each combination of effect size (0.2, 0.5, and 0.8 standard deviation units), number of patients (10 and 100 control and experimental patients per trial), and number of trials (5, 10, and 30). Results for similar scenarios except with lower (10% of the control mean) or higher (70% of the control mean) standard deviations are shown in Tables 4 and 5. The skewed distribution results were similar (Table 6).
The MD method exhibits minimal bias (less than 0.5%) in almost all scenarios. In contrast, there is one principal source of bias for the SMD method and two for the RoM method.
SMD Bias Towards No Effect with Smaller Trials
SMD is biased towards zero or no effect, with the bias more prominent when the number of patients per study is small. Table 3 shows this bias to be -4 to -6% (i.e. towards no effect) in the baseline scenario with 10 patients per trial, regardless of the number of trials. The bias decreases in the 100-patient per trial scenarios. The lower weighting of extreme values (i.e. values far from no effect [zero]) in the SMD method also results in decreased heterogeneity (I2) in the scenarios with 10 patients per trial, where the bias is largest (discussed in the heterogeneity section below). Sampling variance alone results in bias toward zero, but this bias is even larger when heterogeneity is present, since heterogeneity further increases the dispersion (or the effective variance) of the results. These findings are consistent with theoretical considerations (see Appendix).
In contrast, the RoM bias depends on the relative effects of two competing sources of bias. The first is a negative bias towards unity or no effect due to properties of the variance of ln(RoM) and is most pronounced when the number of patients per trial is small. The second is a bias away from unity or no effect occurring when heterogeneity is present, due to properties of RoM. Although bias from both sources is absent or less than 0.5% in all scenarios with 100 patients per trial and no heterogeneity, one or both sources of bias can be significant in other scenarios. These are described in more detail below.
RoM Bias Towards No Effect with Smaller Trials
To understand the bias towards unity or no effect, one must consider the factors influencing the variance of ln(RoM) described in the Methods. As in the scenarios studied with equal standard deviations in the control and experimental groups, consider RoM > 1, where the experimental mean is greater than the control mean. In this situation the contribution of the experimental group's relative error to the variance of ln(RoM) is smaller than that of the control group's relative error. As RoM increases, either the experimental mean value (meanexp) increases for a given control mean value (meancontr) or meancontr falls for a given meanexp. In the former case, the term (1/meanexp)² falls and the variance of ln(RoM) becomes relatively smaller, compared to lower RoM values. In the latter case, the term (1/meancontr)² increases and the variance of ln(RoM) becomes relatively larger, compared to lower RoM values. Because of the different relative error term contributions discussed above, the decrease in the experimental group relative error term determined by (1/meanexp)² is smaller than the increase in the control group relative error term determined by (1/meancontr)². Thus, when these effects are averaged, the overall effect is that higher RoM values have a higher variance and therefore receive relatively lower weighting in the inverse variance weighted meta-analysis, leading to bias towards unity or no effect. This bias is accentuated by i) larger standard deviations, ii) higher heterogeneity (due to larger effective standard deviations), and iii) smaller trials. The bias is best demonstrated in the scenarios without heterogeneity in which the standard deviation is 70% of the mean control value, the number of patients per trial is 10 and the effect size is moderate to large, as shown in Table 5.
(This bias is also present in the scenarios with heterogeneity and 10 patients per trial shown in Table 5; however, the overall bias in these scenarios is due to the combined effect of both this bias towards unity discussed in this section and a bias away from unity discussed in the next section.) Under these conditions, the magnitude of this bias ranges up to 2–3%. Due to its inverse dependence on study size it decreases to less than 0.5% in the scenarios without heterogeneity enrolling 100 patients per trial. The bias is also accentuated in scenarios with either larger RoM or a ratio of the number of patients in the experimental group to the number of patients in the control group (nexp/ncontr) > 1. This occurs because as either meanexp increases relative to meancontr or nexp increases relative to ncontr, the term 1/(nexp × meanexp²) decreases and changes in the control group relative error predominate to an even greater extent. For example, compare the results from scenarios without heterogeneity shown in Table 3 with a 1:1 experimental to control patient ratio to those shown in Table 7 with a 2:1 experimental to control patient ratio. In contrast, increasing ncontr relative to nexp decreases the contribution of the control group's relative error. This decreases the magnitude of the bias towards unity and can even change the direction of the bias from negative to positive (i.e. to a bias away from unity or no effect) if the ratio ncontr/nexp is increased to a value greater than meanexp²/meancontr² (RoM²). This is illustrated in Table 8 where ncontr/nexp = 2 and RoM² < 2.
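The averaging argument above can be illustrated numerically. Using the first-order variance formula with hypothetical worst-case parameters (SD 70, n = 10 per arm, means near 100), compare the variance decrease when RoM = 1.25 arises from a higher experimental mean with the variance increase when it arises from a lower control mean:

```python
def var_log_rom(mean_exp, mean_contr, sd=70.0, n=10):
    """First-order Var[ln(RoM)] with equal SDs and equal arm sizes."""
    return sd**2 / (n * mean_exp**2) + sd**2 / (n * mean_contr**2)

baseline = var_log_rom(100.0, 100.0)   # RoM = 1
up = var_log_rom(125.0, 100.0)         # RoM = 1.25 via a higher meanexp
down = var_log_rom(100.0, 80.0)        # RoM = 1.25 via a lower meancontr
decrease = baseline - up    # variance falls when meanexp rises
increase = down - baseline  # variance rises when meancontr falls
# increase exceeds decrease, so averaged over sampling variation, trials
# with larger RoM carry larger variance and smaller inverse-variance weight
```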
RoM Bias Away from No Effect Due to Heterogeneity
The second RoM bias is a bias away from unity (or no effect) that occurs only in the scenarios with heterogeneity and is due to the effects of heterogeneity on the RoM. It is most apparent in the scenarios with heterogeneity with higher standard deviations (70% of the mean control value as shown in Table 5) when the number of patients per trial is 100, since in the 100-patient per trial scenarios the simultaneously present bias towards unity (or no effect) discussed above is less than 0.5%. (In the scenarios with heterogeneity and 10 patients per trial shown in Table 5, the overall bias is due to the combined effect of both the bias towards unity discussed in the previous section and the bias away from unity discussed in this section.) This bias away from unity ranges up to 1–2% in the scenarios with 100 patients per trial and occurs for the following reason. Heterogeneity is introduced in the simulations by shifting individual trial experimental and control mean values upwards or downwards. As in the scenarios presented, for RoM>1, in trials where both the experimental and control means are shifted upwards, the RoM value is decreased, and in trials when the means are both shifted downwards, the RoM value is increased. For upward and downward shifts of equal magnitude, the increase in the RoM for downward shifts is greater than the decrease in the RoM for upward shifts. This results in a pooled RoM value that is greater in the presence of heterogeneity and results in the bias away from unity. Table 5 demonstrates that this bias is higher for increasing effect sizes (i.e. larger ratios). Comparing the results where the standard deviation is higher (70% of the mean control value, Table 5) to those where the standard deviation is lower (e.g. 40% of the mean control value, Table 3) demonstrates that this bias increases with increasing standard deviations due to an increased range of individual trial RoM estimates.
Coverage of the 95% Confidence Interval
The proportion of the scenarios for which the 95% confidence interval contains the true effect size is relatively similar among the three methods for most scenarios. The coverage is close to 95%, as expected, for the scenarios with no heterogeneity, but decreases when heterogeneity is introduced. The lowest coverage of 87–88% is equally low with all three methods and occurs when heterogeneity is present with 5 trials and 100 patients per trial arm. This low coverage occurs because, with the degree of heterogeneity in these scenarios (I2 = 92–93%), the mean values can be widely variable. With only 5 trials, the pooled estimate can be far from the true value, and because of the large number of patients per trial the confidence intervals for the individual trials are relatively narrow, resulting in missed coverage of the true value. Increasing the number of patients to 1000 per trial arm still results in coverage rates between 87–88% (results not shown), because the degree of missed coverage is dominated by the degree of heterogeneity, and the increase in I2 from 92–93% in the scenarios with 100 patients per trial arm to 99% in the scenarios with 1000 patients per trial arm is relatively small.
Statistical Power to Detect a Significant Treatment Effect
As expected, statistical power (the proportion of scenarios yielding a significant treatment effect) increases with increasing effect size, number of patients, and number of trials, and decreases with more heterogeneity. Power also decreases with imbalanced patient allocation between groups (Tables 7 and 8) because confidence intervals are wider compared to balanced allocation scenarios. Statistical power is similar among the three methods in most scenarios. In scenarios where SMD or RoM are biased towards no effect, the power decreases compared to MD, and in the scenarios where RoM is biased away from unity, power increases compared to MD. Overall, the effect of these biases is small so that the proportion of scenarios yielding significant treatment effects is within 5 percentage points for almost all scenarios.
Heterogeneity
For the scenarios with heterogeneity, I2 is around 55–60% and greater than 90% for scenarios with n = 10 and n = 100 patients per trial, respectively, close to the expected values. In scenarios where SMD and RoM are biased, I2 is lower compared to MD, which is relatively free of bias (for example, scenarios with 10 patients per trial in Tables 3, 4, 5 [SMD] or Table 5 [RoM]). This occurs because bias decreases the weighting of values greatly deviating from no effect (zero for SMD and one for RoM), decreasing I2. In the scenarios exhibiting less bias, I2 among all methods is similar (for example, scenarios with 100 patients per trial in Tables 3, 4, 5).
This study examines the use of a new effect measure for meta-analysis of continuous outcomes that we call the ratio of means (RoM). In this method, the ratio of the mean value in the experimental group to that of the control group is calculated. The natural logarithm-transformed delta method approximated to first order terms provides a straightforward equation estimating the variance of the RoM for each study. Using this formulation, we performed simulations to compare the performance of RoM to traditionally used difference of means methods, MD and SMD.
Each method performed well within the simulated parameters with low bias and high coverage, even in scenarios with moderate or high heterogeneity. The methods had similar statistical power to detect significant treatment effects. SMD exhibited some bias towards zero or no effect, especially with smaller studies, as previously described [5, 6], whereas MD was relatively bias-free. RoM gave acceptable results, with bias usually less than 2–3%.
As discussed earlier, SMD and RoM, unlike MD, allow pooling of studies expressed in different units and allow comparisons regarding relative effect sizes across different interventions. However, interpreting the results of a meta-analysis that uses SMD to determine the expected treatment effect in a specific patient population requires knowledge of the pooled standard deviation. This information is frequently unknown to clinicians. In contrast, interpretation of the results of a meta-analysis that uses RoM does not require knowledge of the pooled standard deviation and may permit clinicians to more readily estimate treatment effects for their patients. Moreover, RoM provides a result similar in form to a risk ratio, a binary effect measure preferred by clinicians. Thus, overall RoM may be easier for clinicians to interpret.
One limitation of RoM is that the mean values of the intervention and control groups must both be positive or negative, since the logarithm of a negative ratio is undefined. All simulations assumed positive mean values in both groups. This limitation may be less important for biological variables since these generally have positive values. Another related limitation inherent to ratio methods occurs for a normally distributed control variable with a very broad distribution (i.e. a significant proportion of expected negative values) or for a control variable with only positive values but a distribution heavily skewed towards zero. In both such distributions, a high proportion of the control mean values will be very small. These small values in the denominator of the RoM can result in a high proportion of exceedingly large ratios. This could generate results biased to higher values.
In addition to statistical properties, the choice between a difference or a ratio method for a specific situation should be determined by the biological effect of the treatment as either additive or relative for different control group values. Unfortunately, this information is frequently not known in advance. For binary outcomes, empirical comparisons between difference methods (risk difference) and ratio methods (risk ratio and odds ratio) using published meta-analyses have shown that the risk difference exhibits less consistency compared to ratio methods, resulting in increased heterogeneity [17, 18]. This suggests that for binary outcomes, relative differences are more preserved than absolute differences as baseline risk varies. It is unclear whether this is also the case for continuous outcomes, but such an empirical comparison between difference and ratio methods can be performed using our description of RoM.
The results of our meta-analytic simulation studies suggest that the RoM method compares favorably to MD and SMD in terms of bias, coverage, and statistical power. Similar to binary outcome analysis for which both ratio and difference methods are available, this straightforward method provides researchers the option of using a ratio method in addition to difference methods for analyzing continuous outcomes.
This appendix briefly reviews the inverse-variance weighted fixed and random effects models and the determination of the point estimate and variance for the continuous outcome measures, MD and SMD. The derivation of the point estimate and variance for RoM is described in the main text.
The inverse-variance weighted fixed and random effects models
In fixed effects meta-analysis, the individual studies' treatment effect measures are assumed to be distributed around the same value for each study. An estimate of this effect measure is obtained by taking a weighted average of individual studies' effect measures, weighting each study by the inverse of the variance of the effect measure used:

ΘIV(FE) = Σi=1..k wi Θi / Σi=1..k wi
where ΘIV(FE) is the inverse-variance weighted fixed effects pooled effect estimate for k total studies, Θi is the effect measure estimate for study i, and weighting wi = 1/variance(Θi).
In random effects meta-analysis, the individual studies' effect measures are assumed to vary around an overall average treatment effect. An estimate of the variance of this distribution of treatment effects, also known as between-study heterogeneity, τ2, is incorporated into the weights to produce a summary estimate:

ΘIV(RE) = Σi=1..k wi* Θi / Σi=1..k wi*
where wi* = 1/(1/wi + τ2) = 1/[variance(Θi) + τ2]. One estimate of τ2 uses the Q statistic:
Q = Σi=1..k wi × (Θi - ΘIV(FE))²
which has a χ2 distribution with k-1 degrees of freedom when τ2 = 0. An estimate of τ2 follows:

τ2 = max{0, [Q - (k-1)] / [Σi=1..k wi - (Σi=1..k wi²)/(Σi=1..k wi)]}
When there is no between-trial heterogeneity (τ2 = 0), the Q-statistic has the expected value of k-1, and the ratio Q/(k-1) has an expected value of unity. Under these circumstances the random effects model is equivalent to the fixed effects model. In situations with heterogeneity (τ2 > 0), Q/(k-1) > 1, and the proportion of variation in study-level estimates of treatment effect due to between-study heterogeneity can be expressed using the I2 measure, expressed as a percentage. I2 can be expressed in terms of Q and k-1, where I2 = [Q/(k-1) - 1]/[Q/(k-1)], which simplifies to [Q - (k-1)]/Q. I2 can also be expressed as τ2/(τ2 + s2), where s2 is the variance of the effect measure, and s2 = (k-1) Σi=1..k wi / [(Σi=1..k wi)² - Σi=1..k wi²]. When the variance (and thus weighting) of each trial is identical, as is the case with all the simulated scenarios in this study, the variance of the effect measure, s2, reduces to the variance of a single trial.
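The two expressions for I2 above can be written directly as code (a sketch using the notation of this appendix):

```python
def i2_from_q(q, k):
    """I2 (%) from Cochran's Q with k studies: [Q - (k-1)]/Q, floored at 0."""
    return max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0

def i2_from_tau2(tau2, weights):
    """I2 (%) as tau^2/(tau^2 + s^2), with s^2 from the fixed-effects weights."""
    k = len(weights)
    sw = sum(weights)
    s2 = (k - 1) * sw / (sw**2 - sum(w * w for w in weights))
    return 100.0 * tau2 / (tau2 + s2)
```

With equal weights, s2 in the second function reduces to the variance of a single trial, as noted above.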
Thus, to carry out a random effects meta-analysis requires calculating the effect measure and its variance for each study to be combined. First the fixed effects pooled effect measure is calculated, which is then used to estimate Q and τ2, and finally τ2 is used to estimate the random effects pooled effect measure and its variance.
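The sequence just described might be sketched as a minimal DerSimonian-Laird random-effects meta-analysis (our illustrative code, not the authors' SAS implementation):

```python
import math

def random_effects_meta(effects, variances, z=1.96):
    """DerSimonian-Laird random-effects pooling with inverse-variance weights.

    Returns the pooled effect, its 95% confidence interval, and tau^2.
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw       # fixed effects step
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, effects))   # Q statistic
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]              # random effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - z * se, pooled + z * se), tau2
```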
The Mean Difference Effect Measure
Using the measured values, the mean difference effect measure for each study (MDi) is estimated as:
MDi = meanexp - meancontr
with estimated variance,
Var(MDi) = Var(meanexp) + Var(meancontr) = (sdexp/√nexp)² + (sdcontr/√ncontr)²
where the subscripts "exp" and "contr" refer to the experimental and control groups, respectively, mean to the mean value, sd to the standard deviation, and n to the number of patients in each group. The individual effect measures and their variances are combined as described previously. All studies need to be reported in identical units for mean difference to be used as the effect measure.
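For example, the per-study calculation can be written as (illustrative values, identical units assumed):

```python
def mean_difference(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """MD_i and its variance for a single study (identical units required)."""
    md = mean_exp - mean_contr
    var = sd_exp**2 / n_exp + sd_contr**2 / n_contr
    return md, var

# Example: experimental 120 (SD 40, n = 100) vs control 100 (SD 40, n = 100)
md, var = mean_difference(120, 40, 100, 100, 40, 100)
```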
The Standardized Mean Difference Effect Measure
When the outcome is not measured in identical units across studies, one can use the standardized mean difference for each study (SMDi), in which the difference in the means is divided by the pooled standard deviation (sdpooled). The estimated value of SMDi is often multiplied by a correction factor to correct for bias away from zero (towards larger effect sizes) when the number of patients in each group is small, as follows:
SMDi = [1 - 3/(4N - 9)] × (meanexp - meancontr)/sdpooled
where sdpooled = √{[(nexp - 1)sdexp2 + (ncontr - 1)sdcontr2]/(N - 2)},
with estimated variance,
Var (SMDi) = N/(nexp ncontr) + SMDi2/[2(N - 3.94)]
The individual effect measures and their variances are combined as described previously. As SMDi takes values further from zero in either direction, Var(SMDi) increases, resulting in a smaller weighting for such trials; because studies with larger observed effects are systematically down-weighted, the pooled SMD is in general biased towards zero, or no effect [5, 6]. This bias towards zero is independent of the number of studies in the meta-analysis and decreases for larger N. Using a random effects model instead of a fixed effects model can reduce this bias because the between-study variance, estimated by τ2, tends to equalize the study weights. However, this advantage is offset by a lower Q (used to estimate τ2), which depends on the inverse of the variance for each study and is therefore also biased towards lower values. Alternate weighting methods have been proposed to address this bias [5, 6].
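A sketch of the SMD calculation follows, using one common parameterization (Hedges' small-sample correction factor 1 - 3/(4N - 9) and the usual variance approximation); the exact constants used by any given meta-analysis package may differ slightly, and the names below are illustrative:

```python
import math

def standardized_mean_difference(mean_exp, sd_exp, n_exp,
                                 mean_contr, sd_contr, n_contr):
    """SMDi with Hedges' small-sample correction and its approximate
    variance. Illustrative sketch of one common parameterization."""
    n_total = n_exp + n_contr
    # Pooled standard deviation of the experimental and control groups
    sd_pooled = math.sqrt(((n_exp - 1) * sd_exp ** 2 +
                           (n_contr - 1) * sd_contr ** 2) / (n_total - 2))
    # Correction factor shrinks the estimate towards zero for small N
    correction = 1.0 - 3.0 / (4.0 * n_total - 9.0)
    smd = correction * (mean_exp - mean_contr) / sd_pooled
    # Approximate variance; note it grows with SMD^2, which is the source
    # of the weighting bias towards zero discussed above
    var_smd = n_total / (n_exp * n_contr) + smd ** 2 / (2.0 * (n_total - 3.94))
    return smd, var_smd
```

Because var_smd contains an SMDi2 term, a trial that happens to observe a large effect receives a larger variance and hence a smaller inverse-variance weight, which is precisely the mechanism behind the bias towards no effect.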
List of abbreviations
i: counter ranging from 1 to the number of trials in each meta-analysis (k)
I2: heterogeneity measure
k: number of trials in each meta-analysis
meancontr: mean value in the control group
meanexp: mean value in the experimental group
n: number of experimental or number of control patients per trial
ncontr: number of control patients per trial
nexp: number of experimental patients per trial
N: total number of patients per trial (N = ncontr + nexp)
Q: Cochran's Q statistic for heterogeneity
s2: sampling variance of the effect measure
SMD: standardized mean difference
sdcontr: standard deviation in the control group
sdexp: standard deviation in the experimental group
sdpooled: pooled standard deviation of the control and experimental groups
τ2: variance due to heterogeneity
wi: weighting of study i
wi*: weighting of study i incorporating the variance due to heterogeneity
MDi, SMDi, or RoMi: effect measure estimate for study i
Egger M, Davey Smith G, Altman DG, editors: Systematic Reviews in Health Care: Meta-Analysis in Context. 2001, London: BMJ Books
Deeks JJ, Altman DG: Effect measures for meta-analysis of trials with binary outcomes. Systematic Reviews in Health Care: Meta-Analysis in Context. Edited by: Egger M, Davey Smith G, Altman DG. 2001, London: BMJ Books, 313-335.
Deeks JJ, Altman DG, Bradburn MJ: Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. Systematic Reviews in Health Care: Meta-Analysis in Context. Edited by: Egger M, Davey Smith G, Altman DG. 2001, London: BMJ Books, 285-312.
van den Noortgate W, Onghena P: Estimating the mean effect size in meta-analysis: bias, precision, and mean squared error of different weighting methods. Behavior Research Methods, Instruments, & Computers. 2003, 35: 504-511.
Sud S, Sud M, Friedrich JO, Adhikari NKJ: Effect of mechanical ventilation in the prone position on clinical outcomes in patients with acute hypoxemic respiratory failure: a systematic review and meta-analysis. CMAJ. 2008, 178: 1153-1161.
The study received no specific funding. JF is supported by a Clinician Scientist Award from the Canadian Institutes of Health Research (CIHR), and JB by CIHR Grant No. 84392. CIHR had no involvement in the conduct of this study.
Authors and Affiliations
Department of Medicine, University of Toronto, Toronto, Canada
Jan O Friedrich
Interdepartmental Division of Critical Care, University of Toronto, Toronto, Canada
Jan O Friedrich & Neill KJ Adhikari
Critical Care and Medicine Departments and Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada
Jan O Friedrich
Department of Critical Care Medicine and Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Canada
Neill KJ Adhikari
Department of Public Health Sciences, University of Toronto, Toronto, Canada
Child Health Evaluative Sciences, Hospital for Sick Children Research Institute, Toronto, Canada
The authors declare that they have no competing interests.
JOF was involved with the conception and design of the study, acquisition, analysis and interpretation of data and drafted the manuscript. NKJA was involved with the conception and design of the study, interpretation of data and critical revision of the manuscript for important intellectual content. JB was involved in the conception and design of the study, interpretation of data and critical revision of the manuscript for important intellectual content. All authors read and approved the final version of the manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Friedrich, J.O., Adhikari, N.K. & Beyene, J. The ratio of means method as an alternative to mean differences for analyzing continuous outcome variables in meta-analysis: A simulation study.
BMC Med Res Methodol 8, 32 (2008). https://doi.org/10.1186/1471-2288-8-32