The ratio of means method as an alternative to mean differences for analyzing continuous outcome variables in meta-analysis: A simulation study

Abstract

Background

Meta-analysis of continuous outcomes traditionally uses mean difference (MD) or standardized mean difference (SMD; mean difference in pooled standard deviation (SD) units). We recently used an alternative ratio of mean values (RoM) method, calculating RoM for each study and estimating its variance by the delta method. SMD and RoM allow pooling of outcomes expressed in different units and comparisons of effect sizes across interventions, but RoM interpretation does not require knowledge of the pooled SD, a quantity generally unknown to clinicians.

Objectives and methods

To evaluate performance characteristics of MD, SMD and RoM using simulated data sets and representative parameters.

Results

MD was relatively bias-free. SMD exhibited bias (~5%) towards no effect in scenarios with few patients per trial (n = 10). RoM was bias-free except for some scenarios with broad distributions (SD 70% of mean value) and medium-to-large effect sizes (0.5–0.8 pooled SD units), for which bias ranged from -4 to 2% (negative sign denotes bias towards no effect). Coverage was as expected for all effect measures in all scenarios with minimal bias. RoM scenarios with bias towards no effect exceeding 1.5% demonstrated lower coverage of the 95% confidence interval than MD (89–92% vs. 92–94%). Statistical power was similar. Compared to MD, simulated heterogeneity estimates for SMD and RoM were lower in scenarios with bias because of decreased weighting of extreme values. Otherwise, heterogeneity was similar among methods.

Conclusion

Simulation suggests that RoM exhibits comparable performance characteristics to MD and SMD. Favourable statistical properties and potentially simplified clinical interpretation justify the ratio of means method as an option for pooling continuous outcomes.

Background

Meta-analysis is a method of statistically combining results of similar studies [1]. For binary outcome variables both difference and ratio methods are commonly used. For each study, the risk difference is the difference in proportions of patients experiencing the outcome of interest between the experimental and control groups, the risk ratio is the ratio of these proportions, and the odds ratio is the ratio of the odds. Meta-analytic techniques are used to combine each study's effect measure to generate a pooled effect measure. Standard meta-analytic procedures for each of these effect measures also estimate heterogeneity, which is the variability in treatment effects of individual trials beyond that expected by chance. Each effect measure (risk difference, risk ratio, odds ratio) has advantages and disadvantages in terms of consistency, mathematical properties, and ease of interpretation, implying that none is universally optimal [2].

In contrast, for continuous outcome variables, only difference methods are commonly used for group comparison studies [3]. If the outcome of interest is measured in identical units across trials, then the effect measure for each trial is the difference in means, and the pooled effect measure is the mean difference (MD), which more accurately should be described as the weighted mean of mean differences. If the outcome of interest is measured in different units, then each trial's effect measure is the difference in mean values divided by the pooled standard deviation of the two groups, and the pooled effect measure is the standardized mean difference (SMD), which more accurately should be described as the weighted mean of standardized mean differences. Normalizing the differences using the standard deviation allows pooling of such results, in addition to allowing comparison of effect sizes across unrelated interventions. By convention [4], SMDs of 0.2, 0.5, and 0.8 are considered "small", "medium", and "large" effect sizes, respectively. When trials in meta-analyses are weighted by the inverse of the variance of the effect measure (the weighting scheme generally used for MD and SMD), the pooled SMD has the unfavorable statistical property of negative bias (i.e. towards the null value) [5, 6]. Alternative methods of estimating the variance of individual trial SMDs used in the inverse variance method have been proposed to minimize this bias [5, 6].

In principle, meta-analysts could also use ratio methods to analyze continuous outcomes, by calculating a ratio of mean values instead of a difference. Since the ratio is unitless, this calculation can be carried out regardless of the specific units used in individual trials. Moreover, as with SMD, a ratio can be used to combine related but different outcomes (e.g. quality of life scales). We have recently used this Ratio of Means (RoM) method in meta-analyses [7–9] in which we estimated the variance of this ratio using the delta method [10]. For this method, each individual study RoM is converted to its natural logarithm before being pooled, and the pooled result is then back transformed, similar to odds and risk ratio calculations used for binary outcomes. Table 1 presents the pooled results for the continuous variables from the meta-analysis of low-dose dopamine for renal dysfunction [7], analyzed using mean difference methods and the RoM method, in addition to heterogeneity expressed using the I 2 measure. (I 2 is the percentage of total variation in results across studies due to heterogeneity rather than chance [11, 12].) Table 1 shows similar results among the three methods. The point estimates are similar in direction (i.e. a positive mean difference or standardized mean difference corresponds to a RoM greater than one, while a negative mean difference corresponds to a RoM less than one). The confidence intervals result in similar p-values for statistically significant increases or decreases for each of these parameters. Finally, heterogeneity is similar.

Table 1 Renal Physiological Parameters from Low-Dose Dopamine Meta-Analysis 1 Day After Starting Therapy [7].

Given the similarity of these results, the objective of this current study was to test the hypothesis that MD, SMD, and RoM methods exhibit comparable performance characteristics in terms of bias, coverage and statistical power, using simulated data sets with a range of parameters commonly encountered in meta-analyses.

Methods

The RoM Effect Measure

For mean difference meta-analysis, one calculates a difference in mean values between the experimental and control groups for each study. (A review of the inverse-variance weighted fixed and random effects models and calculation of the point estimates and variances for MD and SMD using standard methods [including a correction factor for small samples for SMD], can be found in the Appendix). Instead of calculating a difference in mean values between the experimental and control groups, one can calculate a ratio of mean values. The following uses the natural logarithm scale to carry out such calculations, similar to statistical procedures for binary effect measures (risk ratio and odds ratio), due to its desirable statistical properties [14].

For a study reporting a continuous outcome, let the mean, standard deviation, and number of patients be denoted by mean exp , sd exp , and n exp , respectively, in the experimental group and mean contr , sd contr , and n contr , respectively, in the control group. Then RoM = mean exp /mean contr , and the variance (Var) of its natural logarithm is estimated as follows:

$$
\begin{aligned}
\mathrm{Var}\!\left[\ln\!\left(\frac{\mathrm{mean}_{\text{exp}}}{\mathrm{mean}_{\text{contr}}}\right)\right]
&= \mathrm{Var}\!\left[\ln(\mathrm{mean}_{\text{exp}}) - \ln(\mathrm{mean}_{\text{contr}})\right] \\
&= \mathrm{Var}\!\left[\ln(\mathrm{mean}_{\text{exp}})\right] + \mathrm{Var}\!\left[\ln(\mathrm{mean}_{\text{contr}})\right] \qquad \text{[since the groups are independent]} \\
&= \left(\frac{1}{\mathrm{mean}_{\text{exp}}}\right)^{2}\mathrm{Var}(\mathrm{mean}_{\text{exp}}) + \left(\frac{1}{\mathrm{mean}_{\text{contr}}}\right)^{2}\mathrm{Var}(\mathrm{mean}_{\text{contr}}) \\
&= \frac{1}{n_{\text{exp}}}\left(\frac{\mathrm{sd}_{\text{exp}}}{\mathrm{mean}_{\text{exp}}}\right)^{2} + \frac{1}{n_{\text{contr}}}\left(\frac{\mathrm{sd}_{\text{contr}}}{\mathrm{mean}_{\text{contr}}}\right)^{2} \qquad \left[\text{since for a random variable } X,\ \mathrm{Var}(\mathrm{mean}_{X}) = \frac{\mathrm{Var}(X)}{n_{X}} = \frac{\mathrm{sd}_{X}^{2}}{n_{X}}\right]
\end{aligned}
$$

The natural logarithm transformed ratios are aggregated across studies using the generalized inverse variance method described in the Appendix. The pooled transformed ratio is then back transformed to obtain a pooled ratio and 95% confidence interval (CI), as follows:

$$
95\%\ \mathrm{CI} = \exp\!\left\{\ln\!\left(\frac{\mathrm{mean}_{\text{exp}}}{\mathrm{mean}_{\text{contr}}}\right) \pm 1.96\sqrt{\mathrm{Var}\!\left[\ln\!\left(\frac{\mathrm{mean}_{\text{exp}}}{\mathrm{mean}_{\text{contr}}}\right)\right]}\right\}
$$

Log transformation of the ratio of mean values, a non-normally distributed function, allows this approximation of the 95% confidence interval of this approximately normally distributed transformed function. This approach is similar to that applied to other ratio methods such as OR and RR, used for binary group comparison studies.

As the ratio of means method is unitless, this method can be used irrespective of the units used in trial outcome measures. Using the delta method limited to first-order terms yields a straightforward formula for estimating the variance of the ratio. Second-order terms, which would be raised to the fourth power, were not included because they contribute little to the variance. For example, even choosing simulation parameters that maximized the contribution of these second-order terms (ratio of the standard deviation to the mean equal to 0.7, and n = 10 patients per trial arm [see below]) would increase the variance estimate by less than 2.5%.
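To make these study-level calculations concrete, the following Python sketch (our own illustration; the original computations were programmed in SAS, and the function names here are hypothetical) computes ln(RoM) and its first-order delta-method variance for a single study, and back-transforms a log-scale estimate and variance into a RoM with a 95% confidence interval:

```python
import math

def log_rom_and_variance(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """ln(RoM) for one study and its first-order delta-method variance."""
    log_rom = math.log(mean_exp / mean_contr)
    var_log_rom = (sd_exp / mean_exp) ** 2 / n_exp + (sd_contr / mean_contr) ** 2 / n_contr
    return log_rom, var_log_rom

def back_transform(pooled_log_rom, pooled_variance):
    """Back-transform a (pooled) log-scale estimate to a RoM with its 95% CI."""
    se = math.sqrt(pooled_variance)
    return (math.exp(pooled_log_rom),
            math.exp(pooled_log_rom - 1.96 * se),
            math.exp(pooled_log_rom + 1.96 * se))

# Example: experimental mean 110 (SD 40, n = 100) vs. control mean 100 (SD 40, n = 100)
lr, v = log_rom_and_variance(110.0, 40.0, 100, 100.0, 40.0, 100)
print(back_transform(lr, v))  # point estimate 1.10 with its 95% CI
```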

Design of the Simulation Study

The parameters and their assigned values used to simulate continuous variable meta-analysis data sets for the individual scenarios are shown in Table 2. The values were chosen to be representative of typical meta-analyses exhibiting a range of standard deviations, number of trials, number of participants per trial, effect sizes, and heterogeneity. The standard deviation of the control group, expressed as a percentage of the mean value in the control group, was varied between 10% and 70% to reflect a narrow to broad distribution around the mean value. The standard deviations of the control and experimental groups were assumed to be equal for all simulations. The number of trials (k) ranged from 5 to 30. The number of patients per trial arm (n) was set to either 10 or 100 in each of the experimental and control groups, and all trials within each simulation were assumed to have the same number of patients. The effect size, expressed in pooled standard deviation units (i.e. SMD), was set at 0.2, 0.5, and 0.8 [4].

Table 2 Parameter Values Used in the Simulated Data Sets

For each simulated scenario, k simulated study means and standard deviations were calculated from a collection of n individual values randomly sampled from a normal distribution. This was done independently for the control and experimental groups. For the control group the normal distribution from which values were randomly sampled had a mean value set to 100, so that the standard deviation (10%, 40%, or 70% of the control mean) was 10, 40, or 70. For the experimental group the normal distribution from which values were randomly sampled had a mean value of [100 + (effect size) × (standard deviation)] and the same standard deviation as the control group. Using the simulated study mean values and standard deviations, meta-analysis was carried out using MD, SMD, and RoM, with inverse variance weighting and a random effects model as described in the Appendix. With the parameters described above, the expected MD = (effect size) × (standard deviation), the expected SMD = effect size, and the expected RoM = 1 + [(effect size) × (standard deviation)/(mean value in control group [= 100])], where effect size varies as 0.2, 0.5 and 0.8, and the standard deviation varies as 10, 40, and 70.
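For illustration, one way to generate the study-level means and standard deviations for a single scenario is sketched below in Python (a minimal re-implementation under the stated assumptions; the original simulations were programmed in SAS, and the names and seed here are ours):

```python
import numpy as np

rng = np.random.default_rng(2008)  # arbitrary seed for reproducibility

def simulate_studies(k, n, sd, effect_size, control_mean=100.0):
    """Simulate k studies with n patients per arm drawn from normal distributions.

    Control arm mean = control_mean; experimental arm mean = control_mean + effect_size * sd.
    Returns per-study (mean_exp, sd_exp, mean_contr, sd_contr).
    """
    studies = []
    for _ in range(k):
        contr = rng.normal(control_mean, sd, size=n)
        exper = rng.normal(control_mean + effect_size * sd, sd, size=n)
        studies.append((exper.mean(), exper.std(ddof=1),
                        contr.mean(), contr.std(ddof=1)))
    return studies

# e.g. k = 10 trials, n = 100 patients per arm, SD 40% of the control mean, medium effect size
studies = simulate_studies(k=10, n=100, sd=40.0, effect_size=0.5)
```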

Heterogeneity for each scenario was introduced by setting τ = 0.5 standard deviation units. This was achieved by introducing an additional study-specific standard deviation equal to 0.5/√2 standard deviation units to both the experimental and the control groups, since the study-specific standard deviation of the difference between experimental and control groups is given by √[(0.5/√2)² + (0.5/√2)²] = 0.5 standard deviation units. In other words, study-specific variance was added to experimental and control group means but the baseline difference and ratio in mean values were held constant. Since a given degree of result heterogeneity may be reflected differently in the difference methods (MD and SMD) compared to the ratio method (RoM), heterogeneity was added at the level of the individual mean values rather than the level of the treatment effects to ensure that the degree of heterogeneity added was comparable between the three methods. Heterogeneity of each meta-analysis scenario is presented using I 2. Since I 2 = τ 2/(τ 2 + s2), where s2 is the variance of the effect measure, as described in the Appendix, the expected value for I 2 for τ = 0.5 can be calculated to be 56% when n = 10 patients per trial arm, corresponding to the introduction of a moderate (i.e. I 2 = 50–75%) degree of heterogeneity, and 93% when n = 100 patients per trial arm, corresponding to a high (i.e. I 2 > 75%) degree of heterogeneity [11, 12].
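In code, this corresponds to giving each arm of each study its own independent random shift with standard deviation 0.5/√2 SD units (a sketch extending the hypothetical simulate_studies function above; independent shifts in the two arms give the treatment effect a between-study SD of τ = 0.5 SD units):

```python
import numpy as np

rng = np.random.default_rng(2008)

def simulate_studies_with_heterogeneity(k, n, sd, effect_size, tau=0.5, control_mean=100.0):
    """As simulate_studies, but each arm's true mean receives an independent study-specific
    shift with SD (tau / sqrt(2)) * sd, so the difference has between-study SD tau * sd."""
    studies = []
    for _ in range(k):
        shift_contr = rng.normal(0.0, (tau / np.sqrt(2)) * sd)
        shift_exp = rng.normal(0.0, (tau / np.sqrt(2)) * sd)
        contr = rng.normal(control_mean + shift_contr, sd, size=n)
        exper = rng.normal(control_mean + effect_size * sd + shift_exp, sd, size=n)
        studies.append((exper.mean(), exper.std(ddof=1),
                        contr.mean(), contr.std(ddof=1)))
    return studies
```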

The baseline scenarios assumed equal numbers of participants in both the experimental and control arms and were constructed by randomly selecting data points from normally distributed data. Separate sensitivity analyses were also carried out to determine 1) the effect of unequal numbers of participants (chosen to have a 2:1 and 1:2 experimental:control arm ratio but keeping the total number of participants constant (i.e. 14:6 instead of 10:10 and 134:66 instead of 100:100)) and 2) the effect of selecting the data points from an underlying skewed distribution. The skewed distribution was empirically constructed by mixing a combination of 3 normal distributions with identical standard deviations (0.24) centered at 0.84, 1.42 and 1.92 and weighted 77%, 17%, and 6% respectively in the overall mixed skewed distribution. This created a graphical distribution appearing markedly skewed on visual inspection with an overall mean of unity and overall standard deviation similar to that of the middle normally distributed data scenario (i.e. 40% of the control mean value), but skewness (third standardized moment about the mean [15]) of 0.88.
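The skewed mixture can be reproduced as follows (our sketch; the seed and function name are illustrative). The sampled values have mean ≈ 1, SD ≈ 0.4, and skewness ≈ 0.9, matching the description above, and would presumably be rescaled to the desired group means when substituted for the normal draws:

```python
import numpy as np

rng = np.random.default_rng(2008)

def sample_skewed(size):
    """Mixture of three normals (common SD 0.24) centred at 0.84, 1.42 and 1.92,
    mixed with weights 0.77, 0.17 and 0.06; overall mean ~1, SD ~0.4, skewness ~0.9."""
    centres = rng.choice([0.84, 1.42, 1.92], size=size, p=[0.77, 0.17, 0.06])
    return rng.normal(centres, 0.24)
```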

For each scenario, data points were generated and analyzed 10,000 times and performance characteristics of each effect measure were assessed. These consisted of bias (expressed as a percentage of the true parameter value, directed away or towards the null value [zero for MD and SMD, and one for RoM]), coverage (of the 95% confidence interval of the simulated result, i.e. the percentage of time that the true parameter value falls within the 95% confidence interval of the simulated result), statistical power (the percentage of time that the 95% confidence interval of the simulated result yields a significant treatment effect, by excluding zero for MD and SMD or one for RoM), and heterogeneity (expressed as I 2). Simulations were programmed and carried out using SAS (version 8.2, Cary, NC).
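The tallying of these performance characteristics over the 10,000 replications can be sketched as follows (our Python reading of the definitions just given; for RoM the pooled estimate and confidence limits would be the back-transformed values, with the null value equal to one):

```python
import numpy as np

def summarize(estimates, ci_lows, ci_highs, true_value, null_value):
    """Bias (% of the true value), coverage (%), and power (%) over repeated replications."""
    estimates = np.asarray(estimates)
    ci_lows = np.asarray(ci_lows)
    ci_highs = np.asarray(ci_highs)
    bias_pct = 100.0 * (estimates.mean() - true_value) / true_value
    coverage = 100.0 * np.mean((ci_lows <= true_value) & (true_value <= ci_highs))
    power = 100.0 * np.mean((ci_lows > null_value) | (ci_highs < null_value))
    return bias_pct, coverage, power
```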

Results

Table 3 presents simulation results for the baseline scenarios (standard deviation 40% of the control mean, equal numbers of patients in the control and experimental groups, underlying normal distribution of the individual data points), for each combination of effect size (0.2, 0.5, and 0.8 standard deviation units), number of patients (10 and 100 control and experimental patients per trial), and number of trials (5, 10, and 30). Results for similar scenarios except with lower (10% of the control mean) or higher (70% of the control mean) standard deviations are shown in Tables 4 and 5. The skewed distribution results were similar (Table 6).

Table 3 Simulation Results (Normal Distribution, Equal Experimental and Control Groups, Standard Deviation 40% of Control Mean Value).
Table 4 Simulation Results (Normal Distribution, Equal Experimental and Control Groups, Standard Deviation 10% of Control Mean Value).
Table 5 Simulation Results (Normal Distribution, Equal Experimental and Control Groups, Standard Deviation 70% of Control Mean Value).
Table 6 Simulation Results (Skewed Distribution, Equal Experimental and Control Groups, Standard Deviation 40% of Control Mean Value).

Bias

The MD method exhibits minimal bias (less than 0.5%) in almost all scenarios. In contrast, there is one principal source of bias for the SMD method and two for the RoM method.

SMD Bias Towards No Effect with Smaller Trials

SMD is biased towards zero or no effect, with the bias more prominent when the number of patients per study is small. Table 3 shows this bias to be (-)4 to 6% in the baseline scenario with 10 patients per trial, regardless of the number of trials. The bias decreases in the 100-patient per trial scenarios. The lower weighting of extreme values (i.e. values far from no effect [zero]) in the SMD method also results in decreased heterogeneity (I 2), in the scenarios with 10 patients per trial where the bias is largest (discussed in the heterogeneity section below). Sampling variance alone results in bias toward zero, but this bias is even larger when heterogeneity is present since this results in a further increase in dispersion (or the effective variance) of the results. These findings are consistent with theoretical considerations (see Appendix).

RoM Bias

In contrast, the RoM bias depends on the relative effects of two competing sources of bias. The first is a negative bias towards unity or no effect due to properties of the variance of ln(RoM) and is most pronounced when the number of patients per trial is small. The second is a bias away from unity or no effect occurring when heterogeneity is present, due to properties of RoM. Although bias from both sources is absent or less than 0.5% in all scenarios with 100 patients per trial and no heterogeneity, one or both sources of bias can be significant in other scenarios. These are described in more detail below.

RoM Bias Towards No Effect with Smaller Trials

To understand the bias towards unity or no effect, one must consider the factors influencing the variance of ln(RoM) described in the Methods. As in the scenarios studied with equal standard deviations in the control and experimental groups, consider RoM > 1, where the experimental mean is greater than the control mean. In this situation the contribution of the experimental group's relative error to the variance of ln(RoM) is smaller than that of the control group's relative error. As RoM increases, either the experimental mean value (mean exp ) increases for a given control mean value (mean contr ) or mean contr falls for a given mean exp . In the former case, the term (1/mean exp )² falls and the variance of ln(RoM) becomes relatively smaller, compared to lower RoM values. In the latter case, the term (1/mean contr )² increases and the variance of ln(RoM) becomes relatively larger, compared to lower RoM values. Because of the different relative error term contributions discussed above, the decrease in the experimental group relative error term determined by (1/mean exp )² is smaller than the increase in the control group relative error term determined by (1/mean contr )². Thus, when these effects are averaged, the overall effect is that higher RoM values have a higher variance and therefore receive relatively lower weighting in the inverse variance weighted meta-analysis, leading to bias towards unity or no effect. This bias is accentuated by i) larger standard deviations, ii) higher heterogeneity (due to larger effective standard deviations), and iii) smaller trials. The bias is best demonstrated in the scenarios without heterogeneity in which the standard deviation is 70% of the mean control value, the number of patients per trial is 10 and the effect size is moderate to large, as shown in Table 5. (This bias is also present in the scenarios with heterogeneity and 10 patients per trial shown in Table 5; however, the overall bias in these scenarios is due to the combined effect of both this bias towards unity discussed in this section and a bias away from unity discussed in the next section.) Under these conditions, the magnitude of this bias ranges up to 2–3%. Due to its inverse dependence on study size it decreases to less than 0.5% in the scenarios without heterogeneity enrolling 100 patients per trial.

The bias is also accentuated in scenarios with either larger RoM or a ratio of the number of patients in the experimental group to the number of patients in the control group (n exp /n contr ) > 1. This occurs because as either mean exp increases relative to mean contr or n exp increases relative to n contr , the term (1/[n exp × mean exp ²]) decreases and changes in the control group relative error predominate to an even greater extent. For example, compare the results from scenarios without heterogeneity shown in Table 3 with a 1:1 experimental to control patient ratio to those shown in Table 7 with a 2:1 experimental to control patient ratio. In contrast, increasing n contr relative to n exp decreases the contribution of the control group's relative error. This decreases the magnitude of the bias towards unity and can even change the direction of the bias from negative to positive (i.e. to a bias away from unity or no effect) if the ratio n contr /n exp is increased to a value greater than mean exp ²/mean contr ² (RoM²). This is illustrated in Table 8 where n contr /n exp = 2 and RoM² < 2.
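A small numerical check (our own illustration, plugging representative values into the variance formula from the Methods) makes the asymmetry concrete: starting from equal means of 100 with sd = 70 and n = 10 per arm, reaching RoM = 1.25 by raising the experimental mean lowers the variance of ln(RoM) by less than reaching the same RoM by lowering the control mean raises it.

```python
def var_log_rom(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """First-order delta-method variance of ln(RoM) for one study."""
    return (sd_exp / mean_exp) ** 2 / n_exp + (sd_contr / mean_contr) ** 2 / n_contr

print(var_log_rom(100, 70, 10, 100, 70, 10))  # RoM = 1.00 baseline:          ~0.098
print(var_log_rom(125, 70, 10, 100, 70, 10))  # RoM = 1.25, higher exp mean:  ~0.080 (falls by ~0.018)
print(var_log_rom(100, 70, 10,  80, 70, 10))  # RoM = 1.25, lower contr mean: ~0.126 (rises by ~0.028)
```

Averaged over the two routes, larger ratios therefore carry larger variances and receive less weight.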

Table 7 Simulation Results (Normal Distribution, 2:1 Experimental to Control Group Sizes, Standard Deviation 40% of Control Mean Value).
Table 8 Simulation Results (Normal Distribution, 1:2 Experimental to Control Group Sizes, Standard Deviation 40% of Control Mean Value).
RoM Bias Away from No Effect Due to Heterogeneity

The second RoM bias is a bias away from unity (or no effect) that occurs only in the scenarios with heterogeneity and is due to the effects of heterogeneity on the RoM. It is most apparent in the scenarios with heterogeneity and higher standard deviations (70% of the mean control value, as shown in Table 5) when the number of patients per trial is 100, since in the 100-patient per trial scenarios the simultaneously present bias towards unity (or no effect) discussed above is less than 0.5%. (In the scenarios with heterogeneity and 10 patients per trial shown in Table 5, the overall bias is due to the combined effect of both the bias towards unity discussed in the previous section and the bias away from unity discussed in this section.) This bias away from unity ranges up to 1–2% in the scenarios with 100 patients per trial and occurs for the following reason. Heterogeneity is introduced in the simulations by shifting individual trial experimental and control mean values upwards or downwards. As in the scenarios presented, for RoM > 1, in trials where both the experimental and control means are shifted upwards, the RoM value is decreased, and in trials where the means are both shifted downwards, the RoM value is increased. For upward and downward shifts of equal magnitude, the increase in the RoM for downward shifts is greater than the decrease in the RoM for upward shifts. This results in a pooled RoM value that is greater in the presence of heterogeneity, producing the bias away from unity. Table 5 demonstrates that this bias is higher for increasing effect sizes (i.e. larger ratios). Comparing the results where the standard deviation is higher (70% of the mean control value, Table 5) to those where the standard deviation is lower (e.g. 40% of the mean control value, Table 3) demonstrates that this bias increases with increasing standard deviations due to an increased range of individual trial RoM estimates.
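The asymmetry of RoM under equal and opposite shifts can be seen with simple arithmetic (our own illustration, not taken from the simulations):

```python
base_exp, base_contr, shift = 120.0, 100.0, 20.0
print(base_exp / base_contr)                      # 1.200 (unshifted RoM)
print((base_exp + shift) / (base_contr + shift))  # 1.167 (both means shifted up by 20)
print((base_exp - shift) / (base_contr - shift))  # 1.250 (both means shifted down by 20)
# The increase (+0.050) exceeds the decrease (-0.033), so averaging over such shifts
# pulls the pooled RoM above 1.200, i.e. away from unity.
```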

Coverage

The proportion of the scenarios for which the 95% confidence interval contains the true effect size is relatively similar among the three methods for most scenarios. The coverage is close to 95%, as expected, for the scenarios with no heterogeneity, but decreases when heterogeneity is introduced. The lowest coverage, 87–88%, is the same for all three methods and occurs when heterogeneity is present with 5 trials and 100 patients per trial arm. This low coverage occurs because, with the degree of heterogeneity in these scenarios (I 2 = 92–93%), the mean values can be widely variable. With only 5 trials, the pooled value can be far from the true value, and because of the large number of patients per trial the confidence intervals for the individual trials are relatively narrow, resulting in missed coverage of the true value. Increasing the number of patients to 1000 patients per trial arm still results in coverage rates of 87–88% (results not shown), because the degree of missed coverage is dominated by the degree of heterogeneity, and the increase in I 2 from 92–93% in the scenarios with 100 patients per trial arm to I 2 = 99% for the scenarios with 1000 patients per trial arm is relatively small.

Statistical Power to Detect a Significant Treatment Effect

As expected, statistical power (the proportion of scenarios yielding a significant treatment effect) increases with increasing effect size, number of patients, and number of trials, and decreases with more heterogeneity. Power also decreases with imbalanced patient allocation between groups (Tables 7 and 8) because confidence intervals are wider compared to balanced allocation scenarios. Statistical power is similar among the three methods in most scenarios. In scenarios where SMD or RoM are biased towards no effect, the power decreases compared to MD, and in the scenarios where RoM is biased away from unity, power increases compared to MD. Overall, the effect of these biases is small so that the proportion of scenarios yielding significant treatment effects is within 5 percentage points for almost all scenarios.

Heterogeneity

For the scenarios with heterogeneity, I 2 is around 55–60% and greater than 90% for scenarios with n = 10 and n = 100 patients per trial, respectively, close to the expected values. In scenarios where SMD and RoM are biased, I 2 is lower compared to MD, which is relatively free of bias (for example, scenarios with 10 patients per trial in Tables 3, 4, 5 [SMD] or Table 5 [RoM]). This occurs because bias decreases the weighting of values greatly deviating from no effect (zero for SMD and one for RoM), decreasing I 2. In the scenarios exhibiting less bias, I 2 among all methods is similar (for example, scenarios with 100 patients per trial in Tables 3, 4, 5).

Discussion

This study examines the use of a new effect measure for meta-analysis of continuous outcomes that we call the ratio of means (RoM). In this method, the ratio of the mean value in the experimental group to that of the control group is calculated. The natural logarithm-transformed delta method approximated to first order terms provides a straightforward equation estimating the variance of the RoM for each study. Using this formulation, we performed simulations to compare the performance of RoM to traditionally used difference of means methods, MD and SMD.

Each method performed well within the simulated parameters with low bias and high coverage, even in scenarios with moderate or high heterogeneity. The methods had similar statistical power to detect significant treatment effects. SMD exhibited some bias towards zero or no effect, especially with smaller studies, as previously described [5, 6], whereas MD was relatively bias-free. RoM gave acceptable results, with bias usually less than 2–3%.

As discussed earlier, SMD and RoM, unlike MD, allow pooling of studies expressed in different units and allow comparisons regarding relative effect sizes across different interventions. However, interpreting the results of a meta-analysis that uses SMD to determine the expected treatment effect in a specific patient population requires knowledge of the pooled standard deviation. This information is frequently unknown to clinicians. In contrast, interpretation of the results of a meta-analysis that uses RoM does not require knowledge of the pooled standard deviation and may permit clinicians to more readily estimate treatment effects for their patients. Moreover, RoM provides a result similar in form to a risk ratio, a binary effect measure preferred by clinicians [16]. Thus, overall RoM may be easier for clinicians to interpret.

One limitation of RoM is that the mean values of the intervention and control groups must have the same sign (both positive or both negative), since the logarithm of a negative ratio is undefined. All simulations assumed positive mean values in both groups. This limitation may be less important for biological variables since these generally have positive values. Another related limitation inherent to ratio methods occurs for a normally distributed control variable with a very broad distribution (i.e. a significant proportion of expected negative values) or for a control variable with only positive values but a distribution heavily skewed towards zero. In both such distributions, a high proportion of the control mean values will be very small. These small values in the denominator of the RoM can result in a high proportion of exceedingly large ratios. This could generate results biased to higher values.

In addition to statistical properties, the choice between a difference or a ratio method for a specific situation should be determined by whether the biological effect of the treatment is additive or relative across different control group values. Unfortunately, this information is frequently not known in advance. For binary outcomes, empirical comparisons between difference methods (risk difference) and ratio methods (risk ratio and odds ratio) using published meta-analyses have shown that the risk difference exhibits less consistency compared to ratio methods, resulting in increased heterogeneity [17, 18]. This suggests that for binary outcomes, relative differences are more preserved than absolute differences as baseline risk varies. It is unclear whether this is also the case for continuous outcomes, but such an empirical comparison between difference and ratio methods can be performed using our description of RoM.

Conclusion

The results of our meta-analytic simulation studies suggest that the RoM method compares favorably to MD and SMD in terms of bias, coverage, and statistical power. Similar to binary outcome analysis for which both ratio and difference methods are available, this straightforward method provides researchers the option of using a ratio method in addition to difference methods for analyzing continuous outcomes.

Appendix

This appendix briefly reviews the inverse-variance weighted fixed and random effects models and the determination of the point estimate and variance for the continuous outcome measures, MD and SMD. The derivation of the point estimate and variance for RoM is described in the main text.

The inverse-variance weighted fixed and random effects models

In fixed effects meta-analysis, the individual studies' treatment effect measures are assumed to be distributed around the same value for each study. An estimate of this effect measure is obtained by taking a weighted average of individual studies' effect measures, weighting each study by the inverse of the variance of the effect measure used:

$$
\Theta_{\mathrm{IV(FE)}} = \frac{\sum_{i=1}^{k} w_i \,\Theta_i}{\sum_{i=1}^{k} w_i}
\qquad \text{with variance} \qquad
\mathrm{Var}\!\left(\Theta_{\mathrm{IV(FE)}}\right) = \frac{1}{\sum_{i=1}^{k} w_i}
$$

where ΘIV(FE) is the inverse-variance weighted fixed effects pooled effect estimate for k total studies, Θi is the effect measure estimate for study i, and weighting wi = 1/variance(Θi).

In random effects meta-analysis, the individual studies' effect measures are assumed to vary around an overall average treatment effect. An estimate of the variance of this distribution of treatment effects, also known as between-study heterogeneity, τ 2, is incorporated into the weights [13] to produce a summary estimate

$$
\Theta_{\mathrm{IV(RE)}} = \frac{\sum_{i=1}^{k} w_i^{*} \,\Theta_i}{\sum_{i=1}^{k} w_i^{*}}
\qquad \text{with variance} \qquad
\mathrm{Var}\!\left(\Theta_{\mathrm{IV(RE)}}\right) = \frac{1}{\sum_{i=1}^{k} w_i^{*}}
$$

where wi* = 1/(1/wi + τ 2). One estimate of τ 2 uses the Q statistic:

$$
Q = \sum_{i=1}^{k} w_i \left(\Theta_i - \Theta_{\mathrm{IV(FE)}}\right)^{2}
$$

which has a χ 2 distribution with k-1 degrees of freedom when τ 2 = 0. An estimate of τ 2 follows:

$$
\tau^{2} = \frac{Q - (k-1)}{\displaystyle \sum_{i=1}^{k} w_i - \frac{\sum_{i=1}^{k} w_i^{2}}{\sum_{i=1}^{k} w_i}} \quad \text{if } Q \ge k-1, \qquad \tau^{2} = 0 \quad \text{if } Q < k-1
$$

When there is no between-trial heterogeneity (τ 2 = 0), the Q-statistic has the expected value of k-1, and the ratio Q/(k-1) [12] has an expected value of unity. Under these circumstances the random effects model is equivalent to the fixed effects model. In situations with heterogeneity (τ 2 > 0), Q/(k-1) > 1, and the proportion of variation in study-level estimates of treatment effect due to between-study heterogeneity can be expressed as a percentage using the I 2 measure. I 2 can be expressed in terms of Q and k-1, where I 2 = [Q/(k-1) - 1]/[Q/(k-1)], which simplifies to (Q-(k-1))/Q [12]. I 2 can also be expressed as τ 2/(τ 2 + s2), where s2 is the variance of the effect measure, and s2 = (k-1) Σi = 1, k wi /[(Σi = 1, k wi)2 - Σi = 1, k wi 2] [12]. When the variance (and thus weighting) of each trial is identical, as is the case with all the simulated scenarios in this study, then the variance of the effect measure, or s2, reduces to the variance of a single trial.

Thus, to carry out a random effects meta-analysis requires calculating the effect measure and its variance for each study to be combined. First the fixed effects pooled effect measure is calculated, which is then used to estimate Q and τ 2, and finally τ 2 is used to estimate the random effects pooled effect measure and its variance.
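The whole procedure fits in a few lines of code. The sketch below (our own generic implementation of the inverse-variance random-effects model with the DerSimonian and Laird estimate of τ 2; function and variable names are illustrative) accepts any effect measure, so the same routine can pool MD, SMD, or ln(RoM) values:

```python
def random_effects_pool(effects, variances):
    """Inverse-variance random-effects pooling with the DerSimonian-Laird tau^2.

    effects, variances: per-study effect estimates (MD, SMD, or ln RoM) and their variances.
    Returns (pooled estimate, variance of pooled estimate, tau^2, I^2 in percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    i2 = 100.0 * max(0.0, q - (k - 1)) / q if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    return pooled, 1.0 / sum(w_star), tau2, i2
```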

The Mean Difference Effect Measure

Using the measured values, the mean difference effect measure for each study (MDi) is estimated as:

MDi = mean exp - mean contr

with estimated variance,

Var (MDi) = Var (mean exp ) + Var (mean contr ) = (sd exp /√n exp )² + (sd contr /√n contr )²

where the subscripts "exp" and "contr" refer to the experimental and control groups, respectively, mean to the mean value, sd to the standard deviation, and n to the number of patients in each group. The individual effect measures and their variances are combined as described previously. All studies need to be reported in identical units for mean difference to be used as the effect measure.
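As a minimal sketch in Python (notation as above):

```python
def md_and_variance(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """Mean difference for one study and its estimated variance."""
    md = mean_exp - mean_contr
    var_md = sd_exp ** 2 / n_exp + sd_contr ** 2 / n_contr
    return md, var_md
```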

The Standardized Mean Difference Effect Measure

When the outcome is not measured in identical units across studies, one can use the standardized mean difference for each study (SMDi), in which the difference in the means is divided by the pooled standard deviation. The estimated value of SMDi is often multiplied by a correction factor to correct for bias away from zero (towards larger effect sizes) when the number of patients in each group is small [5], as follows:

$$
\mathrm{SMD}_i = \frac{\mathrm{mean}_{\text{exp}} - \mathrm{mean}_{\text{contr}}}{\mathrm{sd}_{\text{pool}}} \times \left[1 - \frac{3}{4N - 9}\right]
$$

with estimated variance,

$$
\mathrm{Var}(\mathrm{SMD}_i) = \frac{N}{n_{\text{exp}}\, n_{\text{contr}}} + \frac{\mathrm{SMD}_i^{2}}{2(N - 3.94)}
$$

where

$$
N = n_{\text{exp}} + n_{\text{contr}} \qquad \text{and} \qquad \mathrm{sd}_{\text{pool}} = \sqrt{\frac{(n_{\text{exp}} - 1)\,\mathrm{sd}_{\text{exp}}^{2} + (n_{\text{contr}} - 1)\,\mathrm{sd}_{\text{contr}}^{2}}{N - 2}}
$$

The individual effect measures and their variances are combined as described previously. As SMDi assumes more extreme positive or negative values deviating from zero, Var(SMDi) increases, resulting in a smaller weighting for such trials. This means that in general SMD is biased towards zero or no effect [5, 6]. This bias towards zero is independent of the number of studies in the meta-analysis, and decreases for larger N. Using a random effects model instead of a fixed effects model can reduce this bias because the between-study variance, estimated by τ 2, tends to equalize the study weights. However, this advantage is offset by a lower Q (used to estimate τ 2), which depends on the inverse of the variance for each study and therefore is also biased towards lower values. Alternate weighting methods have been proposed to address this bias [5, 6].
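The corresponding calculation for SMD, written as a hedged Python sketch of the formulas above (small-sample correction included):

```python
import math

def smd_and_variance(mean_exp, sd_exp, n_exp, mean_contr, sd_contr, n_contr):
    """Standardized mean difference (with the small-sample correction factor) and its variance."""
    n_total = n_exp + n_contr
    sd_pool = math.sqrt(((n_exp - 1) * sd_exp ** 2 + (n_contr - 1) * sd_contr ** 2)
                        / (n_total - 2))
    smd = (mean_exp - mean_contr) / sd_pool * (1 - 3.0 / (4 * n_total - 9))
    var_smd = n_total / (n_exp * n_contr) + smd ** 2 / (2 * (n_total - 3.94))
    return smd, var_smd
```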

Abbreviations

CI:

confidence interval

FE:

fixed effects

IV:

inverse variance

i:

counter ranging from 1 to the number of trials in each meta-analysis (k)

I 2 :

I 2 heterogeneity measure

k:

number of trials in each meta-analysis

MD:

mean difference

meancontr :

mean value in the control group

meanexp :

mean value in the experimental group

n:

number of experimental or number of control patients per trial

ncontr :

number of control patients per trial

nexp :

number of experimental patients per trial

N:

total number of patients per trial (N = ncontr + nexp)

Q:

Cochran's Q statistic for heterogeneity

RE:

random effects

s:

standard deviation

s2 :

sampling variance of the effect measure

SMD:

standardized mean difference

SD:

standard deviation

sdcontr :

standard deviation in the control group

sdexp :

standard deviation in the experimental group

sdpool :

pooled standard deviation of the control and experimental groups

τ 2 :

variance due to heterogeneity

Var:

variance

wi :

weighting of study i

wi *:

weighting of study i incorporating the variance due to heterogeneity

Θi :

MD, SMD, or RoM effect measure estimate for study i

References

  1. Egger M, Davey Smith G, Altman DG, editors: Systematic Reviews in Health Care: Meta-Analysis in Context. 2001, London: BMJ Books

  2. Deeks JJ, Altman DG: Effect measures for meta-analysis of trials with binary outcomes. Systematic Reviews in Health Care: Meta-Analysis in Context. Edited by: Egger M, Davey Smith G, Altman DG. 2001, London: BMJ Books, 313-335.

  3. Deeks JJ, Altman DG, Bradburn MJ: Statistical methods for examining heterogeneity and combining results from several studies in meta-analysis. Systematic Reviews in Health Care: Meta-Analysis in Context. Edited by: Egger M, Davey Smith G, Altman DG. 2001, London: BMJ Books, 285-312.

  4. Cohen J: Statistical Power Analysis for the Behavioral Sciences. 1988, Hillside, New Jersey: Lawrence Erlbaum Associates, 24-7. Second

  5. Hedges LV, Olkin I: Statistical Methods for Meta-Analysis. 1985, Orlando, Florida: Academic Press

  6. van den Noortgate W, Onghena P: Estimating the mean effect size in meta-analysis: bias, precision, and mean squared error of different weighting methods. Behavior Research Methods, Instruments, & Computers. 2003, 35: 504-511.

  7. Friedrich JO, Adhikari N, Herridge MS, Beyene J: Meta-analysis: low-dose dopamine increases urine output but does not prevent renal dysfunction or death. Ann Intern Med. 2005, 142 (7): 510-524.

  8. Adhikari NKJ, Burns KEA, Friedrich JO, Granton JT, Cook DJ, Meade MO: Nitric oxide improves oxygenation but not mortality in acute lung injury: meta-analysis. BMJ. 2007, 334: 779-

  9. Sud S, Sud M, Friedrich JO, Adhikari NKJ: Effect of mechanical ventilation in the prone position on clinical outcomes in patients with acute hypoxemic respiratory failure: a systematic review and meta-analysis. CMAJ. 2008, 178: 1153-1161.

  10. Armitage P, Colton T, editors: Encyclopedia of Biostatistics. 1998, Chichester, United Kingdom: John Wiley & Sons, 3731-3737.

  11. Higgins JPT, Thompson SG, Deeks JJ, Altman DG: Measuring inconsistency in meta-analysis. BMJ. 2003, 327: 557-560.

  12. Higgins JPT, Thompson SG: Quantifying heterogeneity in a meta-analysis. Statistics in Medicine. 2002, 21: 1539-1558.

  13. DerSimonian R, Laird N: Meta-analysis in clinical trials. Controlled Clinical Trials. 1986, 7: 177-188.

  14. Fleiss JL: The statistical basis of meta-analysis. Statistical Methods in Medical Research. 1993, 2: 121-145.

  15. Ghahramani S: Fundamentals of Probability. 2000, Upper Saddle River, United States: Prentice-Hall, 416-2

  16. Schwartz LM, Woloshin S, Welch HG: Misunderstandings about the effects of race and sex on physicians' referrals for cardiac catheterization. NEJM. 1999, 341: 279-283.

  17. Engels EA, Schmid CH, Terrin N, Olkin I, Lau J: Heterogeneity and statistical significance in meta-analysis: an empirical study of 125 meta-analyses. Statistics in Medicine. 2000, 19: 1707-1728.

  18. Deeks JJ: Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Statistics in Medicine. 2002, 21: 1575-1600.

Acknowledgements

The study received no specific funding. JF is supported by a Clinician Scientist Award from the Canadian Institutes of Health Research (CIHR), and JB by CIHR Grant No. 84392. CIHR had no involvement in the conduct of this study.

Author information

Corresponding author

Correspondence to Jan O Friedrich.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JOF was involved with the conception and design of the study, acquisition, analysis and interpretation of data and drafted the manuscript. NKJA was involved with the conception and design of the study, interpretation of data and critical revision of the manuscript for important intellectual content. JB was involved in the conception and design of the study, interpretation of data and critical revision of the manuscript for important intellectual content. All authors read and approved the final version of the manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Friedrich, J.O., Adhikari, N.K. & Beyene, J. The ratio of means method as an alternative to mean differences for analyzing continuous outcome variables in meta-analysis: A simulation study. BMC Med Res Methodol 8, 32 (2008). https://doi.org/10.1186/1471-2288-8-32
