Combining estimates of interest in prognostic modelling studies after multiple imputation: current practice and guidelines
© Marshall et al. 2009
Received: 11 March 2009
Accepted: 28 July 2009
Published: 28 July 2009
Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within and between imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures.
Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies.
Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated.
The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
Prognostic models play an important role in the clinical decision making process as they help clinicians to determine the most appropriate management of patients. A good prognostic model can provide an insight into the relationship between the outcome of patients and known patient and disease characteristics [1, 2].
Missing covariate data and censored outcomes are unfortunately common occurrences in prognostic modelling studies, which can complicate the modelling process. Multiple imputation (MI) is one approach to handle the missing covariate data that can properly account for the missing data uncertainty. Missing values are replaced with m (>1) values to give m imputed datasets. Previously, three to five imputations were considered sufficient to give reasonable efficiency provided that the fraction of missing information is not excessive. However, with increased computer capabilities, the limitations on m have diminished and therefore it may be more sensible to use 20 or more imputations. The imputation model, used to generate plausible values for the missing data, should contain all variables to be subsequently analysed, including the outcome and any variables that help to explain the missing data. Outcome tends to be incorporated into the imputation model by including both the event status, indicating whether the event, i.e. death, has occurred or not, and the survival time, with the most appropriate transformation. Due to censoring, this approach is not exact and may introduce some bias, but should still help to preserve important relationships in the data. The m imputed datasets are each analysed using standard statistical methods. The estimates from each imputed dataset must then be combined into one overall estimate together with an associated variance that incorporates both the within and between imputation variability. Rubin developed a set of rules for combining the individual estimates and standard errors (SE) from each of the m imputed datasets into an overall MI estimate and SE to provide valid statistical results, which will be described in the methods section. These rules are based on asymptotic theory.
It is assumed that complete data inferences about the population parameter of interest (Q) are based on the normal approximation (Q̂ − Q) ~ N(0, U), where Q̂ is a complete data estimate of Q and U is the associated variance for Q̂. In a frequentist analysis, Q̂ would be a maximum likelihood estimate of Q, U the inverse of the observed information matrix, and the sampling distribution of Q̂ is considered approximately normal with mean Q and variance U. From a Bayesian perspective, Q̂ and associated variance U should approximate the posterior mean and variance of Q respectively, under a reasonable complete data model and prior. Inference is based on the large sample approximation of the posterior distribution of Q to the normal distribution. With missing data, estimates of the parameters of interest are calculated on each of the m imputed datasets to give Q̂1, ..., Q̂m with associated variances U1, ..., Um. Provided that the imputation procedure is proper, thus reflecting sufficient variability due to the missing data, and samples are large, the overall MI estimate and variance approximate the mean and variance of the posterior distribution of Q [6, 8]. The overall MI estimators and confidence intervals would be improved if combined on a scale where the posterior of Q is better approximated by the normal distribution [6, 10, 11]. When the normality assumption appears inappropriate for estimates of the parameters of interest, suitable transformations that make the normality assumption more applicable should be considered. In circumstances where transformations cannot be identified, alternative robust summary measures, such as medians and ranges, may provide better results than applying Rubin's rules. In the context of prognostic modelling, there are no explicit guidelines for handling estimates of the parameters of interest after MI, such as predicted survival probabilities and assessments of model performance, where it is unclear whether simply applying Rubin's rules is appropriate.
Example techniques and parameters of interest in prognostic modelling studies and the rules currently available for combining estimates after MI are summarised. This paper will then provide guidelines on how estimates of the parameters of interest in prognostic modelling studies can be combined after performing MI. A review of the current practice for combining estimates after MI within published prognostic modelling studies is provided.
Prognostic models, focusing on time to event data that may be censored, are often constructed using survival analysis techniques such as the Cox proportional hazards model or parametric survival models. Ideally, covariates are pre-specified prior to the modelling process; fitting this full model results in more reliable and less biased prognostic models than data-derived models based on statistical significance testing. Such a model can be as large and complex as permitted by the number of observed events [13, 14].
Table 1. Parameters of interest in prognostic modelling studies and possible methods for combining estimates after MI*

Correlation coefficient: Rubin's rules after Fisher's Z transformation
Hazard ratio: Rubin's rules after logarithmic transformation
Prognostic index/linear predictor per patient: Rubin's rules for single estimates

Model fit and performance
Testing significance of individual covariate in model: Rubin's rules using a Wald test for a single estimate (Table 2(A))
Testing significance of all fitted covariates in model: Rubin's rules using a Wald test for multivariate estimates (Table 2(B))
Likelihood ratio χ2 test statistic: rules for combining likelihood ratio statistics if parametric model (Table 2(D)) or χ2 statistics if Cox model (Table 2(C))
Proportion of variance explained (e.g. R2 statistics): robust summary measures, e.g. median and range
Prognostic separation D statistic: Rubin's rules for single estimates
Calibration (shrinkage estimate): robust summary measures, e.g. median and range
Predicted survival probabilities at specific time-points: Rubin's rules after complementary log-log transformation
Percentiles of a survival distribution: Rubin's rules after logarithmic transformation
The likelihood ratio chi-square (χ2) statistic tests the hypothesis of no difference between the null model, given a specified distribution, and the fitted prognostic model with p parameters. Various proportion of explained variance measures have been proposed as measures of goodness of fit and predictive accuracy (e.g. by Schemper and Stare, Schemper and Henderson, O'Quigley, Xu and Stare, and Nagelkerke's R2). However, no approach is completely satisfactory when applied to censored survival data. Discrimination assesses the ability to distinguish between patients with different prognoses, which can be assessed using the concordance index (c-index) or alternatively using the prognostic separation D statistic. Calibration determines the extent of the bias in the predicted probabilities compared to the observed values. A shrinkage estimator provides a measure of the amount needed to recalibrate the model to correctly predict the outcome of future patients using the fitted model. The prognostic model is often summarised by reporting the predicted survival probabilities at specific time-points of interest or quantiles of the survival distribution for each prognostic risk group.
The rules developed by Rubin for combining either a single estimate or multiple estimates from each imputed dataset into an overall MI estimate and associated SE will be summarised. Performing hypothesis testing for a single estimate or based on multiple estimates will be described, together with the extensions for combining χ2 statistics and likelihood ratio χ2 statistics.
For a single population parameter of interest, Q, e.g. a regression coefficient, the MI overall point estimate is the average of the m estimates of Q from the imputed datasets, Q̄ = (1/m) Σ Q̂i. The associated total variance for this overall MI estimate is T = Ū + (1 + 1/m)B, where Ū = (1/m) Σ Ui is the estimated within imputation variance and B = (1/(m − 1)) Σ (Q̂i − Q̄)² is the between imputation variance. Inflating the between imputation variance by a factor 1/m reflects the extra variability as a consequence of imputing the missing data using a finite number of imputations instead of an infinite number of imputations. When B dominates Ū, greater efficiency, and hence more accurate estimates, can be obtained by increasing m. Conversely, when Ū dominates B, little is gained from increasing m.
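These rules for a single parameter can be sketched in a few lines of Python. The function below is an illustrative implementation (the names are ours, not the authors'), assuming the m point estimates and their variances are already available:

```python
from statistics import mean, variance

def rubin_pool(estimates, variances):
    """Combine m single-parameter estimates via Rubin's rules.

    Returns the overall point estimate Q_bar and the total variance
    T = U_bar + (1 + 1/m) * B, where U_bar is the within-imputation
    variance and B the between-imputation variance.
    """
    m = len(estimates)
    q_bar = mean(estimates)            # overall MI point estimate
    u_bar = mean(variances)            # within-imputation variance
    b = variance(estimates)            # between-imputation variance (m - 1 divisor)
    total = u_bar + (1 + 1 / m) * b
    return q_bar, total

# Toy example: m = 3 imputed-data estimates of a regression coefficient
q, t = rubin_pool([0.50, 0.60, 0.55], [0.04, 0.05, 0.045])
```

Note that `statistics.variance` already uses the m − 1 divisor required for B.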
These procedures for combining a single quantity of interest can be extended in matrix form to combine k estimates of parameters, e.g. k regression coefficients, where Q̂ is a k × 1 vector of these estimates and U is the associated k × k covariance matrix.
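In the multivariate case the same pooling applies element-wise to the vector of estimates and in matrix form to the covariance. A minimal sketch using numpy (illustrative names, not from the paper):

```python
import numpy as np

def rubin_pool_multivariate(q_hats, u_mats):
    """Pool m k-dimensional estimates: q_hats is (m, k), u_mats is (m, k, k).

    Returns the pooled k-vector Q_bar and the total covariance
    T = U_bar + (1 + 1/m) * B in matrix form.
    """
    q_hats = np.asarray(q_hats, dtype=float)
    u_mats = np.asarray(u_mats, dtype=float)
    m = q_hats.shape[0]
    q_bar = q_hats.mean(axis=0)            # pooled estimate vector
    u_bar = u_mats.mean(axis=0)            # within-imputation covariance
    dev = q_hats - q_bar
    b = dev.T @ dev / (m - 1)              # between-imputation covariance
    t = u_bar + (1 + 1 / m) * b            # total covariance matrix
    return q_bar, t

# Toy example: m = 2 imputations of k = 2 regression coefficients
q_bar, t_cov = rubin_pool_multivariate([[1.0, 2.0], [3.0, 4.0]],
                                       [np.eye(2), np.eye(2)])
```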
Table 2. Summary of significance tests for combining different estimates from m imputed datasets after MI

A) Single estimate, H0: Q = Q0
Test statistic: (Q0 − Q̄)²/T, referred to F(1, v)
Relative increase in variance: r = (1 + 1/m)B/Ū
Degrees of freedom (df): v = (m − 1)(1 + r⁻¹)²

B) Multivariate estimates, H0: Q = Q0, k = number of parameters
Test statistic: (Q0 − Q̄)′Ū⁻¹(Q0 − Q̄)/[k(1 + r)], referred to F(k, v)
Relative increase in variance: r = (1 + 1/m) tr(BŪ⁻¹)/k
Degrees of freedom (df): v = 4 + (a − 4)[1 + (1 − 2/a)r⁻¹]² if a > 4, otherwise a(1 + 1/k)(1 + r⁻¹)²/2, where a = k(m − 1)

C) χ2 statistics w1, ..., wm, k = df associated with the χ2 tests
Test statistic: [w̄/k − r(m + 1)/(m − 1)]/(1 + r), referred to F(k, v)
Relative increase in variance: r = (1 + 1/m) × sample variance of √w1, ..., √wm
Degrees of freedom (df): v = k^(−3/m)(m − 1)(1 + r⁻¹)²

D) Likelihood ratio χ2 statistics wL1, ..., wLm, k = number of parameters in fitted model
Test statistic: w̃L/[k(1 + r)], referred to F(k, v), where w̃L is the likelihood ratio statistic evaluated at the averaged parameter estimates and w̄L the average of the m statistics
Relative increase in variance: r = (m + 1)(w̄L − w̃L)/a
Degrees of freedom (df): as in (B), where a = k(m − 1)
In the context of prognostic modelling, it is useful to test the global null hypothesis that all k regression estimates are a specific value, zero say. A significance level for testing the hypothesis that the combined MI estimate, Q̄, equals a particular vector of values Q0 is provided in Table 2(B). This ideal approach using a Wald test requires a vector of point estimates and a covariance matrix to be stored from each imputed dataset, which can be cumbersome for large k, as can result from fitting categorical variables in the regression model.
An alternative to testing the multivariate point estimates is the method for combining χ2 statistics, associated with testing a null hypothesis of H0: Q = Q0, e.g. a regression coefficient is zero or all regression coefficients are zero (Table 2(C)). This approach is useful when there are a large number of parameters to estimate, the full covariance matrix is unobtainable from standard software or too large to store, or only the χ2 statistics are available. This approach is deficient compared to the method for combining multivariate estimates and should be used only as a guide, especially when there are a large number of parameters compared to only a small number of imputations. The true p-value lies between half and twice the calculated value. A considerable amount of information is wasted by using only the χ2 statistics, with a consequent loss of power. This approach may be improved by multiplying the relative increase in variance estimate (r in Table 2(C)) by a factor representing the number of model parameters. Justification for this adjustment lies in the fact that each χ2 statistic is based on k degrees of freedom, but unlike the other approaches, this is not accounted for in the relative increase in variance calculations originally proposed by Li et al.
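As an illustrative sketch of this combining rule for χ2 statistics (function and variable names are ours, not from the paper), the combined statistic and its reference degrees of freedom might be computed as follows:

```python
from math import sqrt
from statistics import mean, variance

def pool_chi2(w, k):
    """Combine m chi-squared statistics w (each on k df) from m imputed
    datasets into a single statistic D, referred to an F(k, v) distribution."""
    m = len(w)
    # Relative increase in variance, from the spread of the sqrt(w) values.
    r = (1 + 1 / m) * variance([sqrt(wi) for wi in w])
    # Combined test statistic.
    d = (mean(w) / k - (m + 1) / (m - 1) * r) / (1 + r)
    # Denominator degrees of freedom for the reference F distribution.
    v = k ** (-3 / m) * (m - 1) * (1 + 1 / r) ** 2
    return d, v

# Toy example: m = 3 chi-squared statistics, each on k = 2 df
d, v = pool_chi2([4.0, 9.0, 16.0], k=2)
```

The resulting d would then be compared with an F(k, v) distribution to obtain the approximate p-value.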
The method for combining the likelihood ratio χ2 statistics from each imputed dataset is used to obtain an overall significance level for testing the hypothesis of no difference between two nested prognostic models (Table 2(D)). This is an intermediate approach between combining multivariate estimates and combining χ2 statistics. The obtained significance level should be asymptotically equivalent to that based on the combined multivariate estimates.
To calculate the likelihood ratio statistics at the average of the parameter estimates over the m imputations, from fitting the regression model either under the null hypothesis or under the alternative hypothesis with the covariates included, the likelihood function needs to be fully specified. This may be difficult for the Cox proportional hazards model, which uses the partial likelihood function.
The procedures for combining multiply imputed estimates that are of particular interest in prognostic modelling are discussed in the following subsections. It is assumed that the full prognostic model is fitted and its performance evaluated within each imputed dataset and the required estimates (as given in Table 1) obtained. The estimates of the parameters of interest (Table 1) are separated into those where the Rubin's rules for MI inference can be applied, those where suitable transformations can be found to improve normality and those where suitable transformations cannot be identified and therefore alternative summary measures are proposed.
The sample mean of a covariate, standard deviation, regression coefficients, individual prognostic index and the prognostic separation estimates can all be combined using Rubin's rules for single estimates. It is important to emphasise that the variance associated with a sample mean of a covariate is the sample variance divided by the number of observations, and hence not just its sample variance. The standard deviation of the data can be treated like any other parameter to give a more appropriate and efficient combined MI estimate than reporting the standard deviation from only one imputed dataset. The regression coefficients, and hence the prognostic separation D statistic, from fitting either a Cox proportional hazards or a Weibull model should be asymptotically normal, at least with large samples, thus making Rubin's rules appropriate.
The likelihood ratio statistic for testing the hypothesis of no difference between two nested prognostic models from each imputed dataset can be combined using the inferences for likelihood ratio statistics (Table 2(D)), provided that the log-likelihood function can be fully specified, e.g. for fully parametric models such as the Weibull model. The Cox proportional hazards model uses the partial likelihood function, as the baseline hazard is unspecified, and its likelihood is therefore more difficult to specify fully. Hence, it may be easier to use the less precise approach for combining χ2 statistics (Table 2(C)). However, both these approaches are less accurate than testing the significance of the model using a Wald test based on the combined multivariate regression parameter estimates (Table 2(B)). The latter may be considered the preferred approach, when possible.
The correlation coefficient, hazard ratios, predicted survival probabilities and percentiles of the survival distribution can all be combined using Rubin's rules after suitable transformations to improve normality. The combined estimates should then be back-transformed onto their original scale before being reported and interpreted.
Fisher's Z transformation provides a suitable transformation for the sample correlation coefficient, giving an approximate normal distribution. For large samples, the log hazard ratio from a survival model, which is simply the regression coefficient, is approximately normally distributed, and the log scale should therefore be used for combining. A more extreme pooled estimate than appropriate would be obtained if the hazard ratio were not transformed, as the estimates would be averaged over the posterior medians and not the posterior means as required.
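Pooling on the Fisher Z scale amounts to transforming, averaging and back-transforming. A minimal sketch (the function name and data are illustrative):

```python
from math import atanh, tanh
from statistics import mean

def pool_correlation(rs):
    """Pool m imputed-data correlation coefficients on the Fisher Z scale."""
    z = [atanh(r) for r in rs]   # Fisher's Z transformation, z = atanh(r)
    return tanh(mean(z))         # back-transform the pooled z to a correlation

# Toy example: m = 3 correlation estimates from the imputed datasets
r_pooled = pool_correlation([0.40, 0.50, 0.45])
```

The same transform-average-back-transform pattern applies to the log hazard ratio, using the natural logarithm and exponential in place of atanh and tanh.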
The complementary log-log transformation for the predicted survival probability at particular time-points gives a possible range of (-∞, +∞) instead of the survivorship estimate being bounded by zero and one, and is often used to determine reasonable confidence intervals. A suitable transformation for the survival time associated with the pth percentile of a survival distribution is the logarithmic transformation, as this gives a possible range of (-∞, +∞) instead of being bounded by zero and infinity, and is generally used to obtain a confidence interval. Estimates for the predicted survival probabilities at specific time-points, e.g. at 2 years, or survival times at particular percentiles can be obtained within each imputed dataset for the average covariate values, provided that researchers acknowledge that this does not represent the diversity of the patients in the sample. Alternatively, predicted survival probabilities can be obtained for specific covariate patterns or for an individual patient.
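The same pattern applies on the complementary log-log scale. An illustrative sketch (names are ours), assuming the m predicted survival probabilities at a given time-point are to hand:

```python
from math import exp, log
from statistics import mean

def pool_survival_probs(probs):
    """Pool m predicted survival probabilities S on the complementary
    log-log scale, g = log(-log(S)), then back-transform via
    S = exp(-exp(g))."""
    g = [log(-log(p)) for p in probs]   # complementary log-log transform
    return exp(-exp(mean(g)))           # back-transform the pooled value

# Toy example: m = 3 predicted 2-year survival probabilities
s_pooled = pool_survival_probs([0.70, 0.75, 0.80])
```

Survival times at a given percentile would be handled analogously on the log scale.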
When considering model performance measures, the imputation model should be more general than the prognostic models being investigated, as the performance measures are more sensitive to the choice of imputation model and therefore may produce more bias than seen in the regression parameter estimates from the prognostic model. If one is willing to accept the large sample approximation to normality for the proportion of variance explained measures, e.g. Nagelkerke's R2 statistic, the c-index and the shrinkage estimator, then these estimates can simply be treated as another estimate that can be averaged using Rubin's rules for single estimates. However, estimates for these measures are generally bounded by zero and one, not symmetrically distributed and do not necessarily follow a specific distribution, so are unlikely to follow a normal distribution. Therefore the standard MI techniques for combining into one estimate, even after applying a transformation, may not provide the best estimate. In addition, an overall MI variance incorporating sufficient uncertainty cannot be determined, as variance estimates associated with these performance measures are generally unavailable. The lack of a within imputation variance estimate also restricts the use of sophisticated robust location and scale estimators, such as the M-estimators. The median, inter-quartile range or full range of the m estimates may provide a more appropriate reflection of the distribution of the values over the imputed datasets, as reported by Clark and Altman and by Sinharay, Stern and Russell for the R2 statistics and by Clark and Altman for the c-index. Using the median absolute deviation could provide an alternative measure of the dispersion of values around the median.
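Where Rubin's rules are unsuitable, these robust summaries are straightforward to compute. A minimal sketch using the median and median absolute deviation (the names and data below are illustrative):

```python
from statistics import median

def robust_summary(values):
    """Median and median absolute deviation (MAD) of m performance
    estimates, for use when variance estimates are unavailable and
    Rubin's rules are unsuitable."""
    med = median(values)
    mad = median(abs(v - med) for v in values)   # dispersion around the median
    return med, mad

# Toy example: m = 5 c-index values from the imputed datasets
c_indexes = [0.68, 0.70, 0.71, 0.69, 0.74]
med, mad = robust_summary(c_indexes)
```

Reporting the full range of the m values alongside the median gives a further simple reflection of the between-imputation spread.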
A literature search was performed within the PubMed (National Library of Medicine) and Web of Science® bibliographic databases for all articles published before June 2008 that used multiple imputation techniques and a survival analysis to obtain a prognostic model. Methodological papers were excluded. The aim of the review was to identify how estimates of the parameters of interest in prognostic modelling studies have been combined after performing MI in the published literature.
Sixteen non-methodological articles were identified. The MI techniques reported were varied, with no overall consensus on technique or statistical software. The number of imputations ranged from five to 10,000, with the majority of studies using five or ten imputations. The amount of missingness reported also varied, from studies with relatively little missing data to those with large amounts of missingness.
In seven articles, no mention of how the estimates of interest were combined after MI was given. Clark et al. reported pooled summary estimates from the imputed datasets and Rouxel et al. stated that "the multivariable analysis took into account the potential multiple imputation". Although neither article provided any details or references, Rubin's rules were presumably used. The remaining seven studies reported that Rubin's rules were used to combine the estimates of interest after fitting a variety of regression models, such as a Cox regression model [29, 32–34], multiple Poisson regression models or a Weibull model [36, 37]. The estimates reported in the published literature were predominantly the regression coefficients and associated SEs, hazard ratios and 95% confidence limits, and significance of the individual covariates in the model. The estimates also included combining percentiles from the Weibull survival distribution and the median survival time and associated 95% confidence intervals from the Cox model using Rubin's rules. No details of any transformations applied to these estimates prior to using Rubin's rules were reported. Gill et al. and Clark et al. reported model performance measures after MI, but did not explicitly state how this was achieved after MI.
With the advances in computer technologies and software, MI is becoming more accessible. MI has been performed prior to the analysis of several prognostic modelling studies, e.g. [30, 31]. Few published studies explicitly stated how the reported results were obtained after MI. None of the articles identified within the current review reported that transformations were applied prior to applying Rubin's rules for any of the estimates.
This paper has suggested guidelines for combining multiply imputed estimates that are of interest when a survival model is fitted to a dataset and suitable performance measures and predicted survival probabilities are required for summarising the model (Table 1). These proposed guidelines are based on our own experiences and current evidence, although evidence for the appropriateness of some parameters of interest, such as the mean and regression coefficients, is more widely available than for others, such as the model performance measures. Following these guidelines can provide a more uniform approach for handling these estimates in future studies and hence comparability of reported estimates between similar studies. The standard Rubin's rules should be applied to the estimates where the asymptotic normality assumption holds or where suitable transformations can be found. When the asymptotic normality assumption does not appear to hold or is not easily achievable, the average estimate and associated variance may be unsuitable, especially with highly skewed distributions, as this could give undue weight to the tails of the distribution. Medians and ranges may be more suitable, e.g. for some model performance measures, where variance estimates are generally unavailable. More sophisticated robust estimators, such as the robust M-estimators, may be useful when a within imputation variance can be easily calculated. However, these robust techniques are not likelihood based, as is the case with Rubin's rules. Harel showed that the proportion of variation explained measure, R2, from a linear regression model fitted to normally distributed data can be considered as a squared correlation coefficient and can be transformed by taking the square root and then applying Fisher's Z transformation, as for the correlation coefficient.
However, whether this approach would apply to R2 measures from a survival regression model, which may be affected by censored observations, is debatable, and therefore robust methods are recommended here.
In this paper, model performance measures were calculated within each imputed dataset using the constructed prognostic model for that dataset and then combined to give an overall multiply imputed measure. The performance of a prognostic model derived using a development sample will also need to be externally validated using an independent dataset, but missing data within the development and/or validation samples complicate these analyses. At present there are no clear guidelines on the appropriate handling of missing data and the use of MI when externally validating a prognostic model, and therefore further research is required through the use of simulation studies. The extension to constructing prognostic models using variable selection procedures with multiply imputed datasets adds complexity, which also requires further investigation. One possible solution is to perform backwards elimination by fitting the full model in each imputed dataset, using the combined estimates to determine the least significant variable to exclude, and then refitting this reduced model. This process is continued until all non-prognostic variables have been eliminated. Alternatively, bootstrapping could be incorporated, or a model averaging approach, as considered within the Bayesian framework, may also be possible.
The review of current practice highlighted deficiencies in the reporting of how the multiply imputed estimates given in the published articles were obtained. Thus, it is recommended that future studies include a more thorough description of the methods used to combine all estimates after MI.
The ability to use MI methods that are readily available in standard statistical software and apply simple rules to combine the estimates of interest, rather than requiring problem-specific programmes, makes MI more accessible to practising statisticians. We hope that this may lead to more widespread and appropriate use of MI in future prognostic modelling studies and improved comparability of the obtained estimates between studies.
m: number of imputations
Andrea Marshall (nee Burton) was supported by a Cancer Research UK project grant. DGA is supported by Cancer Research UK.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.