Assessing discriminative ability of risk models in clustered data
© van Klaveren et al.; licensee BioMed Central Ltd. 2014
Received: 7 October 2013
Accepted: 8 January 2014
Published: 15 January 2014
The discriminative ability of a risk model is often measured by Harrell’s concordance index (c-index). The c-index estimates the probability that, for two randomly chosen subjects, the model predicts a higher risk for the subject with the poorer outcome (concordance probability). When data are clustered, as in multicenter data, two types of concordance can be distinguished: concordance between subjects from the same cluster (within-cluster concordance probability) and concordance between subjects from different clusters (between-cluster concordance probability). We argue that the within-cluster concordance probability is most relevant when a risk model supports decisions within clusters (e.g. who should be treated in a particular center). We aimed to explore different approaches to estimating the within-cluster concordance probability in clustered data.
We used data of the CRASH trial (2,081 patients clustered in 35 centers) to develop a risk model for mortality after traumatic brain injury. To assess the discriminative ability of the risk model within centers we first calculated cluster-specific c-indexes. We then pooled the cluster-specific c-indexes into a summary estimate with different meta-analytical techniques. We considered fixed effect meta-analysis with different weights (equal; inverse variance; number of subjects, events or pairs) and random effects meta-analysis. We reflected on pooling the estimates on the log-odds scale rather than the probability scale.
The cluster-specific c-index varied substantially across centers (IQR = 0.70–0.81; I² = 0.76 with 95% confidence interval 0.66 to 0.82). Summary estimates resulting from fixed effect meta-analysis ranged from 0.75 (equal weights) to 0.84 (inverse variance weights). With random effects meta-analysis – accounting for the observed heterogeneity in c-indexes across clusters – we estimated a mean of 0.77, a between-cluster variance of 0.0072 and a 95% prediction interval of 0.60 to 0.95. The normality assumptions for derivation of a prediction interval were better met on the probability than on the log-odds scale.
When assessing the discriminative ability of risk models used to support decisions at cluster level we recommend meta-analysis of cluster-specific c-indexes. Particularly, random effects meta-analysis should be considered.
Keywords: Clustered data; Concordance; Discrimination; Meta-analysis; Prediction; Risk model
Assessing the performance of a risk model is of great practical importance. An essential aspect of model performance is separating subjects with good outcome from subjects with poor outcome (discrimination) [1]. The concordance probability is a commonly used measure of discrimination, reflecting the association between model predictions and true outcomes [2, 3]. For binary outcome data it is the probability that a randomly chosen subject from the event group has a higher predicted probability of having an event than a randomly chosen subject from the non-event group. For time-to-event outcome data it is the probability that, for a randomly chosen pair of subjects, the subject who experiences the event of interest earlier in time has the lower predicted time to the occurrence of the event. For both kinds of outcome data the concordance probability is often estimated with Harrell’s concordance (c-)index [2].
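As an illustration (not from the paper), the binary-outcome definition can be sketched in a few lines of Python; the predicted risks and outcomes below are hypothetical:

```python
from itertools import product

def concordance_binary(pred, outcome):
    """Concordance probability for binary outcomes: the fraction of
    (event, non-event) pairs in which the event subject received the
    higher predicted risk; tied predictions count as half concordant."""
    events = [p for p, y in zip(pred, outcome) if y == 1]
    non_events = [p for p, y in zip(pred, outcome) if y == 0]
    concordant = 0.0
    for pe, pn in product(events, non_events):
        if pe > pn:
            concordant += 1.0
        elif pe == pn:
            concordant += 0.5
    return concordant / (len(events) * len(non_events))

# hypothetical predicted risks and observed binary outcomes
pred = [0.9, 0.7, 0.6, 0.3, 0.2]
outcome = [1, 1, 0, 0, 0]
print(concordance_binary(pred, outcome))  # 1.0: every event ranks above every non-event
```

A value of 0.5 would mean the model ranks event and non-event subjects no better than chance.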
In risk modelling, clustered data are frequently used. A typical example is multicenter patient data, i.e. data of patients who are treated in different centers with similar inclusion criteria across the centers. Patients treated in the same center are nevertheless more alike than patients from different centers. A comparable type of clustering may occur in patients treated in different countries or in patients treated by different caregivers in the same center. Similarly, in public health research the study population is often clustered in geographical regions like countries, municipalities or neighbourhoods. It has been suggested that clustering should be taken into account in the development of risk models to obtain unbiased estimates of predictor effects [4]. This can be done by using a multilevel logistic regression model for binary outcomes or a frailty model for time-to-event outcomes [5, 6].
It is natural to also take clustering into account when measuring the performance of a risk model. For multilevel models, it has been proposed to consider the concordance probability of subjects within the same cluster (within-cluster concordance probability) separately from the concordance probability of subjects in different clusters (between-cluster concordance probability) [7, 8]. We propose using the within-cluster concordance probability when risk models are used to support decisions within clusters, e.g. in clinical practice where decisions on interventions are commonly taken within centers. A valuable risk model should then be able to separate subjects within the same cluster into those with good outcome and poor outcome. We consider the within-cluster concordance probability more relevant in this context than the between-cluster or overall concordance probability.
Here, we aimed to estimate the within-cluster concordance probability from clustered data. We explored different meta-analytic methods for pooling cluster-specific concordance probability estimates with an illustration in predicting mortality among patients suffering from traumatic brain injury.
Mortality in traumatic brain injury patients
We present a case study of predicting mortality after Traumatic Brain Injury (TBI). Risk models using baseline characteristics provide adequate discrimination between patients with good and poor 6-month outcomes after TBI [9, 10]. We used patients enrolled in the Medical Research Council Corticosteroid Randomisation after Significant Head Injury (CRASH) trial [11] (registration ISRCTN74459797, http://www.controlled-trials.com/), who were recruited between 1999 and 2004. This was a large international double-blind, randomized placebo-controlled trial of the effect of early administration of a 48-h infusion of methylprednisolone on outcome after head injury. The trial included 10,008 adults clustered in 239 centers with Glasgow Coma Scale (GCS) [12] Total Score ≤ 14, who were enrolled within 8 hours after injury. By design the patient inclusion criteria were identical across all 239 centers.
We considered patients with moderate or severe brain injury (GCS Total Score ≤ 12) and an observed 6-month Glasgow Outcome Scale (GOS) [13]. Patients treated in one of 35 European centers with more than 5 patients experiencing the event (n = 2,081) were used to develop a prediction model and to assess its discriminative ability. Patients treated in one of 21 Asian centers with more than 5 patients experiencing the event (n = 1,421) were used to assess the discriminative ability at external validation.
We used a Cox proportional hazards model with age, GCS Motor Score and pupil reactivity as covariates, similar to previously developed risk models [9, 10]. We modelled center with a Gamma frailty (random effect) to account for heterogeneity in mortality among centers. We estimated parameters on the European selection of patients with the R package survival [14, 15]. As center effect estimates are unavailable when applying a risk model in new centers, we calculated individual risk predictions by setting the Gamma frailty to its mean of 1 for each patient.
Cluster-specific concordance probabilities
We estimated the concordance probability within each cluster with Harrell’s c-index [2], i.e. the proportion of all usable pairs of subjects in which the predictions are concordant with the outcomes. A pair of subjects is usable if we can determine the ordering of their outcomes. For binary outcomes, pairs of subjects are usable if one of the subjects had an event and the other did not. For time-to-event outcomes, pairs of subjects are usable if their failure times are not equal and the smaller failure time is uncensored (i.e. corresponds to an observed event). For a usable subject pair the predictions are concordant with the outcomes if the ordering of the predictions equals the ordering of the outcomes. Values of the c-index close to 0.5 indicate that the model does not perform much better than a coin flip in predicting which subject of a randomly chosen pair will have the better outcome. Values of the c-index near 1 indicate that the model is almost perfectly able to predict which subject of a randomly chosen pair will have the favourable outcome. We estimated the variances of the cluster-specific c-indexes with a method proposed by Quade [16]. Formulas are provided in Appendix 1.
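The pair-counting logic above can be sketched in Python. The follow-up times, event indicators and predicted risks below are hypothetical, and the Quade variance estimator of Appendix 1 is omitted:

```python
def harrell_c(time, event, risk):
    """Harrell's c-index for right-censored time-to-event data.
    A pair (i, j) is usable if the times differ and the smaller time
    is an observed event; it is concordant if the subject with the
    smaller time has the higher predicted risk (ties count 1/2)."""
    usable = concordant = 0.0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue                 # tied times: not usable
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if not event[a]:
                continue                 # earliest time censored: ordering unknown
            usable += 1
            if risk[a] > risk[b]:
                concordant += 1
            elif risk[a] == risk[b]:
                concordant += 0.5
    return concordant / usable

# hypothetical cluster: follow-up times (months), event indicators, predicted risks
time  = [2, 5, 8, 12]
event = [1, 1, 0, 1]
risk  = [0.9, 0.5, 0.6, 0.2]
print(harrell_c(time, event, risk))  # 4 of 5 usable pairs concordant: 0.8
```

Applied per cluster, this yields the cluster-specific c-indexes that are pooled in the next section.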
Pooling cluster-specific concordance probability estimates
The within-cluster concordance probability C w can be estimated by pooling the cluster-specific concordance probability estimates into a weighted average. Previously, the cluster-specific concordance probability estimates were pooled with the number of usable subject pairs as weights [7, 8]. Here, we define eight different ways for pooling of cluster-specific estimates – both on the probability scale and on the log-odds scale – based on fixed effect meta-analysis and random effects meta-analysis.
We consider a dataset with subjects in K clusters. Let m_k be the number of subjects and e_k the number of events in cluster k. We denote the number of usable subject pairs – pairs of subjects for whom we can determine the ordering of their outcomes – in cluster k by n_k. The cluster-specific concordance probability estimate for cluster k is denoted by ĉ_k, with sampling variance estimate v̂_k.
Fixed effect meta-analysis
The simplest approach would be to apply equal weights, w_k = 1/K for each cluster (method 1). This estimator is quite naive when the cluster size varies, because small clusters are given the same weight as large clusters and information about the precision of the cluster-specific estimates is ignored. Heuristic choices of weights that take the cluster size into account are the number of subjects, w_k = m_k (method 2), or the number of events, w_k = e_k (method 3). Analogous to the definition of the c-index, a fourth option is the number of usable subject pairs, w_k = n_k (method 4). The pooled estimate is then equal to the proportion of all usable within-cluster subject pairs in which the predictions and outcomes are concordant. Another choice of meta-analysis weights is the inverse variance, w_k = 1/v̂_k (method 5). These weights express the precision of the cluster-specific estimates and are commonly used in meta-analysis of study-specific treatment effects.
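A minimal sketch of the five fixed effect weighting schemes; all cluster summaries below are hypothetical, not values from the CRASH data:

```python
def pooled_c(c, w):
    """Weighted average of cluster-specific c-indexes."""
    return sum(wi * ci for wi, ci in zip(w, c)) / sum(w)

# hypothetical cluster-specific c-indexes with cluster sizes, event
# counts, usable-pair counts and sampling variance estimates
c      = [0.70, 0.78, 0.84]
m      = [40, 120, 60]           # subjects in cluster      (method 2)
e      = [10, 30, 12]            # events in cluster        (method 3)
npairs = [300, 2700, 576]        # usable subject pairs     (method 4)
var    = [0.010, 0.002, 0.006]   # sampling variances       (method 5)

print(pooled_c(c, [1, 1, 1]))             # method 1: equal weights
print(pooled_c(c, m))                     # method 2: number of subjects
print(pooled_c(c, e))                     # method 3: number of events
print(pooled_c(c, npairs))                # method 4: number of usable pairs
print(pooled_c(c, [1 / v for v in var]))  # method 5: inverse variance
```

With method 4 the pooled value equals the overall proportion of concordant usable within-cluster pairs, as noted above.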
Random effects meta-analysis
In random effects meta-analysis the true (logit) concordance probabilities are assumed to vary across clusters according to a normal distribution; the cluster-specific estimates are then weighted by the inverse of the sum of the cluster-specific sampling variance estimate and the between-cluster variance estimate (method 6) [18, 19]. To assess whether the normality assumption is valid we used a normal probability plot of the standardized residuals z_k and applied the Shapiro-Wilk test to them. In a normal probability plot the z_k are plotted against the quantiles of a theoretical normal distribution, such that the points should form an approximately straight line; departures from this line indicate departures from normality. The Shapiro-Wilk test returns the probability (p-value) of obtaining a test statistic at least as extreme as the observed one, under the null hypothesis that the z_k are normally distributed. When the p-value is above the significance level α, say 5%, the null hypothesis that the z_k are normally distributed is not rejected.
Since the concordance probability is restricted to [0, 1], the normality assumption of random effects meta-analysis may be violated. We considered inverse variance weighted meta-analysis on the log-odds scale as an alternative approach (methods 7 and 8 for fixed effect and random effects meta-analysis, respectively). The resulting estimators for the within-cluster concordance probability are defined in Appendix 2. The normality assumption on the log-odds scale was again assessed with a normal probability plot and the Shapiro-Wilk test.
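As an illustrative sketch (not the authors' R code): a DerSimonian-Laird estimate of the between-cluster variance with pooling on both scales. All inputs are hypothetical, the delta-method variance on the logit scale is an assumption of this sketch, and the prediction interval uses a simple normal approximation that ignores the standard error of the pooled mean:

```python
import math

def dersimonian_laird(y, v):
    """Random effects meta-analysis (DerSimonian-Laird): returns the
    pooled mean and the between-cluster variance estimate tau^2."""
    w = [1 / vi for vi in v]                       # fixed effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    denom = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / denom)    # truncated at zero
    w_star = [1 / (vi + tau2) for vi in v]         # method 6 (or 8 on logit scale)
    mean = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return mean, tau2

def logit(p): return math.log(p / (1 - p))
def expit(x): return 1 / (1 + math.exp(-x))

# hypothetical cluster-specific c-indexes and sampling variances
c = [0.68, 0.75, 0.80, 0.84]
v = [0.0008, 0.0006, 0.0005, 0.0009]

# probability scale (method 6), with an approximate 95% prediction
# interval for a new cluster's c-index
mean, tau2 = dersimonian_laird(c, v)
half = 1.96 * math.sqrt(tau2)
print(mean, tau2, (mean - half, mean + half))

# log-odds scale (method 8): pool logits, back-transform with expit;
# variances moved to the logit scale via the delta method
vl = [vi / (ci * (1 - ci)) ** 2 for ci, vi in zip(c, v)]
mean_l, _ = dersimonian_laird([logit(ci) for ci in c], vl)
print(expit(mean_l))
```

When the cluster-specific estimates show no more variation than expected by chance, tau² is truncated to zero and the pooled mean reduces to the inverse-variance fixed effect estimate.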
Overview of the 8 methods for pooling of cluster-specific concordance probability estimates (weights per method):

Meta-analysis of cluster-specific estimates of the concordance probability
- Fixed effect meta-analysis (assuming the same true concordance probability within each cluster):
  1. Equal weight for each cluster
  2. Number of subjects in the cluster
  3. Number of subjects in the cluster with an event
  4. Number of usable subject pairs within the cluster
  5. Inverse of the cluster-specific sampling variance estimate
- Random effects meta-analysis (assuming variation in true concordance probabilities across clusters):
  6. Inverse of the sum of the cluster-specific sampling variance estimate and the between-cluster variance estimate

Meta-analysis of cluster-specific estimates of the logit concordance probability
- Fixed effect meta-analysis (assuming the same true logit concordance probability within each cluster):
  7. Inverse of the cluster-specific sampling variance estimate on log-odds scale
- Random effects meta-analysis (assuming variation in true logit concordance probabilities across clusters):
  8. Inverse of the sum of the cluster-specific sampling variance estimate on log-odds scale and the between-cluster variance estimate on log-odds scale
Patient characteristics in selected European and Asian centers: age (median, 25–75 percentile), GCS Motor Score (no response (1), abnormal flexion (3), normal flexion (4)), pupil reactivity (no, one or both pupils reacted), and patients per center (median, 25–75 percentile).
Associations between predictors and 6-month mortality in European centers, expressed as hazard ratios (HR, 95% CI): age (47 versus 23 years), GCS Motor Score (no response (1), abnormal flexion (3), normal flexion (4)), pupil reactivity (no, one or both pupils reacted), and the center random effect (75 versus 25 percentile).
We studied how to assess the discriminative ability of risk models in clustered data. The within-cluster concordance probability is an important measure for risk models when these models are used to support decisions on interventions within the clusters. The within-cluster concordance probability can be estimated by pooling cluster-specific concordance probability estimates (e.g. c-indexes) with a meta-analysis, similar to pooling of study-specific treatment effect estimates. We considered different pooling strategies (Table 1) and recommend random effects meta-analysis in case of substantial variability – beyond chance – of the concordance probability across clusters [20, 21]. To decide if the meta-analysis should be undertaken on the probability scale or the log-odds scale we suggest considering the normality assumptions on both scales by normal probability plots and Shapiro-Wilk tests of the standardized residuals.
The illustration of predicting 6-month mortality after TBI prompted the use of random effects meta-analysis because of the substantial variation – beyond chance – in concordance probability among centers. This was clearly visualized by the forest plot in Figure 2. Random effects meta-analysis results can be summarized by the mean concordance probability and a 95% prediction interval for possible values of the concordance probability. In contrast with fixed effect meta-analysis results, these results give insight into the variation of the discriminative ability among centers [20, 21]. By comparing normal probability plots and Shapiro-Wilk test results based on the standardized residuals we concluded that the random effects meta-analysis on the probability scale was most appropriate (Figure 4). Although the methodology is illustrated with time-to-event outcomes of traumatic brain injury patients, it is also applicable to binary outcomes.
Even if a risk model contains regression coefficients that are optimal for the data in each cluster, differences in case mix may lead to different concordance probabilities across clusters [24]. Furthermore, predictor effects may vary because of cluster-specific circumstances, also leading to different cluster-specific concordance probabilities. Given the variability beyond chance in our case study, we consider a random effects meta-analysis of the cluster-specific c-indexes most appropriate.
The assumption of random effects meta-analysis is that the underlying concordance probabilities among clusters are exchangeable, i.e. cluster-specific concordance probabilities are expected to be non-identical, yet identically distributed. If part of the variation can be explained by cluster characteristics, a meta-regression – assuming partial exchangeability – of the concordance probability estimates, with cluster characteristics as covariates, is preferable.
We chose to analyse the concordance probability as it is the most commonly used measure of discriminative ability of a risk model. However, the same logic of pooling cluster-specific performance measure estimates can be applied to any other performance measure, like the discrimination slope, the explained variation (R²) or the Brier score [1].
We used Harrell’s c-index to estimate cluster-specific concordance probabilities, together with Quade’s formula for the cluster-specific variances of the c-index [2, 16]. The same methodology of pooling cluster-specific performance measure estimates can be applied to other concordance probability estimators and their variances. Other estimators for the concordance probability in time-to-event data can be found in Gönen and Heller [26] and Uno et al. [27]. These estimators are especially favourable when censoring varies by cluster, as they have been shown to be less sensitive to the censoring distribution. Other variance estimators are described by Hanley and McNeil [28] and DeLong et al. [29] for binary outcome data, and by Nam and D'Agostino [30] and Pencina and D'Agostino [3] for time-to-event outcome data. The variance of the concordance probability estimate can also be estimated with a bootstrap procedure [31].
We recommend meta-analysis of cluster-specific c-indexes when assessing discriminative ability of risk models used to support decisions at cluster level. Particularly, random effects meta-analysis should be considered as it allows for and provides insight into the variability of the concordance probability among clusters.
For methods 7 and 8, the resulting pooled estimates, together with confidence and prediction intervals, are transformed back to the probability scale.
CRASH: Corticosteroid randomisation after significant head injury
GCS: Glasgow coma scale
GOS: Glasgow outcome scale
This work was supported by the Netherlands Organisation for Scientific Research (grant 917.11.383).
The authors express their gratitude to all of the principal investigators of the CRASH trial for providing the data. We thank Prof. Emmanuel Lesaffre (Department of Biostatistics, Erasmus MC, Rotterdam, The Netherlands) for helpful comments.
1. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW: Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010, 21(1): 128-138.
2. Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA: Evaluating the yield of medical tests. JAMA. 1982, 247(18): 2543-2546.
3. Pencina MJ, D'Agostino RB: Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation. Stat Med. 2004, 23(13): 2109-2123.
4. Bouwmeester W, Twisk JW, Kappen TH, van Klei WA, Moons KG, Vergouwe Y: Prediction models for clustered data: comparison of a random intercept and standard regression model. BMC Med Res Methodol. 2013, 13: 19.
5. Gelman A, Hill J: Data Analysis Using Regression and Multilevel/Hierarchical Models. 2007, Cambridge: Cambridge University Press.
6. Duchateau L, Janssen P: The Frailty Model. 2008, New York: Springer.
7. Van Oirbeek R, Lesaffre E: An application of Harrell's C-index to PH frailty models. Stat Med. 2010, 29(30): 3160-3171.
8. Van Oirbeek R, Lesaffre E: Assessing the predictive ability of a multilevel binary regression model. Comput Stat Data Anal. 2012, 56(6): 1966-1980.
9. MRC CRASH Trial Collaborators, Perel P, Arango M, Clayton T, Edwards P, Komolafe E, Poccock S, Roberts I, Shakur H, Steyerberg E, et al: Predicting outcome after traumatic brain injury: practical prognostic models based on large cohort of international patients. BMJ. 2008, 336(7641): 425-429.
10. Steyerberg EW, Mushkudiani N, Perel P, Butcher I, Lu J, McHugh GS, Murray GD, Marmarou A, Roberts I, Habbema JD, et al: Predicting outcome after traumatic brain injury: development and international validation of prognostic scores based on admission characteristics. PLoS Med. 2008, 5(8): e165.
11. Edwards P, Arango M, Balica L, Cottingham R, El-Sayed H, Farrell B, Fernandes J, Gogichaisvili T, Golden N, Hartzenberg B, et al: Final results of MRC CRASH, a randomised placebo-controlled trial of intravenous corticosteroid in adults with head injury - outcomes at 6 months. Lancet. 2005, 365(9475): 1957-1959.
12. Teasdale G, Jennett B: Assessment of coma and impaired consciousness. A practical scale. Lancet. 1974, 2(7872): 81-84.
13. Jennett B, Bond M: Assessment of outcome after severe brain damage. Lancet. 1975, 1(7905): 480-484.
14. R Development Core Team: R: A Language and Environment for Statistical Computing. 2011, Vienna, Austria: R Foundation for Statistical Computing. http://www.R-project.org/
15. Therneau T (original S->R port by Lumley T): survival: Survival Analysis, Including Penalised Likelihood. R package version 2.36-9. 2011. http://CRAN.R-project.org/package=survival
16. Quade D: Nonparametric Partial Correlation. Institute of Statistics Mimeo Series No. 526. 1967, North Carolina.
17. Higgins JP, Thompson SG: Quantifying heterogeneity in a meta-analysis. Stat Med. 2002, 21(11): 1539-1558.
18. DerSimonian R, Laird N: Meta-analysis in clinical trials. Control Clin Trials. 1986, 7(3): 177-188.
19. DerSimonian R, Kacker R: Random-effects model for meta-analysis of clinical trials: an update. Contemp Clin Trials. 2007, 28(2): 105-114.
20. Higgins JPT, Thompson SG, Spiegelhalter DJ: A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A. 2009, 172(1): 137-159.
21. Riley RD, Higgins JPT, Deeks JJ: Interpretation of random effects meta-analyses. BMJ. 2011, 342: d549.
22. Hardy RJ, Thompson SG: Detecting and describing heterogeneity in meta-analysis. Stat Med. 1998, 17(8): 841-856.
23. Lumley T: rmeta: Meta-analysis. R package version 2.16. 2009. http://CRAN.R-project.org/package=rmeta
24. Vergouwe Y, Moons KG, Steyerberg EW: External validity of risk models: use of benchmark values to disentangle a case-mix effect from incorrect coefficients. Am J Epidemiol. 2010, 172(8): 971-980.
25. Steyerberg EW: Clinical Prediction Models: A Practical Approach to Development, Validation, and Updating. 2009, New York: Springer.
26. Gönen M, Heller G: Concordance probability and discriminatory power in proportional hazards regression. Biometrika. 2005, 92(4): 965-970.
27. Uno H, Cai T, Pencina MJ, D'Agostino RB, Wei LJ: On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Stat Med. 2011, 30(10): 1105-1117.
28. Hanley JA, McNeil BJ: The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982, 143(1): 29-36.
29. DeLong ER, DeLong DM, Clarke-Pearson DL: Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988, 44(3): 837-845.
30. Nam BH, D'Agostino RB: Discrimination index, the area under the ROC curve. In: Goodness-of-Fit Tests and Model Validity. 2002, Boston: Birkhauser, 267-279.
31. Efron B, Tibshirani R: An Introduction to the Bootstrap. 1993, Boca Raton, FL: CRC Press.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/14/5/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.