
Accommodating heterogeneous missing data patterns for prostate cancer risk prediction



We compared six commonly used logistic regression methods for accommodating missing risk factor data from multiple heterogeneous cohorts, in which some cohorts do not collect some risk factors at all, and developed an online risk prediction tool that accommodates missing risk factors from the end-user.


Ten North American and European cohorts from the Prostate Biopsy Collaborative Group (PBCG) were used for fitting a risk prediction tool for clinically significant prostate cancer, defined as Gleason grade group ≥ 2 on standard TRUS prostate biopsy. One large European PBCG cohort was withheld for external validation, where calibration-in-the-large (CIL), calibration curves, and the area under the receiver operating characteristic curve (AUC) were evaluated. Leave-one-cohort-out internal cross-validation across the ten training cohorts further validated the optimal missing data approach.


Among 12,703 biopsies from the 10 training cohorts, 3,597 (28%) had clinically significant prostate cancer, compared to 1,757 of 5,540 (32%) in the external validation cohort. In external validation, the available cases method, which pooled individual patient data containing all risk factors input by an end-user, had the best CIL, under-predicting risks as percentages by 2.9% on average, and obtained an AUC of 75.7%. Imputation had the worst CIL (-13.3%). The available cases method was further validated as optimal in internal cross-validation and was thus used for development of an online risk tool. For end-users of the risk tool, two risk factors were mandatory, serum prostate-specific antigen (PSA) and age, and ten were optional: digital rectal exam, prostate volume, prior negative biopsy, 5-alpha-reductase-inhibitor use, prior PSA screen, African ancestry, Hispanic ethnicity, and first-degree prostate cancer, first-degree breast cancer, and second-degree prostate cancer family history.


Developers of clinical risk prediction tools should optimize use of available data and sources even in the presence of high amounts of missing data and offer options for users with missing risk factors.



The Prostate Biopsy Collaborative Group (PBCG) was established with the aim to improve the understanding of heterogeneity in prostate cancer biopsy outcomes across international clinical centers [1]. Figure 1 shows the range of number of biopsies and prevalence of clinically significant prostate cancer, defined as Gleason grade group ≥ 2, across 11 PBCG cohorts. Previously, the PBCG developed an online risk tool based on the small set of standard risk factors routinely collected in practice: prostate-specific antigen (PSA), digital rectal exam (DRE), age, African ancestry, first-degree prostate cancer family history, and history of a prior negative prostate biopsy [2]. For developing the prior tool, multiple methods for aggregating clinical data on a small number of variables across heterogeneous centers comprising different risk factor distributions and risk factor-outcome associations were compared. The simplest approach of pooling individual-level data and fitting a multiple logistic regression model proved to be most accurate [3]. The resulting risk calculator was published online to facilitate its use in daily routine [4,5,6,7,8].

Fig. 1

Sample sizes represented by the height of rectangles and prevalence of significant prostate cancer represented by the width of rectangles for the 11 PBCG cohorts used in the study. The cohorts have been numbered according to their rank of clinically significant prostate cancer prevalence. The 3rd cohort in black outline was withheld to serve as an external validation cohort with the remaining 10 cohorts used for training prediction models

The PBCG had requested additional risk factors beyond those included in the current tool from its participating cohorts, but these were less rigorously collected, with some cohorts not collecting some of the risk factors at all (Fig. 2). We wanted to develop an adaptive tool using all the information available in Fig. 2 that would allow the user to enter as much (or as little) information as possible.

Fig. 2

Amount of missing risk factor data by cohort on the x-axis; all patients were required to have prostate-specific antigen (PSA) and age, hence 0% missing for these covariates. The 3rd cohort separated by the black vertical line is used as an external validation set, and leave-one-cohort-out cross-validation was applied to the other cohorts. Cohorts were sorted by missing data pattern

Missing data in clinical research is a ubiquitous problem, and a large number of statistical methods to account for it have been proposed [9, 10]. Most methods are applied to missing values in training data sets used to develop a model, but with the emerging use of online and electronic-health-record-embedded clinical risk tools, approaches for handling risk factors that are missing on the user end of a risk tool are coming into play. Recently, real-time imputation was proposed to extend needed cardiovascular disease management to patients with missing risk factors [11].

The aim of this study was to construct a clinically significant prostate cancer risk tool that would optimize the use of data from heterogeneous cohorts with varying missing data patterns and allow end-users access to the tool even when some risk factors are missing. For development of a risk model on multiple cohorts with varying missing data patterns, we found four philosophically distinct approaches: available case analyses, ensembles of cohort-specific models, missing indicator methods, and imputation. We compared six variations of these approaches and selected an optimal one for this application. For the end-user side, we adopted an individual patient tailored approach as implemented in our previous tools, whereby users input the risk factors they have available and a resulting prediction based on those risk factors is returned [2, 12].


The study was based on risk factor and outcome data collected from January 2006 to December 2019 from trans-rectal systematic 10–12 core biopsies from 10 PBCG cohorts spanning North America and Europe used for training and one PBCG European cohort used for validation (Figs. 1, 2, and S1). The risk factors collected included the standard risk factors used in clinical practice for prostate cancer diagnosis along with other less commonly used risk factors with proven associations to prostate cancer. All PBCG data were collected following local institutional review board (IRB) approval from the University of Texas Health Science Center at San Antonio, Memorial Sloan Kettering Cancer Center (MSKCC), Mayo Clinic, University of California San Francisco, Hamburg-Eppendorf University Clinic, Cleveland Clinic, Sunnybrook Health Sciences Centre, Veterans Affairs (VA) Caribbean Healthcare System, VA Durham, San Raffaele Hospital, and University Hospital Zurich. Analyses for this retrospective study were approved by the Technical University of Munich Rechts der Isar Hospital ethics committee, with all methods performed in accordance with the guidelines and regulations of the committee. As data collected were anonymized and obtained as part of standard clinical care, consent was waived by all IRBs, except regarding second-degree prostate cancer and first-degree breast cancer family history for the VA Durham. Written consent for these variables was obtained and documented as part of a larger separate study at the VA Durham prior to the beginning of this study. All institutional PBCG IRB approvals are maintained by the MSKCC central data coordinating center and IRB.

The 10 cohorts used for training the model followed the PBCG prospective protocol in data collection, whereas the external validation cohort supplied retrospective data from a single institution that performs a high annual number of prostate biopsies to the PBCG [2, 3]. Included data came from patients who had received a prostate biopsy following a PSA test under local standard-of-care and may be seen as representative of patients in North America, including Puerto Rico, and Europe. MRI biopsies as well as prostate biopsies from patients with prostate cancer were excluded. Clinically significant prostate cancer was defined as Gleason grade group ≥ 2 on biopsy [13]. For users of the developed risk calculator, two risk factors were mandatory: PSA and age. Ten risk factors were optional: DRE, prostate volume, prior negative biopsy, 5-alpha-reductase-inhibitor use, prior PSA screen (yes/no), African ancestry, Hispanic ethnicity, and first- and second-degree prostate cancer and first-degree breast cancer family history.

We performed a literature search to identify the six most commonly used approaches for handling missing data in multivariable logistic regression modeling, for either single or multiple cohorts as found in this study. All of the approaches could be implemented in the R statistical package. Our aim was to identify the most accurate approach for implementation in the online tool. To increase flexibility of the tool, we tailored each method to the specific list of risk factors available for an individual. That is, for a validation set, the algorithms were applied for each individual in the validation set separately. All algorithms return logistic-regression-based expressions for probability of clinically significant prostate cancer; the cohort ensemble approach averages these for the individual cohorts. The methods are summarized in Tables 1 and S1.

Table 1 Methods for fitting individual predictor-specific risk models for members of a test set by combining data from multiple cohorts. All individuals in the training and test cohorts have 2 predictors, PSA and age, and then any subset, including none, of 10 additional predictors for a total of 12 predictors, denoted by \(\mathrm{X}\). The set of predictors available for the new individual is denoted by \({\mathrm{X}}^{*}\). All models use logistic regression for prediction of clinically significant prostate cancer. MICE = multiple imputation by chained equations; BIC = Bayesian information criterion, defined as \(-2 \times\) (maximized log likelihood) + (number of covariates) \(\times\) log(sample size)
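As a concrete reference for the selection criterion named in Table 1, a minimal sketch of the BIC formula; the function name is ours, not from the paper's code:

```python
import math

def bic(max_loglik, n_covariates, n_samples):
    """Bayesian information criterion as defined in Table 1:
    -2 * (maximized log likelihood) + (number of covariates) * log(sample size).
    Lower values indicate a better trade-off of fit against complexity."""
    return -2.0 * max_loglik + n_covariates * math.log(n_samples)
```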

The available cases algorithm pooled individual-level data from the training cohorts with information on the variables that the end-user had available, fit a main effects logistic regression model for clinically significant prostate cancer to the training data, and used the coefficients in a tailored prediction model for the target patient. The iterative Bayesian information criterion (BIC) selection method added stepwise BIC-based model selection to the available cases algorithm, allowing two-way interactions to be included. If a risk factor was not chosen in the optimal model by the selection process, the procedure was re-started excluding that risk factor, allowing a greater number of individuals from the training set to be included in model development.
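The study's analyses were performed in R; purely as an illustration, the following Python sketch shows the available cases idea: restrict the pooled training data to rows complete on exactly the predictors the end-user supplied, fit a logistic regression, and return a tailored risk. All names and the IRLS fitting routine are ours, not the paper's code:

```python
import numpy as np

def available_cases_fit(X, y, user_vars):
    """Fit a main-effects logistic regression (via IRLS/Newton-Raphson)
    to the training rows that are complete on the variables the
    end-user supplied.

    X : dict mapping variable name -> 1-D array, np.nan for missing
    y : binary outcome array (1 = clinically significant cancer)
    user_vars : the variables the end-user can provide
    """
    M = np.column_stack([X[v] for v in user_vars])
    keep = ~np.isnan(M).any(axis=1)            # complete cases for this subset
    A = np.column_stack([np.ones(keep.sum()), M[keep]])
    t = y[keep].astype(float)
    beta = np.zeros(A.shape[1])
    for _ in range(25):                        # Newton-Raphson iterations
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        W = p * (1.0 - p)
        grad = A.T @ (t - p)
        hess = A.T @ (A * W[:, None])
        beta += np.linalg.solve(hess + 1e-8 * np.eye(len(beta)), grad)
    return beta

def predict_risk(beta, x_new):
    """Tailored predicted risk for a new patient's available risk factors."""
    z = beta[0] + np.dot(beta[1:], x_new)
    return 1.0 / (1.0 + np.exp(-z))
```

A different model is thus fit for every distinct set of risk factors an end-user enters, each one using the maximum number of training biopsies complete on that set.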

Rather than pooling data across cohorts, the cohort ensemble method constructed separate models for each cohort, restricting to risk factors available to the end-user and collected by the training cohort. A risk factor was considered available in a training cohort if it was measured in 40% or more participants; otherwise it was considered missing and not included, so as not to prohibitively reduce the sample size for constructing a cohort-specific model. Because models were fit to single cohorts and some of the cohorts had small sample sizes, information from individual cohorts could be limited or inadequate for robust multivariable model construction, as for example, cohort 10 with only 243 biopsies. Such cohorts were not excluded because, while they may lack power for obtaining statistical significance of individual coefficients, the goal here was optimizing out-of-sample prediction. Cohort-specific risks were averaged over the cohorts for the result provided to the end-user.
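The 40% availability rule and the averaging step can be sketched as follows; this is an illustrative Python skeleton with our own names, where `fit` stands in for any cohort-specific model-fitting routine (in the paper, logistic regression):

```python
import numpy as np

def cohort_available_vars(cohort, user_vars, threshold=0.40):
    """Risk factors both supplied by the end-user and measured in at
    least `threshold` (40% in the paper) of the cohort's participants."""
    return [v for v in user_vars
            if np.mean(~np.isnan(cohort[v])) >= threshold]

def ensemble_risk(cohorts, fit, x_new, user_vars):
    """Cohort ensemble prediction: fit one model per cohort on its
    available variables, then average the cohort-specific risks.
    `fit(cohort, vars)` must return a risk function of the new patient."""
    risks = []
    for c in cohorts:
        vars_c = cohort_available_vars(c, user_vars)
        risks.append(fit(c, vars_c)(x_new))
    return float(np.mean(risks))
```

The averaging gives each cohort equal weight regardless of its sample size, which is one reason small cohorts such as cohort 10 still contribute.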

The categorization algorithm returned to pooling data across all training cohorts, and additionally transformed all continuous risk factors to categorical so that missing could be added as an extra category. For inherently categorical risk factors, such as DRE, categories were coded as normal, abnormal, and missing. Prostate volume was stratified as < 30, 30–50, and > 50 cc, as previously suggested so that it could be obtained by pre-biopsy DRE or TRUS, before adding the additional category of missing [14]. The advantage of this approach was that only one model is fit and needed by the end-user. The missing indicator algorithm was similar to the categorization algorithm, but did not require categorization of continuous variables [15]. Instead, it introduced an indicator equal to 1 if the corresponding risk factor was missing versus 0 if not missing; the model included the indicator and its interaction with the risk factor. Since prostate volume was the only continuous risk factor that was sometimes missing, the missing indicator algorithm differed from the categorization algorithm in only one variable. Second-degree prostate cancer and first-degree breast cancer family history were either both collected or not collected at all by the individual cohorts, so adding a missing category to each would induce multi-collinearity. To avoid this, they were combined into a single new 5-category risk factor with levels: second-degree prostate cancer family history only, first-degree breast cancer family history only, both present, neither present, or missing.
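As an illustrative sketch (names ours), the two encodings can be written as below. For the missing indicator method we use the zero-fill parameterization, which is equivalent to including the indicator plus its interaction with the risk factor: the slope applies only to observed values, while the indicator absorbs the shift for patients with the value missing:

```python
import numpy as np

def missing_indicator_columns(x):
    """Design-matrix columns for the missing indicator method applied to
    one continuous predictor: missings set to 0 and flagged by an
    indicator (equivalent to the indicator-plus-interaction model)."""
    miss = np.isnan(x).astype(float)
    filled = np.where(miss == 1.0, 0.0, x)
    return np.column_stack([filled, miss])

def categorize_volume(vol):
    """Prostate volume strata used by the categorization method,
    with 'missing' added as an extra category."""
    if np.isnan(vol):
        return "missing"
    if vol < 30:
        return "<30"
    if vol <= 50:
        return "30-50"
    return ">50"
```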

Multiple imputation has been recommended for fitting statistical models to training data when either outcomes or risk factors are missing at random (MAR) [16]. In the case here, the outcome of clinically significant prostate cancer was not missing for any individuals, so imputation was applied only to missing risk factors. Data were pooled across all ten cohorts to form the training set, and imputation was applied to the pooled set rather than by cohort. For a patient in the training set with multiple missing risk factors, multiple imputation by chained equations (MICE) sequentially imputes missing data according to full conditional models appropriate to each risk factor's data type, using all other available risk factors and the outcome as covariates [16, 17]. The R mice package uses 5 imputations by default, and the literature has also recommended 10 imputations [16, 18]. We implemented 30 imputations, matching the average percentage of missing values across all risk factors in the training set, and averaged models built on the 30 imputed data sets for the final training set risk model. For an end-user or member of the validation set who is missing a risk factor, the algorithm imputed its value using mean values from the training set only, and not from other members of the validation set, as the latter would not be available in practice [17].
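The end-user-side fallback described above (training-set means only, never values from other validation patients) is simple enough to sketch directly; the training-side MICE procedure itself is more involved and was done with the R mice package. Function names here are ours:

```python
import numpy as np

def training_means(X_train):
    """Per-risk-factor means over observed training values (np.nan = missing)."""
    return {v: float(np.nanmean(col)) for v, col in X_train.items()}

def impute_user_input(user_input, means):
    """Fill an end-user's missing risk factors with training-set means.
    Values from other validation patients are never used, since they
    would not be available when the tool is deployed."""
    return {v: (user_input[v] if user_input.get(v) is not None else m)
            for v, m in means.items()}
```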

External validation on the European cohort, which was not used for training, was measured by discrimination using the area under the receiver-operating-characteristic curve (AUC) along with its 95% confidence interval (CI); calibration-in-the-large (CIL), which evaluates the average difference between the predicted risk and the binary clinically significant prostate cancer outcome across patients in the validation set; and calibration-in-the-small, via calibration curves of observed versus predicted risk according to deciles of predicted risk. Internal leave-one-cohort-out cross-validation using the same metrics was also performed, by alternately holding out one of the 10 PBCG cohorts used for training the model as a test set and training the models on the remaining 9 cohorts. Distributions of AUCs and CILs from the 10 test validations were visualized by violin plots showing smoothed histograms and boxplots showing medians and inter-quartile ranges. All analyses were performed in the R statistical package [19].
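The two scalar metrics can be sketched as follows; this is an illustrative Python version (the analyses were in R), with the sign convention used later in the paper, where negative CIL means under-prediction on average:

```python
import numpy as np

def calibration_in_the_large(pred, y):
    """CIL: average predicted risk minus the observed event rate.
    Negative values indicate under-prediction on average."""
    pred, y = np.asarray(pred, float), np.asarray(y, float)
    return float(np.mean(pred) - np.mean(y))

def auc(pred, y):
    """AUC via the rank-sum (Mann-Whitney) identity, with midranks for
    tied predictions."""
    pred, y = np.asarray(pred, float), np.asarray(y)
    order = np.argsort(pred)
    ranks = np.empty(len(pred), dtype=float)
    ranks[order] = np.arange(1, len(pred) + 1)
    for v in np.unique(pred):          # average ranks within ties
        tie = pred == v
        ranks[tie] = ranks[tie].mean()
    n1 = float(np.sum(y == 1))
    n0 = float(np.sum(y == 0))
    return float((ranks[y == 1].sum() - n1 * (n1 + 1) / 2.0) / (n1 * n0))
```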


Among 12,703 biopsies from 10 PBCG cohorts used for training, 3,597 (28%) had clinically significant prostate cancer, compared to 1,757 out of 5,540 (32%) clinically significant prostate cancer cases in the external validation cohort (Fig. 1). All cohorts collected PSA and age but varied in collection of the other 10 risk factors, with some cohorts not collecting some risk factors at all (Fig. 2). Differences between the cohorts in terms of distributions of the twelve risk factors and their associations with clinically significant prostate cancer are shown in Fig S1.

In leave-one-cohort-out internal cross-validation across the ten PBCG cohorts to ultimately be used for training the online model, the iterative BIC selection method had the lowest median CIL (-0.2%), while the available cases method had the highest (2.6%), all of which are minor in magnitude (Fig. 3). CIL values ranged from -11 to 11% across the ten cohorts used as test sets. All six methods had nearly the same median AUC at 80%, and values ranged from 74 to 84% across the ten test sets. The categorization and missing indicator methods had larger variation in both the CIL and AUC than the other methods.

Fig. 3

CIL and AUC performing leave-one-cohort-out cross-validation on 10 PBCG cohorts. Median values are indicated with numbers and as vertical lines in the boxes

In external validation, all six methods either under- or over-predicted observed risks since none of the 95% CIs for CIL, computed as the average predicted risk minus the disease prevalence in the external validation cohort (32%), contained the value 0 (Table 2). The available cases method was the most accurate, under-predicting risk on average by 2.9%. The categorization and missing indicator methods over-predicted risks by 3.5% and 4.2%, respectively, while all other methods under-predicted risks, with imputation the worst by 12.4% (Table 2, Fig. 4). The AUCs ranged from a low of 75.4% for the iterative BIC selection method to a high of 77.4% for the missing indicator method, but all 95% CIs overlapped (Table 2).

Table 2 External validation CIL and AUC values with risks as percentages along with 95% confidence intervals (CI)
Fig. 4

Calibration plots with shaded pointwise 95% confidence intervals for the 6 modeling methods applied to 10 PBCG training cohorts and validated on the external cohort. The diagonal black line is where predicted risks equal observed risks, lines below the diagonal indicate over-prediction, and lines above under-prediction, on the validation set

Comparisons of individual predictions from the six different methods for the 5,540 members of the external validation cohort are shown in Fig. 5. As can be seen on the diagonal, for all methods the distributions of predicted risks for clinically significant prostate cancer cases were higher than for individuals without clinically significant prostate cancer, but considerable overlap remained. Correlations of predictions by the 6 methods were high, all exceeding 0.8. The iterative BIC selection, cohort ensemble, and available cases methods all used only complete cases for the risk factor profile of a specified individual and hence were highly correlated with each other. The remaining three methods adjusted for missing data in some manner and were less correlated with the first three, with categorization the least correlated, though still highly so.

Fig. 5

Marginal and pairwise comparisons of predictions from the 6 methods for the 5543 biopsies of the external validation set, pooled and stratified by clinically significant prostate cancer status (31.7% with clinically significant prostate cancer). Corr indicates Pearson correlation. Turquoise indicates individuals with clinically significant prostate cancer and purple not

We chose the available cases method for implementation of the risk tool online since it showed the best calibration accuracy in external validation (Fig. 4), where all six methods showed equivalent AUCs (Table 2). AUCs and CILs across the 10 cohorts used as test sets in the internal leave-one-cohort-out cross-validation were also similar, and the available cases method had the lowest variability (Fig. 3). The available cases method is less computationally intensive than multiple imputation and is valid under MAR assumptions for unobserved risk factors and outcomes, which, though untestable, may be assumed to hold approximately when all established risk factors for the outcome have been collected [20].

To implement the prediction tool online, we fit 1,024 models to cover all possible missing risk factor patterns among the 10 optional risk factors, in order to use the maximum number of prostate biopsies possible from the 10 PBCG cohorts. R code for all 1,024 models is available in the Cleveland Clinic Risk Calculator library, as well as in Additional file 2.
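The count of 1,024 models is simply the number of subsets of the 10 optional risk factors (2^10), each combined with the two mandatory ones. A sketch of the enumeration, where the variable labels are illustrative names of ours, not those in the paper's R code:

```python
from itertools import combinations

MANDATORY = ["psa", "age"]
OPTIONAL = ["dre", "prostate_volume", "prior_negative_biopsy", "5ari_use",
            "prior_psa_screen", "african_ancestry", "hispanic_ethnicity",
            "first_degree_prostate_fh", "first_degree_breast_fh",
            "second_degree_prostate_fh"]

def all_model_specs():
    """Every subset of the 10 optional risk factors, always combined with
    the two mandatory ones: 2**10 = 1,024 model specifications."""
    specs = []
    for k in range(len(OPTIONAL) + 1):
        for subset in combinations(OPTIONAL, k):
            specs.append(MANDATORY + list(subset))
    return specs
```

Each specification would then be fit to all training biopsies complete on that set of risk factors, ranging from all 12,703 biopsies (PSA and age only) down to the 1,334 complete cases for the full 12-variable model.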

The smallest model contains only PSA and age, utilizing all 12,703 biopsies from the 10 PBCG cohorts since these two risk factors were measured for all individuals. The largest model contains all 12 risk factors and was constructed from only 1,334 biopsies from 3 PBCG cohorts, as these were the only complete cases. These two risk models are shown in Table 3, with all possible models accessible online. Evaluated on the same validation set of 5,540 biopsies as used for Table 2, the original PBCG risk tool published in 2018 [2], based on only 6 of the 12 risk factors used here, obtained a CIL of -5.9 (95% CI -7.1, -4.7) and an AUC of 66.9 (95% CI 65.4, 68.5), approximately 10 points lower than any of the methods incorporating the additional risk factors. Adding just prostate volume to these six risk factors and evaluating on the validation set yielded a CIL of -10.1 (95% CI -11.2, -9.0) and an AUC of 75.6 (95% CI 74.2, 76.9; p-value < 0.0001 for the test of equality of this AUC to that from the standard model). Assessment of prostate volume, however, requires an invasive procedure that is not routinely performed in advance of the prostate biopsy.

Table 3 Odds ratios from the largest, standard, and smallest models in terms of number of 12 risk factors available from an end-user. Sample sizes are the number of individuals in the training set with all risk factors available (complete cases), and number of cohorts contributing the complete cases. In total 1,024 models are available based on the option for included versus not for 10 risk factors, all except PSA and age


Systematically missing clinical data across heterogeneous cohorts pose analysis challenges for both model developers and end-users. We compared six methods that have been proposed for handling missing data, with the objective of finding the method most likely to perform well in multiple external validation studies of a globally accessible online risk tool. As with all online risk tools, online publication of the original PBCG tool continues to result in published external validation studies providing evidence for or against its generalizability to other populations, particularly in comparison to other published tools [4, 21,22,23,24,25,26]. To date, by excluding prostate volume, the original PBCG tool has competed less favorably with other tools incorporating this information. Publication of the expanded risk tool incorporating prostate volume will hopefully increase its accuracy for doctors and patients, as will be assessed by forthcoming external validation studies.

Available case methods have been recommended by statisticians as being robust against missing at random (MAR) data mechanisms [10, 20, 27]. The majority of risk factors collected across the PBCG are those typically collected in urological clinics from men presenting for PSA screening or follow-up. The most ubiquitous and predictive risk factors, PSA and age, were collected for all PBCG participants, and so are exempt from missing data assumptions. Men typically receive multiple PSA screening tests; the PBCG used the PSA measured most recently prior to the prostate biopsy. The assumption of MAR for the remaining risk factors may be questionable in some cases; for example, prostate volume may not have been reported when the value was assessed to be too low or when clinically significant prostate cancer was not discovered on biopsy. There is no statistical test for MAR, hence we relied on external and internal cohort-based validations to compare the available cases method to competing methods and select the one producing optimal performance across a range of scenarios that would be encountered in practice.

The missing indicator method has been shown to potentially result in biased odds ratios, even when data are missing completely at random, meaning no relation, conditional or not, between whether a risk factor is missing and all other variables, leading to strong recommendations against its use for causal or explanatory inference [10, 20, 28]. The categorization method suffers from the same potential biases since it changes all continuous predictors to categorical ones before applying the missing indicator method. A recent study affirmed that such methods could be used for randomized trials, as the missingness of protocol-specified variables would be randomized by the random treatment assignment, thus eliminating systematic bias [15].

The emergence of clinical risk prediction tools embedded in electronic health records, where missing data are extensive and systematic, has led to support for using the missing indicator method in model development so that it matches the method used when the model is deployed, and for leveraging informative presence when it is potentially informative with respect to prediction [29, 30]. Machine learning and other supervised learning methods follow the principle of developing prediction models to optimize accuracy on internal and external validation, often with uninterpretable models. The renowned James–Stein result shows that an estimator with effects shrunk towards zero can be preferable to the unbiased estimator, and these concepts are often applied in regularized regression approaches for situations with high numbers of predictors [31]. Alternative statistical approaches to evaluating competing methods for handling missing data concern themselves with evaluation of bias and efficiency under simulation of theoretical scenarios. Hoogland et al. (2019) performed an extensive simulation study of 9 methods for accommodating missing data in 6-variable logistic regression under 8 missing data mechanisms, including MAR and missing completely at random (MCAR) [32]. Their simulation showed that the available case method used for the risk tool developed here, which they referred to as the 2^k submodels method, performed optimally in terms of lowest bias of the AUC, along with multiple imputation. Their simulation scenarios contained fewer covariates, did not consider calibration, and did not contain multiple cohorts as in this study. In lieu of simulation, this study performed leave-one-cohort-out cross-validation along with the gold standard of external validation on a completely independent cohort, as the tool would be used in practice.
Further simulation studies using logistic regression coefficients as found in this study could be performed and would be likely to confirm the choice of method as found by Hoogland et al. (2019).

Across the validations performed in the PBCG, the potentially biased missing indicator and categorization methods did not perform substantially worse than the available cases method. But we agree that caution should be exercised towards their use when data are combined across cohorts where some cohorts do not collect some risk factors at all, as was the case with extended family history in this study. In this case, the effect of the missing category is confounded with that of cohort. The estimated odds ratio for missing prostate volume following the missing indicator method fit to the 10 PBCG cohorts was close to zero, implying a patient with missing prostate volume had nearly zero odds of clinically significant prostate cancer compared to a patient not missing prostate volume, which can only be a cohort effect.


In addition to contributing model development techniques for systematically missing data across heterogeneous cohorts, we have provided helpful methods for the end-user of online risk tools, namely fitting multiple models for different risk factor missing data patterns. Such work enables more users to access online risk tools. Each model was fit to all complete cases containing the relevant risk factors, thus optimizing information and accuracy for the user. Our online tool requires PSA and age for use, plus any collection of up to 10 additional risk factors. As consortia and available data grow in size, so does the amount of missing data. A flexible modeling strategy accommodating missing data on both the development and user end maximizes information by utilizing multiple data sources and increases accessibility to a broader range of patients, including those with limited risk factor assessment.

Availability of data and materials

Data cannot be shared publicly because they require a Data Transfer Agreement from the Grant holder, Memorial Sloan Kettering Cancer Center (MSKCC). Data are available from the MSKCC Office of Research and Technology Management for researchers who meet the criteria for access to confidential data.



AUC: Area under the receiver operating characteristic curve

BIC: Bayesian information criterion

CI: Confidence interval

CIL: Calibration-in-the-large

DRE: Digital rectal exam

IRB: Institutional review board

MAR: Missing at random

MICE: Multiple imputation by chained equations

MSKCC: Memorial Sloan Kettering Cancer Center

PBCG: Prostate Biopsy Collaborative Group

PSA: Prostate-specific antigen

VA: Veterans Affairs


  1. Vickers AJ, Cronin AM, Roobol MJ, Hugosson J, Jones JS, Kattan MW, et al. The relationship between prostate-specific antigen and prostate cancer risk: the prostate biopsy collaborative group. Clin Cancer Res. 2010;16(17):4374–81.


  2. Ankerst DP, Straubinger J, Selig K, Guerrios L, De Hoedt A, Hernandez J, et al. A contemporary prostate biopsy risk calculator based on multiple heterogeneous cohorts. Eur Urol. 2018;74(2):197–203.


  3. Tolksdorf J, Kattan MW, Boorjian SA, Freedland SJ, Saba K, Poyet C, et al. Multi-cohort modeling strategies for scalable globally accessible prostate cancer risk tools. BMC Med Res Methodol. 2019;19(1):191.


  4. Jalali A, Foley RW, Maweni RM, Murphy K, Lundon DJ, Lynch T, et al. A risk calculator to inform the need for a prostate biopsy: a rapid access clinic cohort. BMC Med Inform Decis Mak. 2020;20(1):148.


  5. Stojadinovic M, Trifunovic T, Jankovic S. Adaptation of the prostate biopsy collaborative group risk calculator in patients with PSA less than 10 ng/ml improves its performance. Int Urol Nephrol. 2020;52(10):1811–9.


  6. Mortezavi A, Palsdottir T, Eklund M, Chellappa V, Murugan SK, Saba K, et al. Head-to-head comparison of conventional, and image- and biomarker-based prostate cancer risk calculators. Eur Urol Focus. 2020;S2405–4569(20):30113–9.


  7. Rubio-Briones J, Borque-Fernando A, Esteban LM, Mascarós JM, Ramírez-Backhaus M, Casanova J, et al. Validation of a 2-gene mRNA urine test for the detection of ≥GG2 prostate cancer in an opportunistic screening population. Prostate. 2020;80(6):500–7.


  8. Carbunaru S, Nettey OS, Gogana P, Helenowski IB, Jovanovic B, Ruden M, et al. A comparative effectiveness analysis of the PBCG vs. PCPT risks calculators in a multi-ethnic cohort. BMC Urol. 2019;19(1):121.

  9. Janssen KJ, Donders AR, Harrell FE Jr, Vergouwe Y, Chen Q, Grobbee DE, Moons KG. Missing covariate data in medical research: to impute is better than to ignore. J Clin Epidemiol. 2010;63(7):721–7.

  10. Donders AR, van der Heijden GJ, Stijnen T, Moons KG. Review: a gentle introduction to imputation of missing values. J Clin Epidemiol. 2006;59(10):1087–91.

  11. Nijman SWJ, Groenhof TKJ, Hoogland J, Bots ML, Brandjes M, Jacobs JJL, et al. Real-time imputation of missing predictor values improved the application of prediction models in daily practice. J Clin Epidemiol. 2021;134:22–34.

  12. Ankerst DP, Hoefler J, Bock S, Goodman PJ, Vickers A, Hernandez J, et al. Prostate cancer prevention trial risk calculator 2.0 for the prediction of low- vs high-grade prostate cancer. Urology. 2014;83(6):1362–7.

  13. Zhou AG, Salles DC, Samarska IV, Epstein JI. How are gleason scores categorized in the current literature: an analysis and comparison of articles published in 2016–2017. Eur Urol. 2019;75(1):25–31.

  14. Roobol MJ, van Vugt HA, Loeb S, Zhu X, Bul M, Bangma CH, van Leenders AG, Steyerberg EW, Schröder FH. Prediction of prostate cancer risk: the role of prostate volume and digital rectal examination in the ERSPC risk calculators. Eur Urol. 2012;61(3):577–83.

  15. Groenwold RH, White IR, Donders AR, Carpenter JR, Altman DG, Moons KG. Missing covariate data in clinical research: when and when not to use the missing-indicator method for analysis. CMAJ. 2012;184(11):1265–9.

  16. Van Buuren S, Groothuis-Oudshoorn K. mice: multivariate imputation by chained equations in R. J Stat Softw. 2011;45(3):1–67.

  17. White IR, Royston P, Wood AM. Multiple imputation using chained equations: issues and guidance for practice. Stat Med. 2011;30(4):377–99.

  18. Bodner TE. What improves with increased missing data imputations? Struct Equ Modeling. 2008;15(4):651–75.

  19. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2020.

  20. Mealli F, Rubin DB. Clarifying missing at random and related definitions, and implications when coupled with exchangeability. Biometrika. 2015;102(4):995–1000.

  21. van Riel LAMJG, Jager A, Meijer D, Postema AW, Smit RS, Vis AN, et al. Predictors of clinically significant prostate cancer in biopsy-naïve and prior negative biopsy men with a negative prostate MRI: improving MRI-based screening with a novel risk calculator. Ther Adv Urol. 2022;14:17562872221088536.

  22. Yıldızhan M, Balcı M, Eroğlu U, Asil E, Coser S, Özercan AY, et al. An analysis of three different prostate cancer risk calculators applied prior to prostate biopsy: a Turkish cohort validation study. Andrologia. 2022;54(2): e14329.

  23. Doan P, Graham P, Lahoud J, Remmers S, Roobol MJ, Kim L, Patel MI. A comparison of prostate cancer prediction models in men undergoing both magnetic resonance imaging and transperineal biopsy: Are the models still relevant? BJU Int. 2021;128(Suppl 3):36–44.

  24. Amaya-Fragoso E, García-Pérez CM. Improving prostate biopsy decision making in Mexican patients: Still a major public health concern. Urol Oncol. 2021;39(12):831.e11-831.e18.

  25. Presti JC, Alexeeff S, Horton B, Prausnitz S, Avins AL. Prospective validation of the Kaiser permanente prostate cancer risk calculator in a contemporary, racially diverse, referral population. Urol Oncol. 2021;39(11):783.e11-783.e19.

  26. Carbunaru S, Nettey OS, Gogana P, Helenowski IB, Jovanovic B, Ruden M, et al. A comparative effectiveness analysis of the PBCG vs. PCPT risks calculators in a multi-ethnic cohort. BMC Urol. 2019;19(1):121.

  27. Hughes RA, Heron J, Sterne JAC, Tilling K. Accounting for missing data in statistical analyses: multiple imputation is not always the answer. Int J Epidemiol. 2019;48(4):1294–304.

  28. van der Heijden GJ, Donders AR, Stijnen T, Moons KG. Imputation of missing values is superior to complete case analysis and the missing-indicator method in multivariable diagnostic research: a clinical example. J Clin Epidemiol. 2006;59(10):1102–9.

  29. Sperrin M, Martin GP, Sisk R, Peek N. Missing data should be handled differently for prediction than for description or causal explanation. J Clin Epidemiol. 2020;125:183–7.

  30. Sisk R, Lin L, Sperrin M, Barrett JK, Tom B, Diaz-Ordaz K, et al. Informative presence and observation in routine health data: a review of methodology for clinical risk prediction. J Am Med Inform Assoc. 2021;28(1):155–66.

  31. Stein C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. Proc Third Berkeley Symp Math Statist Probab. 1956;1:197–206.

  32. Hoogland J, van Barreveld M, Debray TPA, Reitsma JB, Verstraelen TE, Dijkgraaf MGW, Zwinderman AH. Handling missing predictor values when validating and applying a prediction model to new patients. Stat Med. 2020;39:3591–607.


This material is based upon work supported by the Research and Development Service, Surgery Department, Urology Section and Department of Veterans Affairs, Caribbean Healthcare System San Juan, P.R.


Open Access funding enabled and organized by Projekt DEAL. Funding was provided by grants CA179115, P50-CA92629, P30-CA008748, W81XWH-15-1-0441, P30-CA054174, and K24-CA160653. The contents of this publication do not represent the views of the VA Caribbean Healthcare System, the Department of Veterans Affairs, or the US Government. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the US Department of Defense.

Author information

Methodology, original draft preparation, writing, and interpretation of results: MN and DPA. All authors contributed to writing, editing the manuscript, approved submission of the final manuscript, and have read and agreed to the published version of the manuscript.

Authors’ information

Not applicable.

Corresponding author

Correspondence to Matthias Neumair.

Ethics declarations

Ethics approval and consent to participate

PBCG data were collected following IRB approval at UT Health San Antonio, Memorial Sloan Kettering Cancer Center, Mayo Clinic, University of Zurich, VA Durham, VA Caribbean Healthcare System San Juan, P.R., San Raffaele Hospital, Sunnybrook Health Systems, University of California San Francisco, and Hamburg University Hospital. Analyses of anonymized data for this study were approved by the Technical University of Munich Ethics Commission. Consent was waived for all data collection, which involved only record extraction of standard clinical care elements. For the detailed family history collected at the Durham VA, written consent was obtained from all participants as required for a local study prior to this study.

Consent for publication

Not applicable.

Competing interests

Andrew Vickers receives royalties from the 4Kscore, a test used in prostate cancer. He owns stock options in Opko, which offers the test. All other authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Detailed description of the algorithms for the six risk modeling approaches, and differences between the cohorts in the distributions of the twelve risk factors and their associations with clinically significant prostate cancer.

Additional file 2:

R code for all 1,024 models available at

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Neumair, M., Kattan, M.W., Freedland, S.J. et al. Accommodating heterogeneous missing data patterns for prostate cancer risk prediction. BMC Med Res Methodol 22, 200 (2022).
