A simple pooling method for variable selection in multiply imputed datasets outperformed complex methods
BMC Medical Research Methodology volume 22, Article number: 214 (2022)
Abstract
Background
For the development of prognostic models after multiple imputation, it is advised to apply variable selection from the pooled model. The aim of this study is to evaluate, using a simulation study and a practical data example, the performance of four different pooling methods for variable selection in multiply imputed datasets. These methods are D1, D2, D3 and the recently extended Median-P-Rule (MPR), applied to categorical, dichotomous, and continuous variables in logistic regression models.
Methods
Four datasets (n = 200 and n = 500), each with 9 variables and correlations between these variables of 0.2 and 0.6 respectively, were simulated. These datasets included 2 categorical and 2 continuous variables with 20% missing-at-random data. Multiple imputation (m = 5) was applied, and the four methods were compared with selection from the full model (without missing data). The same analyses were repeated in five multiply imputed real-world datasets (NHANES) (m = 5, p = 0.05, N = 250/300/400/500/1000).
Results
In the simulated datasets, the differences between the pooling methods were most evident in the smaller datasets. The MPR performed equally to all other pooling methods in terms of selection frequency and the P-values of the continuous and dichotomous variables; however, the MPR performed consistently better for pooling and selecting categorical variables in multiply imputed datasets, and also regarding the stability of the selected prognostic models. Analyses in the NHANES dataset showed that all methods mostly selected the same models. Compared with each other, however, the D2 method seemed to be the least sensitive, and the MPR the most sensitive and the simplest method to apply.
Conclusions
Considering that the MPR is the simplest pooling method for epidemiologists and applied researchers to use, we carefully recommend using the MPR method to pool categorical variables with more than two levels after multiple imputation, in combination with backward selection procedures (BWS). Because the MPR never performed worse than the other methods for continuous and dichotomous variables, we also advise using the MPR for these types of variables.
Background
Logistic regression analysis is a widely used method in epidemiological and medical studies for developing prognostic models. Although researchers try to avoid it, missing data occur in many study designs. The most recommended method for processing incomplete data is Multiple Imputation (MI) [1]; this method is nowadays integrated in almost all regular statistical software packages and is therefore within reach of many researchers. MI generates multiple imputed datasets, after which a complete-data analysis can be applied in each imputed dataset. Finally, parameter estimates can be combined using Rubin’s Rules (RR) [2]. If a dataset contains missing values, it is recommended to apply MI before excluding variables in logistic regression with backward selection (BWS) from the pooled model [3,4,5,6]. RR can be used to calculate pooled coefficients and standard errors (SEs). This is easy to apply for continuous and dichotomous variables, where a single Wald statistic is used to calculate a P-value to determine significance [7], but complex for categorical variables with more than two levels. For variable selection with categorical variables, several methods have been developed [1, 7]:

1) Method D1 (multivariate Wald test): this test pools within and between covariance matrices across imputed datasets. After that, the total parameter covariance matrix of the multivariate Wald test is corrected to account for the missing data [1, 8].

2) Method D2 (pooling chi-square statistics): this test uses the chi-square values from the multiple-parameter Wald or likelihood ratio tests across imputed datasets and pools them [9].

3) Method D3 (combining likelihood ratio statistics): this test is based on pooled likelihood ratio statistics and requires fitting multiple models across imputed datasets for each categorical variable in the data; it can therefore be very time-consuming [10, 11].
A disadvantage is that none of these methods is integrated in regular statistical software packages for variable selection when developing prognostic models. Also, these methods do not always obtain the optimal P-values, so selection of the correct variables is not guaranteed [10]. Therefore, researchers sometimes opt for single imputation, often leading to an incorrect selection of variables [12].
Van de Wiel et al. introduced the median of the P-values (MPR) as a method to compare prognostic models [13]. Eekhout et al. compared the MPR with the pooling methods D1, D2 and D3 in multiply imputed datasets in a simulation study. They found that the MPR was an attractive rule for statistical inference on categorical variables with more than two levels, because it had power equal to the D1, D2 and D3 methods but was much easier to apply in any software package [1].
The MPR may therefore be a potentially attractive method for variable selection in models that include categorical variables. Until now, these methods have never been compared for deriving a prognostic model with logistic regression after MI.
Therefore, the aim of this study is to evaluate four different pooling methods for variable selection in multiply imputed datasets for categorical, dichotomous, and continuous variables in logistic regression with a backward selection (BWS) procedure. The selection frequency of the variables, the P-values of the pooled selection results, and the stability of the models in multiply imputed datasets will be compared with the results of the BWS procedure in the complete dataset (without missing data). All analyses will be repeated in a real-world dataset.
Methods
To evaluate the results of the four different pooling methods in multiply imputed datasets after a BWS procedure, we conducted a simulation study and repeated the procedures in a real-world dataset (NHANES).
Simulation datasets

1) To generate simulated datasets, we used as input parameters the mean values and the standard deviations (SD) of the variables of an empirical dataset of low back pain patients [14]. See Table 1. A total of 9 variables were drawn from a multivariate normal distribution using the mean and SD as input parameters.

2) A set of 9 variables, including categorical, dichotomous, and continuous variables (normally distributed) was generated.

3) In the simulated datasets, categorical and dichotomous variables were initially considered to be continuous to determine their level of correlation and were then subsequently categorized (in 4 categories) and dichotomized by using cutoff values from the empirical dataset. For the categorical variables ‘Cat1’ and ‘Cat2’ the cutoff values (in percentages) were: [0, 0.6, 0.8, 0.9, 1] and [0, 0.3, 0.6, 0.8, 1]. For both dichotomous variables the median was used as a cutoff value.

4) The outcome measure was obtained by first calculating the linear predictor score, multiplying the coefficients from the empirical dataset (Table 1) by the predictor values. A logistic regression model was then used to convert these scores into probabilities, and a draw from a uniform distribution was used to convert these into a binary outcome, according to the rule that when the probability is lower than the value drawn from the uniform distribution the outcome is 1, and 0 otherwise.

5) Four different datasets were simulated. Two sets with 200 observations, one with a correlation degree of 0.2 and the other with a degree of 0.6 and similarly two sets with 500 observations.

6) One of the regression coefficients in the model had an effect size of zero to mimic the behavior of a noise variable during variable selection (‘Noise’).
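The data-generating steps above can be sketched as follows. This is a minimal illustration, not the study's actual code: the scale (standardized instead of the empirical means and SDs of Table 1), the coefficients, and the column roles are placeholder assumptions; only the 'Cat1' cutoffs and the outcome rule follow the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_dataset(n, rho, n_vars=9):
    """Sketch of steps 1-6: draw correlated normals, categorize/dichotomize
    some columns, and generate a binary outcome via a logistic model."""
    # steps 1-2: equicorrelated multivariate normal (standardized here;
    # the study used empirical means and SDs as input parameters)
    cov = np.full((n_vars, n_vars), rho)
    np.fill_diagonal(cov, 1.0)
    X = rng.multivariate_normal(np.zeros(n_vars), cov, size=n)

    # step 3: categorize column 0 into 4 levels using the paper's cumulative
    # cutoffs for 'Cat1' [0, 0.6, 0.8, 0.9, 1]; dichotomize column 2 at its median
    cuts = np.quantile(X[:, 0], [0.6, 0.8, 0.9])
    cat1 = np.digitize(X[:, 0], cuts)                  # levels 0..3
    dich1 = (X[:, 2] > np.median(X[:, 2])).astype(int)

    # steps 4 and 6: linear predictor -> probability; the zero coefficient
    # plays the role of the noise variable (all coefficients illustrative)
    beta = np.array([0.5, 0.4, 0.3, 0.2, 0.0, 0.25, 0.15, 0.35, 0.1])
    prob = 1.0 / (1.0 + np.exp(-(X @ beta)))

    # outcome rule as stated above: 1 when probability < uniform draw
    y = (prob < rng.uniform(size=n)).astype(int)
    return X, cat1, dich1, y

# step 5: e.g. one of the four configurations (n = 200, correlation 0.2)
X, cat1, dich1, y = simulate_dataset(200, 0.2)
```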
Generating Missing data
Multiple imputation is indicated under the missing at random (MAR) mechanism, which means that the missingness can be explained by observed data [11]. In each of the four simulated datasets, 20 percent missing data were created in both categorical variables and in two continuous variables (‘Noise’ and ‘Cont4’). To create these missing data, the MAR mechanism was used: the probability of missing data in the two categorical and two continuous variables was related to other variables in the dataset. For this, data were made missing in each variable as a function of another variable (e.g., X1 = X2 * 0.4 and X1 = X2 * 0.167), to create a realistic data situation. We used the commands ‘defMiss’ (for defining the missing data matrix with the formula option) and ‘genMiss’ in the R package “simstudy”. These formula values were chosen such that 20% of the data in each variable was missing, affecting around 50% of the cases in each simulated sample.
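A minimal sketch of such a MAR mechanism (in Python rather than simstudy's ‘defMiss’/‘genMiss’; the intercept and slope are illustrative values tuned toward roughly 20% missingness, not the paper's exact formulas):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
x2 = rng.normal(size=n)      # fully observed variable driving the missingness
x1 = rng.normal(size=n)      # variable that will receive missing values

# MAR: the probability that x1 is missing depends only on the observed x2
logit = -1.6 + 0.4 * x2      # illustrative values, giving roughly 20% missing
p_miss = 1.0 / (1.0 + np.exp(-logit))
missing = rng.uniform(size=n) < p_miss
x1_observed = np.where(missing, np.nan, x1)

miss_rate = missing.mean()   # close to the 20% used in the simulation study
```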
Imputation method
MI was performed in each simulated dataset generating 5 imputed datasets using Multivariate Imputation by Chained Equations (MICE) including the outcome in the imputation model [15, 16].
Pooling methods
Four different pooling methods were used:

1) The pooled sampling variance method (D1), which contains a combination of the pooled parameter estimates and the pooled sampling variances of each imputed dataset to construct a test that resembles a multivariate Wald test [8, 17].

2) Multiple-parameter Wald test (D2), which pools the chi-square values from the multiple-parameter Wald or likelihood ratio tests [9].

3) Meng and Rubin pooling method (D3) which pools likelihood ratio tests [10].

4) The median-P-rule (MPR), which uses the median P-value of the significance tests conducted in each imputed dataset. Hence, it depends on P-values only and not on the parameter estimates [13].
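The median-P-rule is the only one of the four that can be written down in a single line; a sketch with illustrative P-values for one categorical variable across m = 5 imputed datasets:

```python
import numpy as np

def median_p_rule(p_values):
    """MPR: pool a significance test over the m imputed datasets by
    taking the median of the m P-values (no parameter estimates needed)."""
    return float(np.median(p_values))

# illustrative overall-test P-values for one categorical variable, m = 5
p_per_imputation = [0.012, 0.030, 0.021, 0.045, 0.018]
pooled_p = median_p_rule(p_per_imputation)   # 0.021
keep_variable = pooled_p < 0.05              # selected at alpha = 0.05
```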
Statistics and analyses
Logistic regression analyses were performed in all original complete simulated datasets and in all multiply imputed datasets. The coefficients, SEs, P-values, and all developed prognostic models were compared. The BWS procedures were performed using different Pout selection values (between 1.0 and 0.05) to develop parsimonious prediction models containing the strongest prognostic variables, as well as larger prediction models containing strong and less strong prognostic variables.
The entire procedure was repeated 500 times and the results were compared with those of the BWS procedure in the complete dataset (no missing data). All statistical procedures were performed in R.
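A BWS procedure with pooled P-values can be sketched generically: at each step, pool each remaining variable's per-imputation P-values (here with the median-P-rule) and drop the weakest variable until every pooled P-value falls below Pout. The `p_value_fn` callback and the toy P-values below are hypothetical stand-ins for refitting the model in each imputed dataset:

```python
import numpy as np

def backward_select(p_value_fn, variables, p_out=0.05):
    """Generic BWS loop. p_value_fn(current_vars) must return, per variable,
    the m per-imputation P-values; pooling uses the median-P-rule."""
    current = list(variables)
    while current:
        pooled = {v: float(np.median(ps))
                  for v, ps in p_value_fn(current).items()}
        worst = max(pooled, key=pooled.get)
        if pooled[worst] <= p_out:
            break                      # all remaining variables significant
        current.remove(worst)          # drop the weakest variable, then "refit"
    return current

# toy callback: fixed per-imputation P-values per variable (illustrative)
fake_p = {"Cont1": [0.01, 0.02, 0.01],
          "Noise": [0.60, 0.70, 0.50],
          "Cat1":  [0.03, 0.04, 0.02]}
model = backward_select(lambda vs: {v: fake_p[v] for v in vs}, list(fake_p))
# model -> ['Cont1', 'Cat1']: the noise variable is eliminated first
```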
Comparing pooling methods
When comparing the pooling methods for variable selection, the focus was on three points:
The selection frequency of the variables
The selection frequency of the variables for each method was compared with that in the complete datasets, i.e., without missing values, which served as the reference model. The frequency was obtained by counting how many times a variable was selected in a model and dividing by the total number of simulated models within that run. We then evaluated which pooling method showed selection frequencies most similar to those in the complete dataset.
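As a sketch of this bookkeeping, with a hypothetical 0/1 selection matrix over 500 simulated runs (the variable names and true selection rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
variables = ["Cont1", "Cat1", "Noise"]
n_runs = 500

# hypothetical selection indicators: selected[i, j] = 1 if variable j
# survived BWS in run i (underlying selection rates 0.9, 0.7, 0.05 here)
selected = (rng.uniform(size=(n_runs, len(variables))) <
            np.array([0.9, 0.7, 0.05])).astype(int)

# selection frequency = times selected / number of simulated models
freq = dict(zip(variables, selected.mean(axis=0)))
```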
The Pvalues of the selected variables
To compare the P-values, all P-values were first natural-log transformed, to make even the smallest differences in P-values graphically visible. The median of these P-values for the four pooling methods was compared with the median of the P-values in the complete dataset.
The stability of the selected prognostic models
Model stability was evaluated by computing model selection frequencies, which quantify how many times a particular set of prognostic variables was selected [4, 18]. The first 10 unique prognostic models were evaluated for all pooling methods and for the models selected in the complete dataset. For example: in one simulation run, 500 initial models were fitted by applying BWS, and from these 500 models, 10 unique models in the complete dataset were selected. With method D1, 375 models were identical to those 10 unique models in the complete dataset, which counted as 375/500 = 75 percent of the same unique models. With method D2 we saw 350 such models (70 percent), with D3 340 (68 percent) and with the MPR 450 (90 percent). In this example, the MPR was the most stable pooling method. This way of analyzing the stability of the prognostic models was executed for all BWS procedures under all conditions.
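This stability bookkeeping amounts to counting how often each unique set of selected variables occurs across runs; a sketch with hypothetical variable names, whose counts echo the 75-percent figure from the worked example above:

```python
from collections import Counter

# one set of selected variables per simulation run; counts chosen so that
# 375 + 100 + 25 = 500 runs, mirroring the worked example (names illustrative)
runs = ([frozenset({"Cont1", "Cat1"})] * 375 +
        [frozenset({"Cont1"})] * 100 +
        [frozenset({"Cont1", "Cat1", "Noise"})] * 25)

counts = Counter(runs)
# model selection frequency: share of runs yielding each unique model
stability = {tuple(sorted(m)): n / len(runs) for m, n in counts.items()}
top_model, top_count = counts.most_common(1)[0]   # 375/500 = 75 percent
```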
Analyses in the NHANESdataset
The NHANES dataset was used as a real-world dataset to evaluate the performance of the same four pooling methods (D1, D2, D3 and MPR). Twelve variables were analyzed: 6 continuous, 2 dichotomous and 4 categorical variables, with a dichotomous variable as outcome measure. MI was performed, generating 5 imputed datasets (m = 5), using Multivariate Imputation by Chained Equations (MICE) and including the outcome in the imputation model [15, 16]. A BWS procedure was conducted with Pout < 0.05 for all pooling methods. The selected variables and developed models were compared.
Software
To generate complete simulation data we used the R package “simstudy”, that was also used to generate the missing data (functions ‘defMiss’ and ‘genMiss’). Backward selection in the complete data was conducted with the package “rms” (function fastbw), imputations were done using the “mice” package and pooling with the “psfmi” package.
Results
The selection frequency of the variables
Figure 1 shows that, in general, the MPR method performed better than the other selection methods as the Pout values became stricter. With a Pout ≤ 0.3, the MPR selected the same variables as in the complete dataset in 41.6 percent of the simulated samples; the D1, D2 and D3 methods did so in only 7.1, 3.5 and 7.1 percent of the cases, respectively. Using a stricter Pout (0.1), the MPR method selected the same variables as in the complete dataset in 46.8 percent of the cases, and with a Pout of 0.05 in 53.7 percent of the cases. Using less strict Pout values (1.0 and 0.5), all pooling methods performed similarly, in 60.5 percent and 51 percent of the cases respectively.
Table 2 shows, for all pooling methods, the selection frequencies (percentages) of variables after BWS in multiply imputed datasets compared with the complete dataset. The categorical variables selected with the MPR method were closest to those in the complete dataset. For the continuous and dichotomous variables, none of the four pooling methods showed a better selection frequency compared with the complete dataset. Therefore, the MPR was the best pooling method for selecting prognostic variables. The noise variable had a very low selection frequency for all methods: it was selected in less than 12 percent of the cases using a Pout ≤ 0.1 and in less than 6 percent in models with a Pout ≤ 0.05.
The Pvalues of the selected variables
To compare the P-values of the selected variables with those in the complete datasets, the median of the log-transformed P-values was used. Table 3 shows that, for the continuous and dichotomous variables, the pooling methods showed inconsistent results: sometimes the P-values were closer to those obtained in the complete dataset (e.g., the variable ‘Dich1’ in the dataset with N = 200, correlation 0.2 and Pout ≤ 0.1) and sometimes not (e.g., ‘Dich1’ in the dataset with N = 200, correlation 0.6 and Pout ≤ 0.05). Overall, the P-values for categorical variables obtained with the MPR method were consistently closer to those obtained in the complete dataset, regardless of the sample size, the degree of correlation or the chosen Pout.
Figure 2 shows the percentages of agreement between the P-values of the variables selected by the different pooling methods and those in the complete dataset. The MPR method agreed most with the complete dataset for categorical variables, with scores of 100 percent agreement. Evaluating the different levels of Pout, it was clear that for all Pouts the differences between the pooling methods for categorical variables were in favor of the MPR method. For the dichotomous variable, the MPR never performed worse than the other pooling methods. For the continuous variables, the D1 method also performed reasonably well, and performed better than the MPR at stricter Pout values.
The stability of the selected prognostic models
Table 4 shows how the models selected after applying the different pooling methods related to the models selected in the complete dataset. The MPR method provided models that were more similar to the models in the complete dataset than the other pooling methods. Especially in smaller datasets (n = 200), the MPR method more frequently selected the same prognostic models. The analyses with a Pout ≤ 1.0 are not shown because these are of no added value for the development of prediction models. See Table 4 and Fig. 3.
Selected variables in the prognostic models in the NHANESdataset
Table 5 shows that in all imputed NHANES datasets, the four pooling methods showed a strong similarity in the variables selected. Mostly the same variables were selected, with comparable P-values. This was in agreement with the findings of the simulation study.
Discussion
Main findings
When defining a prognostic model after MI with different types of variables, including categorical variables, it is important to use an overall test to conclude whether a categorical variable is relevant for the model. In the present study, multiply imputed simulated datasets were used, and four selection methods (D1, D2, D3 and MPR) for categorical, dichotomous, and continuous variables in a logistic regression model with a BWS procedure were evaluated. The selection frequency of the variables, the P-values of the selected variables and the stability of the selected models were compared with the results in the complete dataset. The performance of the MPR was tested under many different conditions and variations, and it proved to be an easy-to-apply method that was consistently better than the other pooling methods for categorical variables in an MI context, in terms of selection frequency as well as P-values and the stability of the models. For continuous and dichotomous variables, no consistent differences were found between the four pooling methods.
Comparison with the literature
Eekhout et al. concluded that, to obtain correct and powerful pooled P-values for significance testing of categorical variables with the MPR, compared with the D1, D2 and D3 methods, the outcome should be omitted from the imputation model [1]. To obtain a powerful significance test for continuous and dichotomous variables with RR after MI, the MI procedure has to include the outcome variable, as was indicated by Moons et al. [19]. We repeated our simulation study in datasets with sample sizes of n = 500 and n = 2000 and an extra categorical variable with five categories, under two different conditions: one included the outcome variable in the imputation model, and one excluded it. We observed no differences in the median P-values of the selected predictor variables or in the stability of the selected models. Only the selection frequency of the predictor variables was slightly higher when the outcome was included in the imputation model, but this held identically for all pooling methods. The larger the datasets, the smaller the differences between the four pooling methods. We therefore conclude that for overall significance testing of categorical variables the outcome variable can be included in the imputation model.
Heinze et al. and Wallisch et al. stated that variable selection can compromise the stability of a final model, an often-ignored problem of data-driven variable selection [4, 18]. Royston and Sauerbrei also stated that model stability has to be demonstrated, because many different factors influence the stability of the selected models [20, 21]. In our simulation study, we examined the stability of the selected models in multiply imputed datasets by repeating each procedure 500 times. An interesting result is that the MPR pooling method resulted in more stable variable selection than the other pooling methods. This result was also reflected in the analyses in the NHANES real-world dataset. Austin et al. and Wood et al. stated that variable selection in multiply imputed datasets has to be done from the pooled model using RR, which is easily done for continuous and dichotomous variables but less straightforward for categorical variables [22, 23]. We distinguished the selection of all types of variables and showed that the MPR method performed equally well as RR for the continuous and dichotomous variables, and better than the D1, D2 and D3 methods for the categorical variables. The usability of the pooling methods depends on their availability in statistical software. Most software packages do not provide these methods in combination with variable selection, so they are out of reach for applied researchers. The strength of the MPR rule is that it can easily be applied in any software package and is not time-consuming.
Strengths and limitations
Our aim was to compare four different selection methods. A strength is that we applied two different ways of pooling and selecting variables: 1) Rubin’s Rules (RR) were applied to pool the continuous and dichotomous variables, and the pooling methods D1, D2, D3 and MPR the categorical variables; 2) all variables were pooled with the D1, D2, D3 and MPR methods.
No differences were found between those two ways of pooling and selecting variables, i.e., the MPR outperformed all other methods. Another strength is that we used various Pout values to evaluate the behavior of the pooling methods when the selected models contained variables with a strong or less strong relationship with the outcome, as found in normal practice. We found that in most scenarios the MPR method resulted in the most stable models.
A further strength is that, in addition to the studies by White and Austin et al., we evaluated many different simulated conditions based on empirical data: the selection frequency of the variables, the P-values of the selected variables and the stability of the selected models [22, 23]. We also added a noise variable to evaluate whether all the methods coped well with it. In most of these conditions, the MPR method was no worse than the other methods. A limitation could be that the simulation study used a smaller number of covariates than is typically used in practical datasets. However, the NHANES dataset contained a mixture of weaker and stronger variables, as in real-world datasets, and the results in the NHANES dataset confirmed what we saw in the simulation study.
Another limitation could be that we used only two different correlation levels in our simulation sets (0.2 and 0.6). However, to set up our simulation study we initially used the paper of Wood, White and Royston [23] about variable selection methods in multiply imputed datasets, which came closest to the aim of our study. They reported a correlation of 0.62 and defined that as a high correlation value. We therefore used a high correlation of 0.6 in our study, and compared it with a lower correlation of 0.2. We think that, by using these correlation values, we were able to test the methods in datasets commonly seen in medical studies, containing variables with comparably lower and higher correlations. Another limitation may be that we used a fast backward selection procedure to select variables in the complete datasets [24]. It is known that this may not be the most efficient selection method [24, 25]. An alternative may be to use more advanced methods such as the least absolute shrinkage and selection operator (LASSO) [25]. However, the LASSO was developed for situations where the number of predictors is much higher than the number of persons, which is not the case in many medical and epidemiological datasets. Another problem with LASSO estimation is its dependence on the scale of the covariates. A solution is to apply internal standardization in LASSO software to unit variance before variable selection, after which the regression coefficients are back-transformed to the original scale. It is, however, not yet clear whether a “one size fits all” standardization of variables is the best choice for all modeling purposes. Therefore, using the fast backward selection procedure was the best option to compare the pooled selection methods with a similar selection procedure in the complete datasets [4].
Another limitation could be that we considered all continuous variables to be normally distributed, while in practice there are also nonlinear relationships; further research will therefore be necessary on the selection of these types of variables in multiply imputed datasets.
Conclusion
Evaluating four pooling methods (D1, D2, D3 and MPR) for variable selection in multiply imputed datasets for categorical, dichotomous, and continuous variables in logistic regression analyses with a BWS procedure, the MPR pooling method performed consistently better than the other methods for selecting categorical variables in smaller datasets (N ≤ 500 participants). With the MPR method, the variable selection frequencies, their P-values, and the selection frequencies and stability of the prognostic models were most similar to the analyses in the complete datasets. For continuous and dichotomous variables, none of the four pooling methods actually performed better than the others. In large datasets there were almost no differences between the four pooling methods. These results were confirmed by the analyses in a real-world dataset (NHANES). Considering that the MPR is the simplest pooling method for epidemiologists and applied researchers to use, we carefully recommend using the MPR method to pool categorical variables with more than two levels after MI, in combination with BWS procedures. Because the MPR never performed worse than the other methods for continuous and dichotomous variables, we also advise using the MPR for these types of variables.
Availability of data and materials
The data that support the findings of this study are available from the web servers of the Amsterdam University Medical Center, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from M.W. Heymans, one of the authors, upon reasonable request and with permission of the Amsterdam University Medical center.
Abbreviations
MI: Multiple Imputation

RR: Rubin’s Rules

BWS: Backward Selection

SE: Standard Error

MPR: Median of the P-values Rule

SD: Standard Deviation

MICE: Multivariate Imputation by Chained Equations
References
Eekhout I, van de Wiel M, Heymans MW. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis. BMC Med Res Methodol. 2017;17:129.
Rubin DB. Inference and missing data. Biometrika. 1976;63:581–92.
Moons KGM, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration. Ann Intern Med. 2015;162(1):W1–73.
Heinze G, Wallisch C, Dunkler D. Variable selection – A review and recommendations for the practicing statistician. Biom J. 2018;60(3):431–49.
Mantel N. Why stepdown procedures in variable selection. Technometrics. 1970;12:621–5.
Steyerberg EW. Clinical Prediction Models: A Practical Approach to Development, Validation, and Updating. Springer Science and Business Media LLC; 2019.
Marshall A, Altman DG, Holder RL, et al. Combining estimates in prognostic modelling studies after multiple imputation: current practice and guidelines. BMC Med Res Methodol. 2009;9:57.
Enders CK. Applied missing data analysis. New York: The Guilford Press; 2010.
Li KH, Meng XL, Raghunathan T, et al. Significance levels from repeated p-values with multiply-imputed data. Stat Sin. 1991;1:65–92.
Meng XL, Rubin DB. Performing likelihood ratio tests with multiply-imputed data sets. Biometrika. 1992;79:103–11.
van Buuren S. Flexible imputation of missing data. Second edition. Oxfordshire: CRC/Chapman & Hall; 2018.
Eekhout I, de Vet HCW, Twisk JWR, et al. Missing data in a multi-item instrument were best handled by multiple imputation at the item score level. J Clin Epidemiol. 2014;67:335–42.
van de Wiel MA, Berkhof J, van Wieringen WN. Testing the prediction error difference between 2 predictors. Biostatistics. 2009;10:550–60.
Heymans MW, de Vet HC, Bongers PM, et al. The effectiveness of high-intensity versus low-intensity back schools in an occupational setting: a pragmatic randomized controlled trial. Spine. 2006;31(10):1075–82.
van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011;45(3):1–67.
White IR, Royston P, Wood AM. Multiple imputation using chained equations: issues and guidance for practice. Stat Med. 2011;30:377–99.
Enders CK. Analyzing longitudinal data with missing values. Rehabil Psychol. 2011;56:267–88.
Wallisch C, Dunkler D, Rauch G, et al. Selection of variables for multivariable models: opportunities and limitations in quantifying model stability by resampling. Stat Med. 2021;40(2):369–81.
Moons KG, Donders RA, Stijnen T, et al. Using the outcome for imputation of missing predictor values was preferred. J Clin Epidemiol. 2006;59:1092–101.
Royston P, Sauerbrei W. Multivariable Model-Building: A Pragmatic Approach to Regression Analysis Based on Fractional Polynomials for Modelling Continuous Variables. Chapter 8: Model stability. John Wiley & Sons; 2008.
Sauerbrei W, Perperoglou A, Schmid M, et al. State of the art in selection of variables and functional forms in multivariable analysis – outstanding issues. Diagnostic and Prognostic Research. 2020;4:3.
Austin PC, Lee DS, Ko DT, et al. Effect of variable selection strategy on the performance of prognostic models when using multiple imputation. Circ Cardiovasc Qual Outcomes. 2019;12: e005927.
Wood AM, White IR, Royston P. How should variable selection be performed with multiply imputed data. Statist Med. 2008;27:3227–46.
Harrell FE Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. Cham: Springer International Publishing AG; 2015.
Thao LTP, Geskus R. A comparison of model selection methods for prediction in the presence of multiply imputed data. Biom J. 2019;61(2):343–56.
Acknowledgements
The authors thank Mark v.d. Wiel for critically reading the analysis section and giving expert advice.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
M.W. Heymans developed the study concept and design. A.M. Panken and M.W. Heymans performed the data analysis, interpretation of results and wrote and drafted the manuscript. Both authors approved the final version of the manuscript for submission. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Panken, A.M., Heymans, M.W. A simple pooling method for variable selection in multiply imputed datasets outperformed complex methods. BMC Med Res Methodol 22, 214 (2022). https://doi.org/10.1186/s12874-022-01693-8