Fig. 2
From: To tune or not to tune, a case study of ridge logistic regression in small or sparse datasets

Scatter plots showing values of λ∗ obtained by optimizing different tuning criteria versus the ‘optimal’ λ∗ achieved by the explanation oracle and the prediction oracle over 1000 generated datasets, in scenarios with expected value of Y, E(Y) = 0.1; number of predictors K = 5; noise absent or present; and sample size N ∈ {100, 250, 500, 1000}, considering A) moderate (a = 0.5) and B) strong (a = 1) predictors. Values of λ∗ were optimized using the following tuning criteria: D, deviance; GCV, generalized cross-validation; CE, classification error; RCV50, repeated 10-fold cross-validated deviance with θ = 0.5; RCV95, repeated 10-fold cross-validated deviance with θ = 0.95; AIC, Akaike’s information criterion.
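To illustrate the kind of tuning the caption describes, the sketch below selects the ridge penalty λ for a logistic regression by minimizing 10-fold cross-validated deviance (the "D"-style criterion, without the repetition and θ-quantile aggregation of RCV50/RCV95). The data-generating settings loosely mirror one caption scenario (N = 250, K = 5, a = 0.5, E(Y) ≈ 0.1), but the grid, intercept, and all function names are illustrative assumptions, not the authors' simulation code.

```python
# Illustrative sketch only: choose lambda for ridge logistic regression by
# minimizing 10-fold cross-validated deviance. Settings approximate one
# scenario from the figure caption; nothing here is the authors' actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
N, K = 250, 5                                   # one caption sample size, K = 5 predictors
X = rng.normal(size=(N, K))
beta = np.full(K, 0.5)                          # "moderate" predictor strength a = 0.5
p = 1.0 / (1.0 + np.exp(-(-2.6 + X @ beta)))    # intercept chosen (assumed) so E(Y) is near 0.1
y = rng.binomial(1, p)

def cv_deviance(lam, X, y, folds=10):
    """Total out-of-fold deviance (-2 * log-likelihood) per observation
    for ridge logistic regression with penalty lam (sklearn's C = 1/lam)."""
    dev = 0.0
    for tr, te in StratifiedKFold(folds, shuffle=True, random_state=1).split(X, y):
        model = LogisticRegression(penalty="l2", C=1.0 / lam, max_iter=1000)
        model.fit(X[tr], y[tr])
        ph = np.clip(model.predict_proba(X[te])[:, 1], 1e-10, 1 - 1e-10)
        dev += -2.0 * np.sum(y[te] * np.log(ph) + (1 - y[te]) * np.log(1 - ph))
    return dev / len(y)

grid = np.logspace(-3, 3, 25)                   # assumed search grid for lambda
lam_star = min(grid, key=lambda lam: cv_deviance(lam, X, y))
```

A repeated-CV variant such as RCV95 would rerun the fold split several times per λ and aggregate the resulting deviances at the θ = 0.95 quantile before minimizing, which penalizes λ values that occasionally perform badly.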