Utilizing machine learning dimensionality reduction for risk stratification of chest pain patients in the emergency department

Background Chest pain is among the most common presenting complaints in the emergency department (ED). Swift and accurate risk stratification of chest pain patients in the ED may improve patient outcomes and reduce unnecessary costs. Traditional logistic regression with stepwise variable selection has been used to build risk prediction models for ED chest pain patients. In this study, we aimed to investigate whether machine learning dimensionality reduction methods can improve performance in deriving risk stratification models. Methods A retrospective analysis was conducted on the data of patients > 20 years old who presented to the ED of Singapore General Hospital with chest pain between September 2010 and July 2015. Variables used included demographics, medical history, laboratory findings, heart rate variability (HRV), and heart rate n-variability (HRnV) parameters calculated from five- to six-minute electrocardiograms (ECGs). The primary outcome was 30-day major adverse cardiac events (MACE), which included death, acute myocardial infarction, and revascularization within 30 days of ED presentation. We used eight machine learning dimensionality reduction methods and logistic regression to create different prediction models. We further excluded cardiac troponin from the candidate variables and derived a separate set of models to evaluate the performance of models without using laboratory tests. Receiver operating characteristic (ROC) and calibration analyses were used to compare model performance. Results Seven hundred ninety-five patients were included in the analysis, of whom 247 (31%) met the primary outcome of 30-day MACE. Patients with MACE were older and more likely to be male. All eight dimensionality reduction methods achieved performance comparable with traditional stepwise variable selection; the multidimensional scaling algorithm performed best, with an area under the curve of 0.901.
All prediction models generated in this study outperformed several existing clinical scores in ROC analysis. Conclusions Dimensionality reduction models showed marginal value in improving the prediction of 30-day MACE for ED chest pain patients. Moreover, they are black box models, making them difficult to explain and interpret in clinical practice.


Background
Chest pain is among the most common chief complaints presenting to the emergency department (ED) [1][2][3]. The assessment of chest pain patients poses a diagnostic challenge in balancing risk and cost. Inadvertent discharge of acute coronary syndrome (ACS) patients is associated with higher mortality rates, while inappropriate admission of patients with more benign conditions increases health service costs [4,5]. Hence, the challenge lies in recognizing low-risk chest pain patients for safe and early discharge from the ED. There has been increasing focus on the development of risk stratification scores. Initially, risk scores such as the Thrombolysis in Myocardial Infarction (TIMI) score [6,7] and the Global Registry of Acute Coronary Events (GRACE) score [8] were developed from post-ACS patients to estimate short-term mortality and recurrence of myocardial infarction. The History, Electrocardiogram (ECG), Age, Risk factors, and initial Troponin (HEART) score was subsequently designed for ED chest pain patients [9] and has demonstrated superior performance in many comparative studies on the identification of low-risk chest pain patients [10][11][12][13][14][15][16][17]. Nonetheless, the HEART score has its disadvantages. Many factors can affect its diagnostic and prognostic accuracy, such as variation in patient populations, provider determination of low-risk HEART score criteria, and the specific troponin reagent used, all of which contribute to clinical heterogeneity [18][19][20][21]. In addition, most risk scores still require variables, such as troponin, that may not be available during the patient's initial presentation to the ED. There remains a need for a more efficient risk stratification tool.
We had previously developed a heart rate variability (HRV) prediction model using variables readily available at the ED, in an attempt to reduce both diagnostic time and subjective components [22]. HRV characterizes beat-to-beat variation using time domain, frequency domain, and nonlinear analyses [23] and has proven to be a good predictor of major adverse cardiac events (MACE) [22,24,25]. Most HRV-based scores were reported to be superior to the TIMI and GRACE scores while achieving performance comparable with the HEART score [17,24,26,27]. Recently, we established a new representation of beat-to-beat variation in ECGs, the heart rate n-variability (HRnV) [28]. HRnV utilizes variation in sampling RR-intervals and overlapping RR-intervals to derive additional parameters from a single strip of ECG reading. As an extension of HRV, HRnV potentially supplements additional information about adverse cardiac events while reducing unwanted noise caused by abnormal heartbeats. Moreover, HRV is a special case of HRnV when n = 1. The HRnV prediction model, developed from multivariable stepwise logistic regression, outperformed the HEART, TIMI, and GRACE scores in predicting 30-day MACE [28]. Nevertheless, multicollinearity is a common problem in logistic regression models, arising when supposedly independent predictor variables are correlated. Correlated predictors tend to inflate the variance of regression parameter estimates and hinder the determination of the exact effect of each parameter, which could result in inaccurate identification of significant predictors [29,30]. In that paper, 115 HRnV parameters were derived but only seven variables remained in the final prediction model, implying the possible elimination of relevant information [28].
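The time-domain HRV quantities referenced here have standard definitions (see Table 1). The following sketch (Python with NumPy; a hypothetical helper, not the authors' HRnV-Calc implementation) computes a few of them from a sequence of RR intervals. `rr_n_sequences` is one hedged reading of how HRnV forms new intervals by summing n consecutive RR intervals with stride m (m < n yields overlapping sequences, m = n the non-overlapping case); consult [28] for the exact definition.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV parameters from RR intervals in milliseconds.

    Illustrative only; follows the textbook formulas (mean NN, SDNN, RMSSD,
    NN50, pNN50), not necessarily the authors' HRnV-Calc implementation.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                      # successive RR differences
    nn50 = int(np.sum(np.abs(diff) > 50))   # pairs differing by more than 50 ms
    return {
        "mean_nn": float(rr.mean()),
        "sdnn": float(rr.std(ddof=1)),
        "rmssd": float(np.sqrt(np.mean(diff ** 2))),
        "nn50": nn50,
        "pnn50": nn50 / len(diff),
    }

def rr_n_sequences(rr_ms, n, m):
    """Hedged sketch of RRnI/RRnIm construction: sum n consecutive RR
    intervals, advancing by stride m (overlapping when m < n)."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.array([rr[i:i + n].sum() for i in range(0, len(rr) - n + 1, m)])

params = hrv_time_domain([800, 810, 790, 860, 795, 805])
overlapped = rr_n_sequences([800, 810, 790, 860], n=2, m=1)
```

The same time-domain formulas would then be applied to the new RRnIm sequences to obtain the HRnV analogues of each parameter.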
Within the general medical literature, machine learning dimensionality reduction methods are uncommon and limited to a few specific areas, such as bioinformatics studies on genetics [31,32] and diagnostic radiological imaging [33,34]. Despite this, dimensionality reduction in HRV has been investigated and shown to effectively compress multidimensional HRV data for the assessment of cardiac autonomic neuropathy [35]. In this paper, we attempted to investigate several machine learning dimensionality reduction algorithms in building predictive models, hypothesizing that these algorithms could be useful in preserving useful information while improving prediction performance. We aimed to compare the performance of the dimensionality reduction models against the traditional stepwise logistic regression model [28] and conventional risk stratification tools such as the HEART, TIMI, and GRACE scores, in the prediction of 30-day MACE in chest pain patients presenting to the ED.

Study design and clinical setting
A retrospective analysis was conducted on data collected from patients > 20 years old who presented to the Singapore General Hospital ED with chest pain between September 2010 and July 2015. These patients were triaged using the Patient Acuity Category Scale (PACS), and those with PACS 1 or 2 were included in the study. Patients were excluded if they were lost to the 30-day follow-up or if they presented with ST-elevation myocardial infarction (STEMI) or chest pain of non-cardiac etiology, such as pneumothorax, pneumonia, and trauma, as diagnosed by the ED physician. Patients with ECG findings that precluded quality HRnV analysis, such as artifacts, ectopic beats, and paced or non-sinus rhythm, were also excluded.

Data collection
For each patient, HRV and HRnV parameters were calculated using the HRnV-Calc software suite [28,36] from a five- to six-minute single-lead (lead II) ECG recorded via the X-Series monitor (ZOLL Medical Corporation, Chelmsford, MA). Table 1 shows the full list of HRV and HRnV parameters used in this study. In addition, the first 12-lead ECG taken during each patient's presentation to the ED was interpreted by two independent clinical reviewers, and any pathological ST changes, T wave inversions, and Q waves were noted. Patients' demographics, medical history, first set of vital signs, and troponin-T values were obtained from the hospital's electronic health records (EHR). In this study, high-sensitivity troponin-T was selected as the cardiac biomarker, and an abnormal value was defined as > 0.03 ng/mL.
The primary outcome measured was any MACE within 30 days, including acute myocardial infarction, emergent revascularization procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG), or death. The primary outcome was captured through a retrospective review of patients' EHR.

Machine learning dimensionality reduction
Dimensionality reduction in machine learning and data mining [37] refers to the process of transforming high-dimensional data into lower dimensions such that fewer features are selected or extracted while preserving the essential information of the original data. Two types of dimensionality reduction approaches are available, namely variable selection and feature extraction. Variable selection methods generally reduce data dimensionality by choosing a subset of variables, while feature extraction methods transform the original feature space into a lower-dimensional space through linear or nonlinear feature projection. In clinical predictive modeling, variable selection techniques such as stepwise logistic regression are popular for constructing prediction models [38]. In contrast, feature extraction approaches [39] are less commonly used in medical research, although they have been widely used in computational biology [40], image analysis [41,42], and physiological signal analysis [43], among others.

Table 1 List of traditional heart rate variability (HRV) and novel heart rate n-variability (HRnV) parameters used in this study. HRnV is a new representation of beat-to-beat variation in ECGs, and parameter "n" controls the formation of new RR-intervals that are used for parameter calculation. Details of the HRnV definition can be found in [28].

Mean NN: average of R-R intervals; SDNN: standard deviation of R-R intervals; RMSSD: square root of the mean squared differences between R-R intervals; NN50: the number of times that the absolute difference between 2 successive R-R intervals exceeds 50 ms; pNN50: NN50 divided by the total number of R-R intervals; NN50n: the number of times that the absolute difference between 2 successive RRₙI/RRₙIₘ sequences exceeds 50 × n ms; pNN50n: NN50n divided by the total number of RRₙI/RRₙIₘ sequences; VLF: very low frequency; LF: low frequency; HF: high frequency; SD: standard deviation; SampEn: sample entropy; ApEn: approximate entropy; DFA: detrended fluctuation analysis. a In frequency domain analysis, the power of spectral components is the area below the relevant frequencies, presented in absolute units (square milliseconds).

In this study, we investigated the implementation of eight feature extraction algorithms and evaluated their contributions to prediction performance in risk stratification of ED chest pain patients. We also compared them with a prediction model that was built using conventional stepwise variable selection [28]. Henceforth, we use the terms "dimensionality reduction" and "feature extraction" interchangeably. Suppose there were n samples (xᵢ, yᵢ), i = 1, 2, …, n, in the dataset (X, y), where each sample xᵢ had D original features and its label yᵢ = 1 or 0, with 1 indicating a positive primary outcome, i.e., MACE within 30 days. We applied dimensionality reduction algorithms to project xᵢ into a d-dimensional space (d < D). As a result, the original dataset X ∈ ℝ^(n×D) became X̂ ∈ ℝ^(n×d). There was a total of D = 174 candidate variables in this study. As suggested in Liu et al. [28], some variables were less statistically significant in terms of contributions to the prediction performance. Thus, we conducted univariable analysis and preselected a subset of D̂ variables whose p < P̂. In this study, we determined P̂ by running principal component analysis (PCA) [44] and logistic regression through 5-fold cross-validation; we plotted a curve to visualize the choice of the threshold and its impact on predictive performance. PCA was used for demonstration because of its simplicity and fast running speed. Other than PCA, we also implemented seven dimensionality reduction algorithms: kernel PCA (KPCA) [45] with a polynomial kernel function, latent semantic analysis (LSA) [46], Gaussian random projection (GRP) [47], sparse random projection (SRP) [48], multidimensional scaling (MDS) [49], Isomap [50], and locally linear embedding (LLE) [51]. All these algorithms are unsupervised learning methods, meaning the transformation of the feature space does not rely on the sample labels y. Among the eight methods, MDS, Isomap, and LLE are manifold learning-based techniques for nonlinear dimensionality reduction. Table 2 gives a brief introduction to these eight methods.
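All eight feature extraction algorithms are available in common Python libraries. The paper does not state which implementation it used, so the following is only a sketch assuming scikit-learn; the synthetic data and dimensions are illustrative, not the study cohort.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD
from sklearn.manifold import MDS, Isomap, LocallyLinearEmbedding
from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))  # n = 100 samples, 30 preselected variables
d = 10                          # target dimensionality, d < number of variables

# The eight unsupervised dimensionality reduction methods of the study
reducers = {
    "PCA": PCA(n_components=d),
    "KPCA": KernelPCA(n_components=d, kernel="poly"),
    "LSA": TruncatedSVD(n_components=d),          # LSA = truncated SVD without centering
    "GRP": GaussianRandomProjection(n_components=d, random_state=0),
    "SRP": SparseRandomProjection(n_components=d, random_state=0),
    "MDS": MDS(n_components=d, random_state=0),
    "Isomap": Isomap(n_components=d),
    "LLE": LocallyLinearEmbedding(n_components=d, random_state=0),
}

# Each method projects X into a d-dimensional space
embeddings = {name: reducer.fit_transform(X) for name, reducer in reducers.items()}
```

One practical caveat: scikit-learn's MDS has no transform method for unseen samples (manifold methods generally embed only the fitted data), so using these methods inside cross-validation requires refitting per fold or an out-of-sample extension.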

Predictive and statistical analysis
In this study, we chose logistic regression as the classification algorithm to predict the MACE outcome. As described earlier, we determined the threshold P̂ to preselect a subset of D̂ variables, ensuring the removal of less significant variables as indicated by univariable analysis, after which X ∈ ℝ^(n×D) was reduced to the preselected data matrix in ℝ^(n×D̂). In summary, the inputs to all dimensionality reduction algorithms were in D̂-dimensional space. Subsequently, conventional logistic regression was implemented to take the d-dimensional output X̂ to predict y, where 5-fold cross-validation was used.
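The preselect-reduce-classify pipeline can be sketched end to end on synthetic data. This assumes scikit-learn and SciPy, uses PCA as the example reducer, and mirrors only the variable counts of the study (the data below are random, not the cohort).

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n, D = 795, 174                          # cohort and candidate-variable counts
X = rng.normal(size=(n, D))              # synthetic stand-in for the real data
y = (rng.random(n) < 0.31).astype(int)   # ~31% event rate, synthetic labels
X[y == 1, :5] += 1.0                     # make a few variables informative

# Univariable preselection: keep variables with two-sample t-test p < P_hat
P_hat = 0.02
pvals = np.array([stats.ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
                  for j in range(D)])
X_sel = X[:, pvals < P_hat]

# Dimensionality reduction + logistic regression, scored by 5-fold CV AUC
model = make_pipeline(StandardScaler(),
                      PCA(n_components=min(10, X_sel.shape[1])),
                      LogisticRegression(max_iter=1000))
cv_auc = cross_val_score(model, X_sel, y, cv=5, scoring="roc_auc").mean()
```

Sweeping `P_hat` over a grid and plotting `cv_auc` against it reproduces the kind of threshold-selection curve described here.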
We compared the models built with machine learning dimensionality reduction against our previous stepwise model [28], in which the following 16 variables were used: age, diastolic blood pressure, pain score, ST-elevation, ST-depression, Q wave, cardiac history (the "History" component in the HEART score), troponin, HRV NN50, HR₂V skewness, HR₂V SampEn, HR₂V ApEn, HR₂V₁ ApEn, HR₃V RMSSD, HR₃V skewness, and HR₃V₂ HF power. As described in [28], we selected candidate variables with p < 0.2 in univariable analysis and subsequently conducted multivariable analysis using backward stepwise logistic regression. In the current study, we further built eight dimensionality reduction models without using cardiac troponin and compared them with the stepwise model without the troponin component. This analysis enabled us to check the feasibility of avoiding the use of laboratory results for quick risk stratification.

Table 2 Summary of machine learning dimensionality reduction methods used in this study

Principal component analysis (PCA) [44]: PCA decomposes data into a set of successive orthogonal components that explain a maximum amount of the variance
Kernel PCA (KPCA) [45]: KPCA extends PCA by using kernel functions to achieve nonlinear dimensionality reduction
Latent semantic analysis (LSA) [46]: LSA is similar to PCA but differs in that the data matrix does not need to be centered
Gaussian random projection (GRP) [47]: GRP projects the original input features onto a randomly generated matrix whose components are drawn from a Gaussian distribution
Sparse random projection (SRP) [48]: SRP projects the original input features onto a sparse random matrix, an alternative to the dense Gaussian random projection matrix
Multidimensional scaling (MDS) [49]: MDS is a technique for analyzing similarity or dissimilarity data, seeking a low-dimensional representation of the data in which the distances respect well the distances in the original high-dimensional space
Isomap [50]: Isomap is a manifold learning algorithm, seeking a lower-dimensional embedding that maintains geodesic distances between all points
Locally linear embedding (LLE) [51]: LLE projects the original input features to a lower-dimensional space by preserving distances within local neighborhoods
In evaluating modeling performance, we performed receiver operating characteristic (ROC) curve analysis and reported the corresponding area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) measures. Moreover, we generated calibration plots for the prediction models. In describing the data, we reported continuous variables as the median and interquartile range (IQR) and assessed statistical significance using the two-sample t-test. We reported categorical variables as frequency and percentage and assessed statistical significance using the chi-square test. All analyses were conducted in Python version 3.8.0 (Python Software Foundation, Delaware, USA).
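These metrics can be computed from any vector of predicted probabilities. The sketch below (scikit-learn, synthetic data) derives AUC, then sensitivity, specificity, PPV, and NPV at a Youden-optimal threshold (one common choice; the paper does not state which operating point it used), plus binned calibration estimates.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic outcomes and predicted probabilities for illustration
rng = np.random.default_rng(1)
y_true = (rng.random(500) < 0.31).astype(int)
y_prob = np.clip(0.31 + 0.3 * (y_true - 0.31) + rng.normal(0, 0.15, 500),
                 0.01, 0.99)

auc = roc_auc_score(y_true, y_prob)
fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# Operating point maximizing Youden's J = sensitivity + specificity - 1
j = int(np.argmax(tpr - fpr))
sensitivity, specificity = tpr[j], 1 - fpr[j]

# Confusion counts at that threshold give PPV and NPV
pred = (y_prob >= thresholds[j]).astype(int)
tp = np.sum((pred == 1) & (y_true == 1))
fp = np.sum((pred == 1) & (y_true == 0))
tn = np.sum((pred == 0) & (y_true == 0))
fn = np.sum((pred == 0) & (y_true == 1))
ppv, npv = tp / (tp + fp), tn / (tn + fn)

# Calibration: observed event rate vs. mean predicted probability per bin
prob_obs, prob_pred = calibration_curve(y_true, y_prob, n_bins=5)
```

Plotting `prob_pred` against `prob_obs` and comparing with the diagonal yields the calibration curves referred to in the Results.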

Results
We included 795 chest pain patients in this study, of whom 247 (31%) had MACE within 30 days of presentation to the ED. Table 3 presents the baseline characteristics of the patient cohort. Patients with MACE were older (median age 61 years vs. 59 years, p = 0.035) and more likely to be male (76.1% vs. 64.6%, p = 0.002). History of diabetes, current smoking status, and pathological ECG changes such as ST elevation, ST depression, T wave inversion, pathological Q waves, and QTc prolongation were significantly more prevalent in patients with the primary outcome. Troponin-T and creatine kinase-MB levels were also significantly elevated in patients with the primary outcome. There was no statistically significant difference in patient ethnicity between the MACE and non-MACE groups.

Figure 1a depicts the PCA-based predictive performance versus the threshold P̂ (for the preselection of variables), and Fig. 1b shows the number of preselected variables versus the threshold P̂. The predictive performance peaked at P̂ = 0.02, where a total of 30 variables were preselected: gender, diastolic blood pressure, pain score, ST-elevation, ST-depression, T-wave inversion, Q wave, the cardiac history, ECG, and risk factor components of the HEART score, troponin, HRV RMSSD, HRV NN50, HRV pNN50, HRV HF power, HRV Poincaré SD1, HR₂V RMSSD, HR₂V NN50, HR₂V pNN50, HR₂V HF power, HR₂V Poincaré SD1, HR₂V₁ RMSSD, HR₂V₁ NN50, HR₂V₁ HF power, HR₂V₁ Poincaré SD1, HR₃V₁ RMSSD, HR₃V₁ HF power, HR₃V₁ Poincaré SD1, HR₃V₂ RMSSD, and HR₃V₂ Poincaré SD1. These were used as inputs to all dimensionality reduction algorithms, whose outputs were linear or nonlinear combinations of these 30 variables. Figure 3 shows the ROC curves of the eight dimensionality reduction algorithms, the stepwise logistic regression [28], and three clinical scores.
All eight dimensionality reduction methods performed comparably with the stepwise variable selection, and MDS achieved the highest AUC of 0.901. Table 4 presents the ROC analysis results of all 12 methods/scores, where sensitivity, specificity, PPV, and NPV are reported with 95% confidence intervals (CIs); note that the performance of the stepwise model in this paper was slightly different from that reported in [28] due to the choice of cross-validation scheme, i.e., 5-fold (AUC of 0.887) versus leave-one-out (AUC of 0.888).

Figure 4 presents the calibration curves of predictions by all methods/scores. The stepwise model and seven dimensionality reduction models (PCA, KPCA, LSA, GRP, SRP, MDS, and Isomap) showed reasonable model calibration, in which their curves fluctuated along the diagonal line, meaning these models only slightly overestimated or underestimated the predicted probability of 30-day MACE. The LLE model was unable to achieve good calibration. In comparison, all three clinical scores (HEART, TIMI, and GRACE) generally underpredicted the probability of 30-day MACE.

Figure 5 shows the ROC curves of the prediction models without using cardiac troponin. At feature dimensions of 13, 21, 13, 29, 24, 17, 18, and 18, the highest AUC values of PCA, KPCA, LSA, GRP, SRP, MDS, Isomap, and LLE were 0.852, 0.852, 0.852, 0.852, 0.851, 0.852, 0.845, and 0.849, respectively. The stepwise model without troponin yielded an AUC of 0.834, compared to 0.887 with troponin. All prediction models outperformed both the TIMI and GRACE scores while achieving comparable results with the HEART score.

Discussion
In this study, we showed that machine learning dimensionality reduction yielded only marginal, non-significant improvements compared with the stepwise model in predicting the risk of 30-day MACE among chest pain patients in the ED. This corroborates similar observations in the literature.

High-dimensional data suffer from the curse of dimensionality, which refers to the exponentially increasing sparsity of data and the sample size required to estimate a function to a given accuracy as dimensionality increases [54]. Dimensionality reduction has successfully mitigated the curse of dimensionality in the analysis of high-dimensional data in various domains such as computational biology and bioinformatics [31,32]. However, clinical predictive modeling typically considers relatively few features, limiting the effects of the curse of dimensionality. This may account for the relatively limited benefit of dimensionality reduction in our analysis. Additionally, with performance only comparable to the traditional stepwise model, the transparency and interpretability of machine learning dimensionality reduction models are constrained by complex algorithmic transformations of variables, creating obstacles to the adoption of such models in real-world clinical settings. In contrast, traditional biostatistical approaches like logistic regression with stepwise variable selection deliver a simple and transparent model, in which the absolute and relative importance of each variable can be easily interpreted and explained from the odds ratio. Marginal performance improvements should be weighed against these limitations in interpretability, which is an important consideration in clinical predictive modeling.

Comparing the eight dimensionality reduction algorithms, PCA and LSA use common linear algebra techniques to derive principal components in a compressed data space, while MDS, Isomap, and LLE are nonlinear, manifold learning-based dimensionality reduction methods.
As observed from our results, complex nonlinear algorithms did not show an obvious advantage over the simple PCA and LSA methods in enhancing predictive performance, yet they are more computationally complex and memory-intensive [39]. In applications of clinical predictive modeling, n (the number of patients) is usually larger than D̂ (the number of variables); in our study, n is 795 and D̂ is 29 or 30, depending on the inclusion of troponin. This suggests that linear algorithms may be preferred for their reduced computational complexity and memory usage while retaining comparable performance. Another observation in this study was that the impact of preselection (as shown in Fig. 1) on predictive performance was more substantial than that of dimensionality reduction, indicating the importance of choosing statistically significant candidate variables.

Table 4 Comparison of performance of the HRnV models (based on 5-fold cross-validation), HEART, TIMI, and GRACE scores in predicting 30-day major adverse cardiac events (MACE)
Our study also reiterates the value of HRnV-based prediction models for chest pain risk stratification. Among chest pain risk stratification tools in the ED, clinical scores like HEART, TIMI, and GRACE are currently the most widely adopted and validated [55,56]. However, a common barrier to quick risk prediction using these traditional clinical scores is the requirement of cardiac troponin, which can take hours to obtain. To address these difficulties, machine learning-based predictive models that integrate HRV measures and clinical parameters have been proposed [17,22,25,26], including our development of HRnV, a novel alternative measure to HRV that has shown promising results in predicting 30-day MACE [28]; this HRnV model is the stepwise model in this paper. Both the dimensionality reduction-based predictive models and the stepwise model with troponin showed performance superior to the HEART, TIMI, and GRACE scores. When troponin was not used, several dimensionality reduction-based models such as PCA, KPCA, and MDS still yielded marginally better performance than the original HEART score, while benefiting from generating the predictive scores in merely 5 to 6 min.
Additionally, Table 4 shows that all HRnV-based predictive models had higher specificities than the HEART score, while all HRnV-based models except Isomap also improved on the already high sensitivity of the HEART score [21,57]. The specificities of KPCA, Isomap, and MDS were significantly higher by an absolute value of almost 10%. Substantial improvements to the specificity of MACE predictive models may reduce unnecessary admissions and thus minimize costs and resource usage [5]. This is particularly relevant in low-resource settings, for example during the coronavirus disease 2019 (COVID-19) pandemic, where novel methods in resource allocation and risk stratification could alleviate the strain on healthcare resources [58].
There remains a need for further investigation into methods that utilize information from the full set of HRV and HRnV variables. From 174 variables in the initial data set, dimensionality reduction performed the best with a preselection of 30 variables, of which 19 were HRV and HRnV parameters. That is, the majority of the newly constructed HRnV parameters were removed based on the strict significance threshold of p < 0.02 on univariable analysis. Therefore, novel HRnV measures were not fully used in prediction models of 30-day MACE, leaving room for further investigation of alternative ways of using them. Moving forward, it may be valuable to develop and evaluate deep learning frameworks [59] to synthesize novel low-dimensional representations of multidimensional information. Alternatively, building point-based, interpretable risk scores [60] can also be beneficial to implementation and adoption in real-world clinical settings, since designing inherently interpretable models is more favorable than explaining black box models [61].
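As a rough illustration of the point-based direction, one simple recipe (hypothetical, not the method of [60]) scales logistic regression coefficients so the smallest-magnitude one maps to roughly one point and rounds to integers, trading a little discrimination for a score clinicians can tally by hand.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy cohort: three binary risk factors with known effect sizes
rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(400, 3)).astype(float)
logit = -2.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2]
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
coefs = model.coef_[0]

# Integer points: divide by the smallest coefficient magnitude, then round
points = np.round(coefs / np.abs(coefs).min()).astype(int)

def risk_points(x):
    """Total point score for one patient's binary risk-factor vector."""
    return int(points @ np.asarray(x))
```

A risk threshold on the total points then plays the role of the probability cutoff, in the same spirit as the HEART score's component points.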
We acknowledge the following limitations of this study. First, the clinical application (i.e., risk stratification of ED chest pain patients) was only one example of clinical predictive modeling; thus our conclusion on the effectiveness of machine learning dimensionality reduction algorithms may not be generalizable to other applications, particularly those with a larger number of variables. Second, only eight dimensionality reduction algorithms were investigated, while many other methods are available. Third, given the small sample size, we were unable to determine the threshold P̂ and build predictive models with a separate training set; this also limited the stability check [62] for both the logistic regression and machine learning models. Last, we did not build a workable predictive model for risk stratification of ED chest pain patients, although several models built in this study showed promising results compared to existing clinical scores. We aim to conduct further investigations.

Conclusions
In this study, we found that machine learning dimensionality reduction models showed marginal value in improving the prediction of 30-day MACE for ED chest pain patients. Being black box models, they are further constrained in clinical practice by low interpretability. In contrast, the traditional stepwise prediction model offered simplicity and transparency, making it feasible for clinical use. To fully utilize the available information in building high-performing predictive models, we suggest additional investigations such as exploring deep representations of the input variables and creating interpretable machine learning models to facilitate real-world clinical implementation.