Using Machine Learning to Improve Survival Prediction After Heart
Transplantation
Abstract
Background: This study investigates the use of modern machine learning
(ML) techniques to improve prediction of survival after orthotopic heart
transplantation (OHT). Methods: Retrospective study of adult patients
undergoing primary, isolated OHT between 2000 and 2019, as identified in the
United Network for Organ Sharing (UNOS) registry. The primary outcome
was one-year post-transplant survival. Patients were randomly divided
into training (80%) and validation (20%) sets. Dimensionality
reduction and data re-sampling were employed during training. Multiple
machine learning algorithms were combined into a final ensemble ML
model. Discriminatory capability was assessed using the area under the
receiver operating characteristic curve (AUROC), the net reclassification
index (NRI), and decision curve analysis (DCA).
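For illustration, a minimal sketch of such a pipeline, assuming
scikit-learn-style estimators and synthetic stand-in data, is given below;
the feature set, re-sampling strategy, and hyperparameters are illustrative
assumptions, not those of the study.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import (AdaBoostClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

# Synthetic stand-in for the UNOS-derived feature matrix and 1-year
# mortality label (~11% positive class, mirroring the cohort).
X, y = make_classification(n_samples=5000, n_features=40, weights=[0.89],
                           random_state=0)

# 80/20 train/validation split, stratified on the outcome.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Data re-sampling: up-sample the minority (mortality) class in the
# training set only; the study's exact re-sampling method is an assumption.
idx_pos = np.flatnonzero(y_tr == 1)
idx_neg = np.flatnonzero(y_tr == 0)
idx_up = resample(idx_pos, replace=True, n_samples=len(idx_neg),
                  random_state=0)
idx_all = np.concatenate([idx_neg, idx_up])
X_bal, y_bal = X_tr[idx_all], y_tr[idx_all]

# Dimensionality reduction (PCA here) feeding a soft-voting ensemble of
# the base learners named in the abstract.
ensemble = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("nn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                 random_state=0)),
            ("ada", AdaBoostClassifier(random_state=0)),
        ],
        voting="soft",  # average predicted probabilities
    ),
)
ensemble.fit(X_bal, y_bal)

# Discrimination on the held-out validation set.
proba = ensemble.predict_proba(X_va)[:, 1]
print(f"Validation AUROC: {roc_auc_score(y_va, proba):.3f}")

Soft voting averages the base learners' predicted probabilities, which is
one common way to combine heterogeneous classifiers into a single ensemble.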
Results: A total of 33,657 OHT patients were evaluated. One-year mortality
was 11% (n=3,738). In the validation cohort, the AUROC of logistic
regression alone was 0.649 (95% CI 0.628-0.670), compared to 0.691 (95% CI
0.671-0.711) with random forest, 0.691 (95% CI 0.671-0.712) with a deep
neural network, and 0.653 (95% CI 0.632-0.674) with AdaBoost. A final
ensemble ML model demonstrated the greatest improvement in AUROC: 0.764
(95% CI 0.745-0.782) (p<0.001). Compared with logistic regression, the
ensemble ML model improved predictive performance by 72.9% ± 3.8%
(p<0.001) as assessed by the NRI.
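The NRI above is on the continuous (category-free) scale, on which the
event and non-event components can each contribute up to 100%. A
hypothetical helper illustrating the standard category-free calculation
follows; it is not code from the study.

import numpy as np

def continuous_nri(y, p_base, p_new):
    """Category-free NRI of a new model versus a baseline model.

    y: binary outcomes; p_base, p_new: predicted probabilities.
    Positive values indicate improved reclassification.
    """
    y = np.asarray(y, dtype=bool)
    up = np.asarray(p_new) > np.asarray(p_base)
    down = np.asarray(p_new) < np.asarray(p_base)
    nri_events = up[y].mean() - down[y].mean()
    nri_nonevents = down[~y].mean() - up[~y].mean()
    return nri_events + nri_nonevents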
DCA showed that the final ensemble model improved risk prediction across
the entire spectrum of predicted risk compared with all other models
(p<0.001).
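Decision curve analysis compares models by net benefit,
NB(t) = TP/n - (FP/n) * t/(1 - t), across a range of threshold
probabilities t; a model that dominates at every threshold improves risk
prediction across the whole risk spectrum. A minimal illustrative
implementation of this standard formula (not the study's code) is:

import numpy as np

def net_benefit(y, proba, thresholds):
    """Net benefit of treating patients with predicted risk >= t."""
    y = np.asarray(y, dtype=bool)
    proba = np.asarray(proba)
    n = y.size
    nb = []
    for t in thresholds:
        pred = proba >= t
        tp = np.count_nonzero(pred & y)
        fp = np.count_nonzero(pred & ~y)
        nb.append(tp / n - fp / n * t / (1.0 - t))
    return np.array(nb)

# e.g., compare models over a plausible risk range:
# net_benefit(y_va, proba, np.linspace(0.05, 0.50, 10))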
Conclusions: Modern ML techniques can improve risk prediction in OHT
compared with traditional approaches. This may have important implications
for patient selection, programmatic evaluation, allocation policy, and
patient counseling and prognostication.